Student Council: Students' Perspective on AI and the Future of Learning

Daniel Emmerson 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world. 

Today's episode is very special in that we have four incredible guests with us. We have Conrado, Kerem, Feli and Vicky. Welcome everybody to Foundational Impact. It's wonderful to have all of you with us today. For the first time, we have more than one guest on a podcast episode, which I'm very excited about. I'm particularly excited to have this group today with us because we have representatives from our Good Future Foundation Student Council. And so that's where I'd like to start. 

Feli, if I could come to you first of all and just ask, what is the Student Council at Good Future Foundation and why do you think it's important?

Felicitas 01:16

Hello, Daniel. Thank you very much for the question. The Student Council is a group of young people from all around the world whose main purpose is to contribute positively to the use of artificial intelligence in schools everywhere. And I think it is very important because we are a group from different cultures, with different perspectives, and we all have a point and a say in this central issue of how we implement artificial intelligence in schools.

Daniel Emmerson 01:51

Kerem, if I can come to you, I'm keen to understand a little bit more about why it's important for students currently at school, for the teachers at those schools to have competency and confidence in working with artificial intelligence. Why do you think that's so important today?

Kerem 02:10

That's a great question. And the importance of using AI in classrooms comes mainly from the opportunities it provides. When you have the ability to control the use of AI, you can access a huge amount of information on the Internet with a single prompt that you type into the program. In that sense, it allows people from all around the world to access this enormous database that humans have been building up over the years.

Daniel Emmerson 02:46

Is that a little bit scary, Kerem, in terms of where we are at the moment? It seems we've got here rather quickly.

Kerem 02:53

I mean, it seems scary, but that has been the way the world works. When computers first arrived, they were also scary. But over time we got used to them, and now they are an essential part of our lives. So I think the use of AI is going to evolve in the same direction.

Daniel Emmerson 03:14

Thanks, Kerem. Conra, if I can come to you, I'm wondering about that use of artificial intelligence from a student's perspective. Could you tell us about some of the ways that you're using AI to study?

Conrado 03:28

Right. So I think that AI can be a very useful way to, for example, create flashcards, and to study in a way that is more interactive rather than alone. It's a really great way to learn at your own pace while not really being alone. And with regard to studying, I've discovered that AI is also a really good tool for learning languages, for example: asking it to give you exercises, or even talking to an AI learning tool about any subject you want and receiving a great response, both speaking and listening.

Daniel Emmerson 04:13

Vicky, do you think there are some risks here with what Conra is saying about how students might be using AI?

Victoria 04:20

Yeah, for sure. I believe AI is such a powerful tool, and I don't think all students are equipped with the right education or the right information on how to navigate it. So there's a lot of ignorance around it. And I believe some students fall into an over-reliance on AI, using ChatGPT, for example, as their main source, since it's so accessible to them. They can just rely on it without even thinking twice about what the chat is responding. And there's also the issue of isolating yourself and not looking for that help from other students, classmates or teachers, like people used to do in the past.

Daniel Emmerson 05:06

Can you tell me a bit more, Vicky, maybe about that over-dependence? How can that happen? And is that something you can see happening with classmates?

Victoria 05:14

Yeah, I've seen it around a lot. That's why I'm extremely concerned about it in students. I believe it's really easy to fall into reliance on AI just because it's easy and it simplifies a lot of tasks that are repetitive or time-consuming for us. It's tempting to hand those tasks to the chat and have them done in five minutes, something that could have taken us hours to solve. For example, the other day I was talking to one of my best friends. He is studying mechanical engineering and has a subject that is basically about programming. Don't ask me about it, because I do not know. But he was telling me how he uses ChatGPT to such an extent that he has started to doubt how much programming he can actually do by himself. He says that if in the future he has a job that actually involves programming, he doesn't think he's well equipped to deal with that situation.

Daniel Emmerson 06:24

Feli, thinking about Vicky's example just there, what do you think would be best practice for a teacher who was aware of how much their students were using these tools?

Felicitas 06:36

Well, I really understand Vicky's friend's experience, because it happened to me. I'm studying business administration and I have lots of subjects related to programming. This semester I had to write different programs, and when I was working with friends we had several doubts, so we looked to ChatGPT, and many times the solutions the chat gave us were not the ones we should use. They were even more complicated. So I believe the best practice for teachers should not be to eliminate ChatGPT, because there were some aspects in which this artificial intelligence tool was useful. I think the main challenge for teachers is to integrate this tool with exams or with practice. But I don't think it should be eliminated from our curriculum. I think we should learn how to use these tools properly, and when to rely and not rely on them.

Daniel Emmerson 07:38

That's really interesting. So you're saying that you used GPT as part of your studies and then you went back to check the output and the output didn't match your expectations?

Felicitas 07:50

Yes.

Daniel Emmerson 07:51

It's a bit of a strange question, but how did you feel going through that process when you were challenging the output of the AI?

Felicitas 07:58

Many times we felt like we were doing twice the work, because we thought that by asking ChatGPT we were in some way gaining time. But it was quite the opposite: when we went to check whether the code was correct, it wasn't. It used structures that were not the ones we should use. In terms of styles, it really helped us, because we did not lose time looking for specific styles. But when it came to the structure of the code, it was not the best solution.

Daniel Emmerson 08:40

Kerem, just to come back to you, and adding to Feli's point, I suppose checking the output of an AI is something that we always recommend that teachers and students alike do. But that can sometimes feel quite counterintuitive, because the output an AI gives you, if it's text based, feels quite authoritative. It feels like it knows what it's giving you, right? This information is correct and this is how you should act accordingly. Have you ever felt it's difficult to challenge what the AI is giving you in terms of information? And how might teachers better structure their lessons or their instruction around best practice to give students the confidence to challenge that output?

Kerem 09:31

So I recently had an experience with ChatGPT that matches your question exactly. I don't want to get lost in the details, but basically I asked ChatGPT whether statement A or statement B was correct. It responded that statement A was correct because blah blah blah. But it didn't feel quite right. So I said, well, don't you think that, in this sense, statement A is inaccurate? And then ChatGPT said, well, yeah, you might be right, then statement B is correct. But now that ChatGPT had changed its mind because of my prompt, I still wasn't sure whether it changed its mind because that was the correct thing to do or just to please the user. So in that sense, even challenging ChatGPT can create some questionable situations. Therefore, even though we can use ChatGPT to help with homework, exams, assessments and so on, students should always double-check that information, which also showcases their knowledge in the area.

Daniel Emmerson 10:44

And when that happened to you, when it gave you the alternative response, did you end up using what it gave you, or did you scrap ChatGPT in this case and use something else?

Kerem 10:55

I just reached out to my peers and asked, what did you do in that case? And I took their advice on it instead of ChatGPT's.

Daniel Emmerson 11:03

That's interesting. So in some cases we're almost seeing a move away from the technology, back to the peer group. Vicky, if I can come to you next: do you have any experiences with particular AI tools that aren't just text, where you're also looking at music or video or image generation?

Victoria 11:27

I don't really have a lot of experience with that. I've had some friends on our WhatsApp group play a lot with generative AI that creates images. I don't usually participate in that, but it's wonderful, and sometimes it blows my mind to see how they give a simple but crazy prompt, something that couldn't even exist in real life, and it generates that image. I've also seen a lot of images floating around on social media that are generated by AI, and how they can actually lead to misinformation and be extremely harmful for some people. People who are not educated about it can believe they are true. So the harm they can do is huge.

Daniel Emmerson 12:21

Can we dig a little deeper into that? Why do you think that they're so harmful potentially?

Victoria 12:27

Well, right now I'm thinking of an example. A few months ago, I don't know if you were aware, but a lot of haters started creating images of Taylor Swift in bad settings, like abusive settings. They included explicit images of her, for example being assaulted, images that are really harmful. And we're talking about someone whose fan base is primarily made up of young kids. If those images can get to them, they can get to anyone. So we're talking about a tool that is extremely powerful, and I don't know if we are all really educated about the impact it can have and the consequences it can lead to.

Daniel Emmerson 13:15

That's a great point, Vicky. Conra, if I can come to you. Thinking about the amount of access that students might have, if not to image-generating AI tools then to the products of those tools, as Vicky mentioned with the Taylor Swift images, is there a way students might be better educated on what's real and what's generated by an AI? And is that important, do you think?

Conrado 13:45

Well, clearly that is hugely important with regard to AI, as Vicky mentioned. It could be really harmful, not only for kids looking at those images, but also because it could enable new forms of bullying at schools, where someone creates fake photos of a person who isn't even aware of it. At the same time, I think it's really difficult to know the truth, or even to trace the source; I don't think we've advanced very much in that respect. Fact checking tends to be really difficult with false information and misinformation. So the message is: with social media, or whatever you look at on the Internet, be really cautious and don't believe everything you see. But what to believe and what not to believe is a kind of philosophical question as well, and in some ways I don't think it has a solution. It can be quite harmful when people believe something simply because they believed it in the past, and that can generate more harm. So it's about being cautious and not taking something as the truth right away. I don't think there's currently a way to fully separate what is real from what is not.

Daniel Emmerson 15:43

So exercising an additional degree of investigation, maybe, or fact-checking, as you were saying, Conrado, as to where an image or a piece of text might have come from, which is of course very important for anyone looking at new content, regardless of the source. Feli, if I can come to you: based on what Conra and Vicky have just told us about image-generating AI content, but also thinking about other forms of misinformation or disinformation, what would you say is best practice for teachers when it comes to explaining this to students?

Felicitas 16:26

Well, that's a great question, because I think no one has yet discovered the most effective way to implement this. But a good way could be to let students experience both situations. It happened to me, and I always mention it: in one subject we had to write a program, and our teacher was very firm about not using AI. In one class he said, okay, we have this problem, and we'll divide the class into three groups. One group would use ChatGPT to look for the solution, another would work it out themselves, and the third group would have to figure out which code was written by students and which by ChatGPT. I think it was a great exercise, because we all saw that the solution ChatGPT gave us was not the best one. At least for me it was a good experience, because it made me realize that we should not always trust ChatGPT. So I think that would be a great practice: letting students experience both situations, but also not making us students feel like we were doing something wrong. Sometimes teachers create this sense of fear, that if you use ChatGPT you will be penalized. I think the approach should be different, oriented towards learning from the experience as well.

Daniel Emmerson 17:57

Where do you think that fear comes from most?

Felicitas 18:00

Probably because students, and I include myself sometimes, do not realize the impact that using AI in our everyday activities will have in the long term. That's why teachers always make us feel that we are doing something wrong, or that we will be punished if we do it, so that we don't even try it. But I don't think that is the best way for us to really understand why it is not the best practice.

Daniel Emmerson 18:32

Kerem, as someone with an interest in AI in education, how important do you think it is for students to have access to AI tools in preparation for the future of work?

Kerem 18:46

Although I think education can continue without the presence of AI, I would say that having AI tools accessible makes it easier for a student to reach readily available information, so that they can take it and build upon it. So in terms of creating a foundation, using AI in classroom environments is important.

Daniel Emmerson 19:17

Conra, if I could bring you in on this, is that something you'd agree with? Would you say that use of AI in school, for example, is good preparation for the future of work? Or is it something that can be left until university?

Conrado 19:31

Absolutely. I think students will come into contact with AI whether we have it at school or not, and learning how to use it properly in a secure environment is a way to prevent harm in the future as well. And with regard to the question you asked Kerem, I also think it's important to incorporate AI at a young age, because AI is going to change, and is actually already changing, many of the ways jobs are done. So the younger students have this exposure, the better placed they will be to look at things in future jobs as well.

Daniel Emmerson 20:14

So taking that into account, Vicky, are there any tips that you might want to give any teachers that are listening to this?

Victoria 20:22

I would personally tell them not to be afraid. I know change can be scary at times, but the faster they adapt, the more that willingness to adapt will pay off for the future of education. I believe teachers that do not want anything to do with AI in any shape or form, and we're not only talking about ChatGPT, will eventually be left behind. Education is developing and advancing: the education we had when we were kids is not the same that kids nowadays are receiving, and I think that's great. That's where I believe the greatness of education really lies, in that it's changing all the time. Teachers that want to apply the same method they've been using for 20 years in a row, I'm sorry, but that's just not the way to teach nowadays.

Daniel Emmerson 21:20

Let's stay with you, Vicky, just for a second, because we're planning, as a student council, a way of increasing student voice in this conversation, and we're looking at developing a conference around some of these subjects that we've been talking about just now. Are you able to tell us a little bit about your ideas for this and why you think it's important?

Victoria 21:42

Well, my ideas for the conference, I believe, are exciting. I hope listeners share the same perspective. I would love to delve into the psychological side of AI: how it can impact students in terms of the isolation I talked about previously. And also a question that I believe triggers a lot of teachers: can AI replace teachers? I want to debunk that theory from the psychological perspective I've learned at university, showing how AI can never really replace the human mind, how amazing the human mind is, and how we can look at AI as a tool rather than as something harmful.

Daniel Emmerson 22:27

Let's maybe take that question. Kerem, what do you think? Could AI replace teachers?

Kerem 22:32

Oh, absolutely not. I don't think that is ever possible, at least with the current level of AI, because being a teacher is much more than just teaching material, especially at younger ages. Teachers are there for children to get to know the world around them, to form connections with nature and with their peers. They basically teach people how to socialise, and I don't think any artificial object is capable of doing that. It might teach them how to add, subtract, multiply and divide, that kind of thing, but it can't teach a child how to share their toys when needed. So in that sense, teachers will always be a part of the classroom environment.

Daniel Emmerson 23:23

Conra, same question to you.

Conrado 23:25

Well, certainly not. Though, as Vicky said previously, teachers may be left behind if they do not adapt to AI tools. I think, for example, of the education our parents had and the education we had, and it was really different: the Internet emerged during those years, and teachers had to become not only the person who gives the absolute truth to students, the source of learning, but also a companion in each student's personal learning. I think the teacher's role should tend towards that in the future. But we should all be aware that while teachers can adapt, it is difficult to adapt in a context where it's really hard to keep up, with AI getting faster and faster. So it should not be thought of as teachers only adapting, but as all of us working together towards the future that is coming.

Daniel Emmerson 24:42

Feli, let's finish with you and the same question. However, if you're of the view, as with everybody else here, that AI will not replace teachers, what is the one thing teachers can do to increase their competency and their confidence when it comes to using AI for teaching and learning?

Felicitas 25:02

I agree with everyone else who participated and said that teachers will not be replaced by AI. But I also think it's a great question to think about, because, bringing back what Conra said, nowadays we work and study with the Internet, for example, while our parents didn't. And nowadays many students can study or prepare for an international exam on digital platforms where there's no teacher teaching; you're actually interacting with different modules, without that social interaction with a teacher. So I hope AI does not replace teachers, but I also hope teachers can learn how to make use of this tool, because I believe it is a very powerful tool which can foster the process of education and learning. As Vicky said, I would encourage teachers not to be afraid of implementing this tool and to learn about it. And just as they don't know how to use it, students don't know either. So it is a process in which both parties should be open and resilient to failure, but also to success.

Daniel Emmerson 26:22

I can see Vicky wants to comment as well in these last few seconds. Vicky, do go ahead.

Victoria 26:26

Yeah, I just wanted to join Feli in what she was saying. I believe this is so exciting because it invites us to rethink education and think about where the value of education really lies. Is education only about shoving knowledge into students' minds, or is it the human experience that comes along with it? I believe it is an opportunity for teachers to actually get creative and not set simple tasks that ChatGPT can solve in a few seconds, because a teacher like that resembles what a machine can do. It gives you a prompt, it's repetitive, he or she has been doing it for a long time, it hasn't changed or adapted to the times, and you can easily solve it, but you don't get anything from that experience. You just do it mechanically and repetitively, you follow a set program and it works, but you don't get anything from it. I graduated from high school two years ago, and I can honestly say that what I remember from my high school experience is not really the content, and I'm not really proud of saying that. I don't remember what I saw in my history class, you know. But what I do remember is my interactions with teachers who left a mark on my education journey, and with my classmates: the human experience.

Daniel Emmerson 27:48

What a wonderful way to end today's conversation. Our incredible Student Council members, thank you so, so very much for being with us on Foundational Impact. We look forward to hearing from you again very, very soon. 

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.

About this Episode

Student Council: Students' Perspective on AI and the Future of Learning

When generative AI first appeared on the scene, many educators had concerns about how students might misuse this technology. A lot of discussion focused on plagiarism. However, after hearing from the Good Future Foundation Student Council, you might be pleasantly surprised to discover that some students are actually much more critical and reflective about how generative AI can help their learning and influence their social and emotional wellbeing. In this episode, four members of our Student Council, Conrado, Kerem, Felicitas and Victoria, who are between 17 and 20 years old, share their personal experiences and observations about using generative AI, both for themselves and their peers. They also talk about why it’s so crucial for teachers to confront and familiarize themselves with this new technology.

Daniel Emmerson

Executive Director, Good Future Foundation

Conrado

Student Council Member

Felicitas

Student Council Member

Kerem

Student Council Member

Victoria

Student Council Member

Related Episodes

February 17, 2025

Liz Robinson: Leading Through the AI Unknown for Our Students

In this episode, Liz opens up about her path and reflects on her own "conscious incompetence" with AI - that pivotal moment when she understood that if she, as a leader of a forward-thinking trust, feels overwhelmed by AI's implications, many other school leaders must feel the same. Rather than shying away from this challenge, she chose to lean in, launching an exciting new initiative to help school leaders navigate the AI landscape.
February 3, 2025

Nurturing Students into Social Entrepreneurs

In this episode, Hult Prize CEO Lori van Dam pulls back the curtain on the global competition empowering student innovators into social entrepreneurs across 100+ countries. She believes in sustainable models that combine social good with financial viability. Lori also explores how AI is becoming a powerful ally in this space, while stressing that human creativity and cross-cultural collaboration remain at the heart of meaningful innovation.
January 20, 2025

Laura Knight: A Teacher's Journey into AI Education

From decoding languages to decoding the future of education: Laura Knight takes us on her fascinating journey from a linguist to a computer science teacher, then Director of Digital Learning, and now a consultant specialising in digital strategy in education. With two decades of classroom wisdom under her belt, Laura has witnessed firsthand how AI is reshaping education and she’s here to help make sense of it all.
January 6, 2025

Richard Culatta: Navigating AI in Education

Richard Culatta, former Government advisor, speaks about flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education. Using aviation as an illustration, he highlights the most critical tech skills that teachers need today. The CEO of ISTE and ASCD draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.
December 16, 2024

AI in Legal Education and Justice

Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI in making legal services more accessible to underserved communities. He notes that while AI works well for standardised legal matters, it faces limitations in areas requiring emotional intelligence or complex human judgment. Prof Reyes advocates for teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.
December 2, 2024

AI's Role: From Classrooms to Operating Rooms

Healthcare and technology leader Esen Tümer discusses how AI and emerging trends in technology are transforming medical settings and doctor-patient interactions. She encourages teachers not to shy away from technology, but rather understand how it’s reshaping society and prepare their students for this tech-enabled future.
November 19, 2024

AI Integration Journey of a UK Academy Trust

A forward-thinking educational trust shows what's possible when AI meets strategic implementation. From personalised learning platforms to innovative administrative solutions, Julie Carson, Director of Education at Woodland Academy Trust, reveals how they're enhancing teaching and learning across five primary schools through technology and AI to serve both classroom and operational needs.
November 4, 2024

AI Use Cases in Hong Kong Classrooms

In this conversation, Joseph Lin, an education technology consultant, discusses how some Hong Kong schools are exploring artificial intelligence and their implementation challenges. He emphasises the importance of data ownership, responsible use of AI, and the need for schools to adapt slowly to these technologies. Joseph also shares some successful AI implementation cases and how some of the AI tools may enhance creative learning experiences.
October 21, 2024

Tech, Education, and Sustainability: Rethinking Charitable Approaches

In our latest episode, we speak with Sarah Brook, Founder and CEO of the Sparkle Foundation, currently supporting 20,000 lives in Malawi. Sarah shares how education is evolving in Malawi and the role AI plays for young people and international NGOs. She also provides a candid look at the challenges facing the charity sector, drawing from her daily work at Sparkle.
October 7, 2024

Assurance and Oversight in the Age of AI

Join Rohan Light, Principal Analyst of Data Governance at Health New Zealand, as he discusses the critical need for accountability, transparency, and clear explanations of system behaviour. Discover the government's role in regulation, and the crucial importance of strong data privacy practices.
September 23, 2024

Leading Schools in an AI-Infused World

With the rapid pace of technological change, Yom Fox, the high school principal at Georgetown Day School, shares her insights on the importance of creating collaborative spaces where students and faculty learn together, and on teaching digital citizenship.
September 5, 2024

NAIS Perspectives on AI and Professional Development

Join Debra Wilson, President of the National Association of Independent Schools (NAIS), as she shares her insights on taking an incremental approach to exploring AI. Discover how to find the best solutions for your school, ensure responsible adoption at every stage, and learn about the ways AI can help tackle teacher burnout.
April 18, 2024

The Keys to a Successful Nonprofit and Preparing Students for AI and New Technologies

A discussion of the importance of preparing students for AI and new technologies, the role of the Good Future Foundation in bridging the gap between technology and education, and the potential impact of AI on the future of work.

Suzy Madigan: AI and Civil Society in the Global South

AI’s impact spans sectors globally, yet attention and voices aren’t equally distributed across impacted communities. This week, Foundational Impact presents a humanitarian perspective as Daniel Emmerson speaks with Suzy Madigan, Responsible AI Lead at CARE International, to shine a light on those often left out of the AI narrative. The heart of their discussion centers on “AI and the Global South: Exploring the Role of Civil Society in AI Decision-Making”, a recent report that Suzy co-authored with Accenture, a multinational tech company. They discuss how critical challenges, including digital infrastructure gaps, data representation, and ethical frameworks, perpetuate existing inequalities. Increasing civil society participation in AI governance has become more important than ever to ensure inclusive and ethical AI development.