Transcription
Daniel Emmerson 0:02
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non-profit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a non-profit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world. This podcast series sets out to explore the trials and tribulations of building a nonprofit from the ground up, while also investigating the changing world of technology in life, learning and work.
Joseph Lin is an educational technology consultant based in Hong Kong who also designs STEAM and entrepreneurship curricula for schools and companies. Joseph has been tinkering with generative AI models since long before ChatGPT, and in the past year he has led dozens of AI pilot programs for students from a variety of backgrounds. Not only that, but Joseph sits on our advisory council at Good Future Foundation, where he helps steer the ship, particularly as far as AI tools and tech are concerned.
Joseph, it's wonderful to have you with us. Thanks ever so much for your time and for being here. There's so much that I want to ask you about your work and your expertise with artificial intelligence in education. Let's kick off with a bit of background, because you have quite a fascinating background. Actually, when it comes to a lot of the guests that we've had on the podcast series so far, there are very strong links with education and technology. Yours spans literature, business, and, I think, performing arts. Can you tell us a little bit about how this background has led you to where you are today and, perhaps most importantly, how that helps with the work you're doing around AI in schools?
Joseph Lin 1:57
Right. So for the longest time, when I was a teenager in high school, all I thought was, okay, I'm going to be a professor and I'm going to teach English lit and that's it. Until I actually graduated. And then I taught at the Chinese University English department for a while. And I thought, you know what, universities move kind of slowly and there are things I want to try. I want to pivot very quickly, and sometimes within a very large organisation it's very difficult to move quickly. So after that I thought, all right, you know what, maybe private education is the way to go, because that is where you can move very quickly, where you can have the same course week to week and then you can iterate every time. These are all things that you cannot do within a more hierarchical organisation. So I pivoted out to do private education and I started with language arts, with performing arts, with entrepreneurship, with design thinking, all those things. And then when the AI wave came, initially I was just having a discussion with a friend who teaches at the University of Science and Technology in Hong Kong. He is an engineering professor, and he was saying, all of a sudden all our students' essays got really, really good. Like all the grammatical errors had disappeared. This was December 2022. ChatGPT had just launched. And then of course, everybody knew what was going on, right? And so they thought, okay, oh no, this is going to be trouble. And this was still GPT-3.5. And we could already see the impact, where a lot of the grade boundaries that you had to use to separate different students suddenly became invalid. And then, within one semester, that department decided to abolish all essay tasks, and it's all presentations from now on. And so that was about it. I was very lucky. It was about a week since ChatGPT launched, and then someone was talking to me about this.
And then that got me interested is like, hey, what is this? And so I just dove into it and then tried to learn as much about this as I could. And it helps that I would consider myself quite techie to begin with.
Daniel Emmerson 4:18
So how did you, how did you do that? How did you like, dive into it? Because that's a big thing for a lot of people, right?
Joseph Lin 4:24
Honestly, the information about AI is all over the place. I wake up, I scroll tech Twitter to see what I had missed. And then I also watch YouTube videos, and I follow the big labs. And actually the big tech companies are being very transparent about what they're doing and what they're aiming to do. And I think this makes sense for them, because they want to promote their wares and also to assure people that this is what's coming next and we have to be ready for it. And so you keep up with the official sources, the unofficial sources, keep up with the commentary, and then just keep playing with it. The one thing that I feel slightly sorry about is that AI currently is still an expensive thing to play with.
Daniel Emmerson 5:12
Sure.
Joseph Lin 5:12
I'm fortunate enough that my company is happy to subsidise a lot of the subscription costs, but currently we are paying like US$700-1,000 per month just in AI subscriptions. And that is what enables us to try a lot of these new things, be among the first, so that we can think of pedagogical use cases and basically be ready when people start talking about it on a wider level.
Daniel Emmerson 5:40
And what does that look like? How do you choose and how do you filter which subscriptions you're going to go for?
Joseph Lin 5:45
So this is actually a big pain point in using AI in education. Obviously a lot of these services are catered towards individual or team-based usage, and it doesn't really work if you try to give every student access to the same AI. So for example, if you want to do voice generation, perhaps you want to teach them to do a drama task, and sometimes students don't know how to pronounce the words. So why not put the script that you have written into the AI? The AI will read it out for you, and then you can rehearse like that. But you can't pay for everybody's own subscription. So what we do is we have a few subscriptions. We subscribe to the basic tier, the cheapest tier we can get away with, we have four or five of them, and then we let each group share an account. There are a lot of these little hacks that you have to go into to make sure that students actually use it. And actually, for now, I would still say access is a major logistical pain point in using AI reliably in the classroom. There are students, there are teachers I talk to, who said they really wanted to try letting kids use it, especially at the higher forms. But by the time you figure out whether the school has blocked the service, whether the students can sign in with Google, whether there are rate limits to worry about, right, in terms of classroom management, that's half the class gone, so you actually can't do it.
Daniel Emmerson 7:13
So what's the best way to maybe address that as a school? You know that the technology is out there. So you've taken this deep dive into exploring what's possible with regards to AI tools for teaching and learning. You've found one or two ideas that you want to play with, and you know that they might fit the needs of some of your students. But you can't scale it because of the levels of subscription, or there are issues around data privacy or safeguarding. So how are schools overcoming these challenges, or are they not overcoming them? What's the situation here?
Joseph Lin 7:47
Among the schools I've talked to, a lot of them are just waiting for someone else to take the first step so that they can say, you know what, they did it, it worked well, I'm just following in their footsteps. And then for schools who are really jumping in, I have seen schools where the principal just blanket subsidises AI subscriptions for the teachers: if you want to play with AI, I will give you X amount of money per year to spend as you see fit, just make sure at the end you come up with some pedagogical usage. So that's based on an honour system. Some schools do that, and some others, if you are lucky enough to have a member of staff who is techie enough, perhaps from a computer science background, then there are actually some very well-polished open source software packages you can use. LibreChat is the system we use ourselves. It's very well supported, with lots of discussion and lots of documentation, and it allows you to set up your own internal ChatGPT, so to speak, where each student and each teacher can have their own account. And then the most important part is you have assurance that you know what happens to the data. Because with ChatGPT, unless you manually toggle "I don't want to contribute my data", or with any of the other platforms, the platform companies are getting your school's data, and when we deal with student information, that is not okay. And sometimes even if they say they won't check your data, where is it stored, right? How is it encrypted? There are all sorts of things that a tech teacher must think about, and so the easiest way is just to run it yourself. And fortunately, fortunately, people have made it so easy. To start, all you need is to download a package; there are scripts that you can run in one click. And then there's the API key that you just get from OpenAI.
With those two ingredients you can run your school's own ChatGPT system, and then you can open as many accounts as you like and rate-limit them as much as you want.
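For a concrete picture of those "two ingredients", here is a sketch of a self-hosted LibreChat deployment. The repository URL, `.env` variable name, and Docker workflow follow LibreChat's published setup at time of writing, but check the current documentation before relying on any of them.

```shell
# Sketch of the "run your own ChatGPT" setup Joseph describes,
# using LibreChat's standard Docker deployment.
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat
cp .env.example .env                    # base configuration template
# Ingredient 1: the package itself (above). Ingredient 2: an API key.
echo 'OPENAI_API_KEY=sk-your-key-here' >> .env
docker compose up -d                    # starts the web UI and database
# Accounts, per-user rate limits, and data retention are then managed
# inside your own instance rather than on a vendor's servers.
```

The point of the exercise is the last comment: once the instance is yours, account creation and rate limiting become school-level policy decisions rather than per-seat subscription costs.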
Daniel Emmerson 10:04
And is this an investment that's worth making when it comes to time and money at the moment from your perspective?
Joseph Lin 10:09
Obviously whether an investment is worth it depends on each school, right? But I think many schools are waiting for a platform where they can control what happens to the data, and where you can provide assurance that the data is managed in a way that is compliant with their local laws and regulations. I mean, ChatGPT, they do have an education product, but does it align with, for example, Hong Kong data privacy laws, and where is the documentation on that? Because on an institutional, on a school level, I need that before I can roll anything out. And so either platforms need to invest in the due diligence and convince schools that their platform is suitable, or schools have to own it. And currently I don't see a way. But the good thing about owning the data is there are much more exciting things you can do with it. For example, let's say everybody is just using their own AI platform. Teacher A prefers Claude, Teacher B prefers ChatGPT, Teacher C uses something else. The materials will still get made, but then you're missing the insight you get when everybody's chats are aggregated together. For example, a good friend of mine teaches psychology at a community college, and he created an AI chatbot that is trained on his textbook's data as well as the school schedule, administration, everything else. And so he just gave it to his students, taught them how to use it, and made sure there are safeguards in place. For example, this chatbot would refuse to write students' essays for them. That is very important. But it is able to provide answers in multiple languages, it is able to tell people when the tests will be, and then the most valuable part is the teacher can check: what have my students been asking the AI?
So you just export all that chat history, you upload it to ChatGPT again, and then you get a very high-definition, very in-depth look at which questions students are struggling with, and whether there are any things that you need to skip or repeat in class. And that is the kind of stuff that you can't do unless you own the data. So that's why I think, if at all possible, schools should try to run their own system and own their data.
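The aggregation step Joseph describes can also be done locally before (or instead of) uploading anything. Below is a minimal sketch: pull the student-side messages out of an exported chat log and count recurring topic words. The export format here is hypothetical; real exports from LibreChat or ChatGPT use different field names, so adapt accordingly.

```python
from collections import Counter
import json
import re

# Hypothetical export: a flat list of chat messages with roles.
sample_export = json.dumps([
    {"role": "user", "content": "When is the midterm test?"},
    {"role": "assistant", "content": "The midterm is in week 7."},
    {"role": "user", "content": "Can you explain operant conditioning again?"},
    {"role": "user", "content": "What is operant conditioning?"},
])

# Throwaway stopword list for the sketch; use a real one in practice.
STOPWORDS = {"when", "is", "the", "can", "you", "again", "what", "in"}

def question_topics(export_json: str) -> Counter:
    """Count content words across all student (user) messages."""
    messages = json.loads(export_json)
    words = []
    for m in messages:
        if m["role"] != "user":
            continue
        words += [w for w in re.findall(r"[a-z]+", m["content"].lower())
                  if w not in STOPWORDS]
    return Counter(words)

topics = question_topics(sample_export)
print(topics.most_common(2))  # repeated topics surface as struggle points
```

Even this crude word count shows "operant conditioning" coming up repeatedly, which is exactly the "what should I repeat in class" signal; in practice you would feed the same export to a language model for a richer summary, as Joseph suggests.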
Daniel Emmerson 12:41
Are there any other key things that schools and teachers might need to think about when it comes to responsible use of AI tools?
Joseph Lin 12:49
Responsible use, this is the very difficult thing, because in a typical world you would try something yourself, then you would look for literature and see how other people have used it, and then you would roll it out. But right now the literature just doesn't exist, right? So I have seen some students using AI who then become incredibly empowered by it: they are critical, they have standards, they are not just going to accept the AI-generated output. They will for sure iterate. And then there are some other students who of course cheat. But there are also some in between, which I did not expect, where there are some students who, now that they understand the basics of AI, they know what data it was trained on, they know all the information it has access to, become much less willing to listen to the teachers, because they think, well, I have the AI now, I don't need teachers anymore, right? So you just don't know. Therefore, in the short term, when I talk to schools, what makes sense seems to be: we are still in the early days, where it makes sense to keep restricting AI use for students as much as you can. But you start by giving every teacher access to AI. You've got to spend at least a year, perhaps, I think two semesters make sense, for the entire staff to play with it, to get familiar with it, to understand what it can and cannot do. And you really need that comfort level before you can think about how to roll it out to students. And the worst thing that can happen is you let the genie out of the bottle: the students have access, they know what it can do, and the curriculum, the assessment, has not caught up yet. And that is very difficult to reverse. And what does that look like? For example, at a primary school level, parents are very eager to use AI, because sometimes they have to write all sorts of things for their kids.
And then it's just a massive help in managing the communication between the school and the family. But it is a very small jump from the parents using AI to write a letter to the school, to the parents using the AI to perhaps touch up a student's homework before they submit it. Because of course the parents want their kid to do well and to get a higher grade. But that is the floodgate. There will be a time where it is 10pm, your kids haven't showered, it's really late, school starts at 7am, and it is really tempting to just use AI to do the homework for them. But the moment the kid realises that's possible, oh no, that will be trouble. So I think just keep it out of students' hands as much as you can, is what I would say to teachers, and make sure you know what it is about first before you think about how to use it in a classroom. And this goes counter to lots of people's discussions, because it's like, oh my god, we've got to use AI to address inequality, to address all the learning gaps. But the problem is we are now trying to address all these gaps within a world where the curricula, the adaptation, has not happened yet. We are still expecting students to sit down in a school hall, pen and paper, and spend three hours writing an essay. That is their ultimate assessment, right? We should be bound by the assessment, to make sure they succeed in the thing that they will actually be asked to do. Because for sure they will be smarter, for sure they will be more creative, for sure they will be more organised if we let them use AI in their school projects. But that could also cause enfeeblement. And that is something that we don't know whether it will help them or harm them yet. The literature isn't there.
Daniel Emmerson 16:49
That's something that's going to take a lot of time to understand and unpack, right? We need to monitor that over a number of years, right, in order to see what the impact will be.
Joseph Lin 17:00
Absolutely. But that is more reason to own the data, right? That's more reason to have the visibility. And I mean, the world of education is used to moving slowly, right? Usually people say it kind of negatively, but in this case I think it's actually okay. I mean, the education industry is huge. We are talking about humongous institutions with rules and regulations and norms that have been established over decades. And it is insane to think that we can rewrite it when ChatGPT just came out two, three years ago. There's just no way anybody can come up with a holistic plan in reaction to it. I mean, if we look at the legislation that is happening even on the UN side, what they are trying to do right now in 2024 is just to define AI, is to understand, to agree on a list of what the opportunities and risks are. And sometimes in high-stakes organisations, in large organisations, this is what you've got to do, and it's normal. And we will have to wait a year perhaps, but that's okay. I mean, students will still be there, teachers will still be there, schools will still be there, it's fine.
Daniel Emmerson 18:16
But at the same time, you've got all of these tech companies telling you that AI is the solution, and that in order to best prepare your students for the world of tomorrow, they need to have access to this technology. So certainly as a head of school or a senior leader at a school, you want to be trying to explore and experiment wherever possible. But you're saying, no, wait, slow down, have a think about the pedagogical implications of this as well as the risk factors, and then make slow and steady decisions. Is that right?
Joseph Lin 18:47
Yeah. I mean, I'll be the first to admit I have fully succumbed to the FOMO, right? I am desperate to try the newest and latest models every time there's any big release. But then once you apply it to a school context, there are all these last-mile problems that really deserve to be figured out first, for equity reasons, for logistics reasons, for money reasons, for pedagogical reasons, before you really try to roll it out. I mean, of course, an immediate first step of just paying for people's subscriptions, making sure people have the tool available to them, that seems entirely reasonable to me. And then you are collecting valuable data. I mean, even if you pay for a ChatGPT subscription for your whole school and then nobody uses it, that alone is useful information, right? It means you need more training, right? It means perhaps people are overburdened elsewhere such that they cannot use AI properly.
Daniel Emmerson 19:48
They're also perhaps a little bit used to it, you know, thinking about GPT in particular. Watching someone, for example, stand at the front of the lecture hall and give demonstrations of how GPT works, of text generation, doesn't quite cut it anymore in terms of grabbing people's attention and helping them understand fully what it can do for them. It's when you see something genuinely new and exciting and profound, which I think GPT is. But folks get used to it so, so quickly. We're always on the hunt, I think, for the shiniest and the fastest and most exciting form of this technology. And when we see it in action, it's hard not to think, oh my gosh, I need this in my school. I'm thinking, for example, of NotebookLM, which I tried for the first time very recently, right, amazing, and thinking about what something like that can do. It can create a lifelike-sounding podcast episode out of your notes, or your concerns, or your lack of understanding of a specific textbook, and make it engaging and insightful. You know, what teacher isn't going to want to work with that and find ways to re-engage their students? It's pretty mind-blowing.
Joseph Lin 21:17
But isn't that exactly the point? You ask what teacher wouldn't want it. But the fact is that there will be many teachers who think: I've been teaching for a long time, I'm doing quite well, my students are happy, they're handing in their homework, they're going to exams, I don't need to bother with all this stuff. It is exactly because a lot of people like us who have been working in the edtech space are within a little tech echo chamber ourselves, where people actually keep up. I mean, NotebookLM, as you just mentioned, it's not even in beta yet. It is experimental, right? And the number of people who are actually using it, leveraging it, thinking about how to use it best, I would say is still a very small percentage of the total population. And you can think of it as a fairness question, or you can think of it as a quality-control question, where, for example, Teacher A is very interested in AI and they use AI to adapt learning materials, to personalise learning content, to create short quizzes, to create more detailed feedback, and do all the things that they are supposed to be doing. And then Teacher B, one class over, they think they're doing okay, right? Or for whatever reason they haven't caught up with the technology yet. And also, very fairly, they think: this is a work tool, right? I should not be using my personal funds to pay for a work tool. When the school wants me to use it, they will pay for it, and that's when I'm going to dive into it. And so class A and class B, through no fault of their own, will receive dramatically different learning experiences, simply because one teacher thought, you know what, this is fun, I'll play with it. So I don't know what people will think about this. I am of the mind that even if, out of five classes, only one class can get the uplift, I will always take the uplift. Right? Because why should one class not be able to benefit, just to drag them down to all the other classes' level?
But then perhaps in another context people would want consistency, because that is a fairness question. So I really don't think it is that urgent to dive into it. Within schools, we should now think of this as the fact-finding stage. And this fact-finding stage won't be complete until perhaps at least half the school's teachers have some experience with using AI in their daily lives, and in the things that they really make on a day-to-day basis. After that we can think about implementation.
Daniel Emmerson 24:01
Taking time for making decisions around what you might use and how you're going to use it seems to be your recommended approach. Teachers need to spend a long time thinking about what they're going to implement, how they might implement it and what the impact is going to be on their staff population and on their students moving forward. Have you got any good examples of a school that has implemented a good policy around use of artificial intelligence tools either in primary or in secondary?
Joseph Lin 24:33
I can't name names, but I can share what I've talked about with other teachers.
Daniel Emmerson 24:37
Sure.
Joseph Lin 24:38
I'll start with a perfect use case. A perfect use case is in a secondary school. These are sixth-form school leavers. They are razor-focused on the exams they know they have to deliver on, and everyone is self-motivated because of all the academic pressure. And this is a good school, there's a good learning atmosphere, so the teacher feels comfortable allowing phone use inside the classroom. So what happens is everybody has their phone open to the AI app of choice, and every time the teacher talks about a new concept and they've forgotten the definition, they would just quietly Google it, sorry, quietly AI it, ask the large language model. And it'll catch them up. And it has basically solved the issue where the student misses one concept in the chain of thought and then fails to follow everything else. So it's very simple, but it reduces classroom disruption, it enhances recall, and not to mention there's the whole chat history there as well. And I find that a very good use case of fitting into the existing curriculum with very little adaptation needed. And then on the other hand, in primary schools, I have seen more problems, where especially with current-generation primary students, they are more used to iPads than typing on keyboards. So they are using one-finger typing, which is very slow, and sometimes their first language is not English, and they cannot type in their native language, e.g. Chinese. Chinese typing is not easy for a lot of people. So that really restricts their ability to use AI. And the way to get around that is you need to let them use voice input to talk to the AI. But then you can't have 30 people talking to the AI at the same time. And so that's one case. And not to mention these are primary students, you probably don't want them talking to the large language model all the time anyway.
So in primary school, basically don't let them use the unrestricted large language model. You can give them a well-curtailed, safeguarded, ring-fenced model. This is an experiment I did with the Hong Kong Academy for Gifted Education, where we give them normal worksheets, right? They plan their whole creative-writing short story and then they put it into the AI. The AI will develop it, and the students also provide their writing sample to ask the AI to continue writing in their language. And therefore you end up with a short story that is in their own style and tone, roughly based on their own desired plot. And then afterwards you take that short story, you create little storyboards, and then you AI-generate pictures based on that. And there you have a graphic novel, right? And on top of that, if you want to be really fancy, then you can also put the text into a text-to-speech engine and suddenly you have a talking book that's also a graphic novel. And all of this is very accessible technology that I think a school, a teacher, can get away with. Maybe you will need a slide-making subscription. I use Canva. And then you need a large language model. We used our own platform. And then you will need a text-to-speech platform. We used ElevenLabs, which works very well, because the fun thing you can do with it is, instead of text to speech, you can do speech to speech. A major complaint with text-to-speech technology in language instruction is, if students can just paste in the text and they get the pronunciation, then the students aren't speaking, they are not practising. So if you use speech-to-speech models, then you can force the students: well, you still have to say it out loud. You have to say it with the intonations you want, with the stresses and pauses and everything else that fit what you want this character to say. And then it will change the voice for you, in different accents and pronunciations if you want.
So these are the little tweaks. Just the decision between text to speech and speech to speech makes a huge difference in whether you are actually achieving your learning outcomes, and whether students still see themselves in the end context, in the end product. And then, okay, so I talked about secondary.
Sorry, one last one. I'll share a university example. This one is a mixed bag. So a university in Hong Kong, they have this, honestly, I think brilliantly designed course. It's exactly the kind of course that we need more of. It's all about collaboration, it's all about problem solving, it's all about being ambitious. And it is students from different departments, as diverse as possible. You are put into a group and then you have to solve a real-world problem. And in this case, their imaginary client was the Hospital Authority in Hong Kong, and they had to figure out a way to make things better. And they were given ChatGPT to brainstorm. And so they came up with a whole range of very exciting and diverse ideas. And it was going to be great. But then you go to the final class, and nine groups out of ten ended up presenting telemedicine. So like, why, right? What happened to all the other stuff? Why just assume a doctor in the end? And so of course the professor asked, and what the students said was, well, you know, we asked AI, AI says this is the better idea if we want something, you know, safer, more proven, to get a better score. And then AI said that, so they just blindly followed the AI. What I believe was happening was the students don't have the confidence to override whatever the AI recommends. Because if you are a typical student, you think, you know, I do okay, I'm not the best. And I have all these things going on. This is a new topic. I don't know much, I haven't read the papers yet. And then suddenly you have this. This is GPT-4. It's trained on the entire internet's data. It's OpenAI. It's this and that. It's, you know, all these amazing things. Surely this is smarter than me. And so they go with it. And then you end up with this homogeneity, which is a problem. And so what I took from that real-world story was: you have to very explicitly force them to question the AI.
If not, the most rudimentary way I can think of is you make them write a reflection piece. You force them to provide documentation: all right, AI gave me 10 ideas, and this is the process through which we narrowed it down to this final one. And it doesn't even have to be complicated, right? If you want to do this again in a very AI-native way, you would ask them to have a meeting and talk about this selection process. And then you just record the whole thing, transcribe it with AI, analyse it with AI, and then at least you see that they thought about it, instead of, you know, AI said this one would get me a better score, therefore we went with telemedicine.
Daniel Emmerson 32:13
And I mean, there's an argument there for, yeah, debate training and things like that as well, providing a counter to everything that you receive. I'm also thinking, Joseph, a little bit around creativity. There's a lot of conversation about how to be creative with AI, and that's the subject I just want to wrap us up with today. When it comes to inputting a prompt and receiving a response, that tends to be the typical way that students, certainly, are engaging with this technology. How might you encourage teachers to think about focusing on creativity when it comes to working with these tools and this technology?
Joseph Lin 32:56
I mean, there are the theoretical answers, if you want to, you know, get the best of everything, and then there are the practical considerations, if you actually want to be able to wrap up your class in time. In a perfect world, right, you would guide them to use the AI to do a tree of thought, to do a chain of thought, where the AI would propose different evaluation criteria to help come up with new ideas, to evaluate existing ideas. And perhaps you would ask the students to first, no AI, pen and paper, write down your ideas first, and then move to the AI. And there are all these different things you can be doing. But practically, I think inside a classroom, 40 minutes, an hour, the only thing a teacher can really do is force the kids to write it down with pen and paper first and at least start with something. And then you take a picture, the AI will transcribe it, and it will give you feedback from there. That's one way to deal with it, especially if you're using large language models. Another way, if the teacher is a little bit techier, is you use a custom GPT that you have instructed: you are a brainstorming assistant, where someone gives you an idea, someone gives you a topic, and all you do is throw out 10 variations on the same topic and then provide guiding questions. And that could be good, assuming the teacher knows how to do that. And I think perhaps another way to preserve creativity, or inspire creativity, while working with AI is to focus more on image generation, to use diffusion models instead of large language models. Because when you do language generation, the temptation is often: oh my god, it's a thousand words, I'm not going to read all that. The good students will read it; most of them would maybe touch up the first few paragraphs, and then that's it.
But when it comes to an image, they tend to be much pickier, because if it is their own story and this image represents them, this is going to be their own book cover, for example, it's got their name on it, then the incentive is much greater to be picky about it, to really put what they want into it. And with image generation, you can also do image-to-image generation, which means you ask them to sketch something on a piece of paper first, and then you take a picture of that and upload it, so whatever image is generated is based on their sketch. So making sure students start first, that's one way. And the other way is perhaps to program the AI to act as a brainstormer and nothing else.
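The "brainstormer and nothing else" idea boils down to a constrained system prompt. Here is a minimal sketch of what that instruction and request payload could look like; the prompt wording and the `build_request` helper are illustrative, not a tested recipe, and in practice you would paste the prompt into a custom GPT's instructions or send it as the system message via an API.

```python
# Sketch of constraining the AI to brainstorm only: it may offer
# variations and guiding questions, but never finish the work.
BRAINSTORMER_PROMPT = """You are a brainstorming assistant for students.
When the student gives you a topic or an idea:
- Offer exactly 10 short variations on it.
- Follow with 3 guiding questions to help them choose.
- Never pick a winner, rank the options, or write prose for them."""

def build_request(topic: str) -> list[dict]:
    """Assemble chat messages for a brainstorm-only session."""
    return [
        {"role": "system", "content": BRAINSTORMER_PROMPT},
        {"role": "user", "content": f"Topic: {topic}"},
    ]

request = build_request("a short story about a lighthouse keeper")
print(request[0]["role"], "->", request[1]["content"])
```

The design choice mirrors Joseph's telemedicine lesson: the system message forbids the model from picking a winner, so the selection step, the part where students must exercise judgement, stays with the students.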
Daniel Emmerson 35:49
So much there, Joseph, for school leaders, teachers, and indeed students to take away from this conversation, thank you so very much for being a part of this podcast series and sharing your experience and your knowledge with our audience. As always, it's wonderful speaking with you, Joseph. Thanks ever so much.
Joseph Lin 36:10
Thank you so much. This is so exciting.
Voice Over 36:13
That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.