AI in Legal Education and Justice

December 16, 2024

Daniel Emmerson  0:01  

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson, and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world. This podcast series sets out to explore the trials and tribulations of building a nonprofit from the ground up, while also investigating the changing world of technology in life, learning and work.

Today, we are thrilled to welcome Professor Anselmo Reyes to the podcast. Professor Reyes is an arbitrator with a wealth of experience in international dispute resolution. He was a Professor of Legal Practice at the University of Hong Kong from 2012 to 2018 and served as a judge of the Hong Kong High Court from 2003 to 2012. Currently, he also acts as an international judge at the Singapore International Commercial Court, a position he has held since 2015. Professor Reyes is the author of several books on international arbitration and dispute resolution. In terms of education, he holds an AB in economics from Harvard as well as a BA, LLM and PhD in law from Cambridge University. His unique blend of academic and practical experience in law will surely make for an insightful conversation today. Professor Reyes, welcome to Foundational Impact.

So first of all, thank you again for your time and your involvement, Professor; it's wonderful to have you here. My first question is about best practice when it comes to artificial intelligence tools in the legal sector: how are people in the legal sector currently using these tools, and what might educators learn from the best practice that's out there?

Professor Reyes 2:15  

I think that's a difficult question to answer, because we're, in a way, just finding out what AI can do and how it can be used to assist legal development. A number of judiciaries have put out guidelines; I think, for instance, of the New Zealand judiciary and the judiciary here in Hong Kong, which has recently put out guidelines on the use of ChatGPT by lawyers. But in terms of legal education, that is, how law students should use AI, especially ChatGPT, which is so popular now, I'm not sure what sort of guidelines we can really usefully give at this time, because things are developing so quickly. So what I would simply suggest by way of guidelines is this: first, a law student using AI should say that he or she is using AI; and second, the student should check whatever the AI throws out and not simply cut and paste it into whatever the student has to produce by way of a legal essay or an answer to a particular problem posed at school.

Daniel Emmerson  3:29  

Do you think there's scope for artificial intelligence tools to improve how we communicate with one another, especially in jobs that require a lot of personal interaction or, indeed, formal procedures?

Professor Reyes  3:44  

Definitely. But let me look at it a little more deeply. In one sense, I don't think we have a choice, whether or not it helps us to communicate more deeply. We recently had a judicial conference where it was pointed out that much of the world doesn't actually have access to legal services, and it's not a choice between not using AI and using AI. If you use AI, at least there is a chance of delivering legal services to outlying communities, communities that haven't really got access to a judge or to a court, where there is no easy access or it's too expensive. Using AI will lower the cost of delivering judicial and other legal services, and will at least make access to justice a reality for a large proportion of the world's population today. So we don't really have much choice. People say, oh, we must be careful about using AI, because it can do all sorts of things. Yes, that's true, and we must take precautions. But from the point of view of access to justice, we don't really have a choice apart from engaging, and that means that our students have to familiarise themselves with AI, whatever problems there may be in using AI as a legal instrument.

Daniel Emmerson  5:13  

Are there any good examples, Professor, at this point of how an AI tool might be able to provide or support that level of justice in the communities you're speaking about? I'm thinking about this in terms of the ethical best practice of AI tools and what they're capable of at this point.

Professor Reyes 5:33  

For many people, what they're looking for is some sort of decision, so they can get on with their lives, rather than having a legal question or a problem hanging over them. What AI suggests might be possible is, let's call it a machine: you plug in certain information, the facts of a case, and it will relatively quickly and relatively inexpensively, indeed significantly less expensively, give some sort of answer to a dispute one way or the other. Now, for many people, that may be enough. Others may want some human intervention down the road, some appeal, but that will cost. The challenge is to design AI decision-makers that will give reasonably fair decisions. We hear all about biases being built into AI; it all depends upon the big data put in. All right, we're aware of the possibility of bias. But often a particular type of bias doesn't arise, or doesn't become evident, until after the fact: we discover that there is a trend towards the AI coming up with certain decisions all the time, and then, looking at it, we realise that the data used to train the AI is skewed in a particular way. So the challenge is to come up with some sort of AI that will at least be trustworthy for ordinary people who, in the absence of AI, would just not be able to afford court, would just not be able to afford legal services.

Daniel Emmerson  7:23  

Do you think, at this point, that the level of trust people might have in bureaucratic systems and governance can be transferred, or outsourced, to a machine in this way?

Professor Reyes 7:40  

Unfortunately, we're not at that point yet, and I think that's the point we should be working towards. Let's take an example. Singapore, I think, has recently been starting to create a database of its Small Claims Tribunal decisions, with a view towards creating some sort of AI that will be able to give decisions, or assist in coming to decisions, in the sort of claims that are not necessarily big from an overall perspective but, for the ordinary citizen, would be a major aspect of one's life. Now, in the small claims tribunals in Singapore and Hong Kong, you're not permitted to have a lawyer. You go to the court and the court will assist you; it's more an inquisitorial process in coming to some resolution of the parties' dispute. So the idea is this: if you plug in that whole database of cases from the Small Claims Tribunal and the decisions that have been given, the outcomes in those cases, let's say you have 10,000 cases, will the resulting AI be able to determine like cases in particular ways, in ways that are in accordance with the law and in ways that are fair and reasonable? That sounds good, and it sounds like a reasonable way of bringing access to justice to those who particularly need it and cannot afford it. The problem is that, because lawyers are not present in the small claims process, the information that you plug in includes every single argument. The litigants throw in all sorts of arguments: emotional arguments, legal arguments, whatever they think will help. How is the AI going to sort through such big data to come up with something that resembles the law? In other words, how do you filter out the non-relevant elements so that the AI behaves something more like a human judge, or at least engenders some degree of trust? If you just plug in all the raw big data that we have in small claims, who knows what patterns the AI will discern, and who knows what the AI will come up with by way of a decision when you plug in facts analogous to something that has been decided before. There are a lot of problems that we have to work out in how we plug in the data and how we filter the data, so that irrelevant matters don't train the AI in the wrong direction.
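
To make the filtering problem Professor Reyes describes concrete, here is a minimal, purely illustrative Python sketch: hypothetical tribunal records mixing legal and emotional arguments, a crude keyword-based relevance filter applied before training, and a simple outcome classifier. The data, the keyword list and the model choice are all assumptions made for illustration, not a description of any real system such as Singapore's.

```python
# Illustrative sketch only: a toy version of the small-claims pipeline
# discussed above. The records, the keyword-based relevance filter and
# the classifier are hypothetical assumptions, not a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical case records: (litigant submissions, recorded outcome).
cases = [
    ("The goods were delivered late and were damaged on arrival.", "claimant"),
    ("He has always been rude to me and I feel disrespected.", "respondent"),
    ("The contract required payment within 30 days; none was made.", "claimant"),
    ("I am a good person and my family depends on me.", "respondent"),
]

# Crude relevance filter: keep only sentences touching legal concepts.
# Deciding what counts as "relevant" is exactly the open problem
# Professor Reyes identifies; a keyword list is far too blunt in practice.
LEGAL_TERMS = {"contract", "payment", "delivered", "goods", "agreement"}

def keep_relevant(text: str) -> str:
    sentences = text.split(". ")
    kept = [s for s in sentences if any(t in s.lower() for t in LEGAL_TERMS)]
    return ". ".join(kept)

texts = [keep_relevant(submission) for submission, _ in cases]
labels = [outcome for _, outcome in cases]

# Simple bag-of-words classifier over the filtered submissions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict the outcome of a new dispute analogous to earlier ones.
print(model.predict([keep_relevant("No payment was made under the contract.")]))
```

The point of the sketch is the failure mode, not the model: whatever the filter discards or keeps silently shapes the patterns the classifier learns, which is why unfiltered raw submissions can train such a system in the wrong direction.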

Daniel Emmerson  10:28  

But there's a lot of emphasis, I think, being placed on human involvement, particularly in high-profile cases and high-profile decisions. If we're looking at cases of restorative justice, for instance, there's a feeling, right, that these are being considered and carefully thought through from an emotional perspective, as well as by looking at hard data, facts and statistics. Is that something that you think people will be able to buy into, even if they don't have access to a robust justice system in their community?

Professor Reyes  11:06  

I think at the moment the perception is that you need the human touch. There are certain things, certain types of decision, where it perhaps cannot be countenanced at the moment that machines will decide: let's say family law cases, cases having to do with custody and so on. So I don't think we can introduce AI right away across the board in all sorts of cases. What we can try to do is use AI in those cases that are perhaps more Euclidean, more geometric or more axiomatic in the way that they are decided. I'm thinking here of run-of-the-mill commercial cases, run-of-the-mill sale of goods cases, run-of-the-mill contract cases: cases which do not so much involve the personal element of a family or of a child. Cases between, let us say, two businesses may be more susceptible, at least at first instance, to having AI adjudicators, shall we call them that, with the possibility of appeal to a human panel, although once you get to that level you may have to start paying; it may become more costly. But if you start at a basic level, in what I've been calling Euclidean or axiomatic areas of law, then there might, and there are some caveats, be a possibility of human beings becoming used to a machine at first instance, of being able to accept something that is not quite the human touch, at least at first instance. I said that there are some caveats. Very often in real life the law is not in dispute: everyone knows what the law is, and everyone accepts it. What is in dispute are matters of fact. Let's give a very typical example. Even in a commercial or basic contract case, someone says we agreed orally that this would be the case, and the other side says no, we did not agree. One says black, the other says white. Well, you can pump in all the big data that you want, but how is the AI going to tell whether to go with the one who says there is an oral agreement or the one who says there is no oral agreement, we never agreed over the telephone? That sort of case may not be so susceptible. The sort of case that may be susceptible is one where the dispute is over the law. But the reality is that at all levels of society, whether in the big, prominent cases or the ordinary cases that one faces in daily life, very often it's a question of one person's word against another's.

Daniel Emmerson  13:56  

Say we're in a situation where that has become the norm, and AI tools are able to make those axiomatic decisions. Would you consider it a risk, for example, for governments to be using this technology in other areas of bureaucratic decision-making, when it comes to benefit claims, say, or housing allocation, or other decisions that have a real-world impact on people?

Professor Reyes  14:26  

I think there may be, there are risks involved, and again, what we have to do is try to become aware of those risks and of how to deal with them or mitigate them. One is always worried about... well, forget about AI for a moment; think about these forms that you have to fill out. I often fill out forms for all sorts of applications, and I frequently find that my case is not covered by the form, so I have great difficulty filling in the blanks and ticking the boxes to describe my case. It's all supposed to help me get whatever benefit or relief I'm looking for, but I just can't fit into the blanks or the boxes, because my case doesn't quite fit so easily. That, I think, is the risk with a lot of AI being used in a bureaucratic manner, the tyranny of AI: you've got to fill in the blanks on the form, or answer the questions that the AI puts to you, in a particular way, within a particular template. And for many of us, our situations may not fit the template. How exactly to deal with that? That, I think, is the problem.

Daniel Emmerson  15:46  

Moving on to ways that this technology might pose additional risks or, indeed, bring benefits to the ways that countries work with each other on legal issues: what are your thoughts in this space? Do you see the risks outweighing the potential benefits, or is there a possibility that it can improve interstate collaboration?

Professor Reyes  16:13  

There certainly is a possibility for improving things, but there are risks. And the challenge, as always, is not so much how to balance the risks but how to mitigate the ill effects, to minimise the ill effects, such that, as we said before, everyone, nations, governments, has confidence in using the AI, the machine, to achieve particular objectives. That's, I think, the problem. Now, some people may say that doing what I've been suggesting may be dangerous, because it assumes a sort of universality of values, a universality of cultures, a universality of norms, which may not be the case, so that the more data you plug in on an international basis, the more, how do you say, insensitive the AI becomes to the cultural values and the particular norms of a given community. Again, no one says that this exercise is easy. You have to decide what the trade-off will be. I think at the end of the day it's a matter of trade-offs. There are definitely advantages, and there are risks, and the question is where society is going to pitch itself: are we prepared to accept this trade-off?

Daniel Emmerson  17:39  

And who in that scenario might be inputting the data?

Professor Reyes  17:43  

Well, I think in a capitalist society, different service providers would be inputting the data and creating their AIs, and they will be extolling the virtues of what they create. And then the market, presumably, is supposed to decide. The market will decide: this is good for international purposes; this is good for regional purposes. There may be more than one AI, more than one AI machine, for particular purposes or particular objectives. At the end of the day, at least in a capitalist society, it will be the market that decides. Even if you have, say, a socialist or more autocratic government, I'm not sure exactly how the autocratic government would decide, unless it simply says, well, this is more conducive to our norms as we see them, autocratically, and so we will use this. But probably the ideal, that which will lead to constant development, refinement and mitigation of the risks, would be some sort of market that will then choose the best AI in particular situations.

Daniel Emmerson  18:53  

When it comes to best practice, once again thinking about ways that university students and students at school might be exposed to this technology: what do you think the best practices are around that, taking into account all of the risk mitigation that we've talked through so far?

Professor Reyes 19:14  

Well, at the moment, I think, one, there's no point in forbidding or prohibiting students from using it. Initially, for instance, the University of Hong Kong had a policy of prohibiting law students from using ChatGPT. I don't think that's such a great idea, because inevitably students will be using it. Lawyers will be using it; they're using it now. Judges will be using it, arbitrators will be using it. So we have to learn how to use it and come to terms with it. The second is, I think, to be aware of the biases that may be implicit; that is, there is the possibility of bias. What that means is that we've got to check the result. Constantly ask ourselves: does the result make sense? Does the result display a certain sophistication, so that it actually grapples with the issue? I will explain what I mean by that in a moment. There is a tendency, if one is lazy, to just accept what the machine says, because it's said by the machine, and the machine can't do wrong. We must get over that, and we must be critical and critically examine the output. Now, what do I mean by critically examine? What do I mean by checking? Well, by way of experiment, I once asked ChatGPT to write, not an entire speech, but to identify the topics I could speak on regarding the use of AI in the law. And it generated a list, an interesting list, but when you look at it more closely, it's very superficial, not really very deep, not really very reflective. It's just a mishmash of things that you can find, let's say, on the web, or in whatever big data has been plugged into it. So if the law student were to be satisfied with that, you're not going to get a very good grade; you're not going to get very far in law school. Similarly, if a lawyer is going to be satisfied with that as an answer to a legal problem, I'm not sure that the lawyer is going to be doing his or her job that well. So one must always be critical about the output put out by the AI. One always needs to ask: is that good enough? Has that said anything that I don't already know? We've done some experiments. My assistant and I had run a workshop on cross-examination, so we plugged the entire problem into the AI and asked the AI to generate a cross-examination, as a guide to students taking the workshop: here's what the AI suggests. Well, the cross-examination was not bad, but you need to prompt it several times in order to get it to refine it. And what I found is that it takes your prompts literally. It comes out first with a few questions. Fine, but they're not in an acceptable form for a lawyer or a law student to put in a mock cross-examination. So you prompt it: please put everything in tag-question form. And it does that literally: every question becomes a tag question. If the student is going to deliver this type of cross-examination, there's no room for advocacy, there's no room for variety. Everything becomes a sort of monotone: that's the case, is it not? You did this, did you not? You're not really training the student; you're just getting the student to tick off questions by way of cross-examination. You also have to look more deeply into the questions themselves, because the AI has no idea what cross-examination is or what its purpose is. It just does what it's programmed or asked to do.
And some of the questions, a lot of the questions that were being put forward in the cross-examination, were actually questions that favoured the side whose witnesses in the workshop problem you were cross-examining. So it would be the reverse of what you're supposed to do, but you won't notice if you just blindly accept what the AI pumps out; that's not going to help. Now, we did the same problem from the perspective of mediation. We asked the AI: for our mediation workshop, please generate, to guide the students, some mediation initiatives, some possibilities that a mediator might use to try to settle, to resolve, a dispute between two parties. And the AI promptly, within a few seconds, generated some lines of inquiry for the mediator. They were all rather bland, all superficial, something that anyone could have written on the back of an envelope within five minutes. So not particularly helpful. So that's what I mean. We mustn't be blinded by science, and we must critically approach whatever the AI is putting out. The AI can be of help now, but more needs to be done, and certainly with what we have now, we have to be very critical about what is put out. We have to examine it; we may refine it. But I think we're not yet at the stage where we can do away with the human being completely. And that's where our law students, I think, would find it beneficial to use AI as an aid to their work, but also to generate thinking on how it might be improved for the future.
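
As a sketch of the iterative prompting and critical checking Professor Reyes describes, the hypothetical snippet below uses the OpenAI Python client. The model name, prompts and workshop facts are illustrative assumptions, not the professor's actual experiment; the point is the loop of generating, inspecting and refining, and the literalism he observed.

```python
# Illustrative only: iteratively prompting a chat model to draft
# cross-examination questions, then refining the draft. Model name,
# prompts and facts are hypothetical; the critical review is manual.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [
    {"role": "user",
     "content": "Draft five cross-examination questions for the seller's "
                "witness in a dispute over an alleged oral agreement."},
]

for instruction in [
    None,  # first pass: take whatever the model produces
    "Rewrite the questions as leading questions in tag-question form.",
]:
    if instruction:
        history.append({"role": "user", "content": instruction})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    draft = reply.choices[0].message.content
    history.append({"role": "assistant", "content": draft})
    # The model follows the instruction literally: after the second pass,
    # every question tends to end "..., did you not?", uniform and flat.
    # A human advocate must still check each question for form, variety
    # and, crucially, whether it helps or harms the cross-examiner's case.
    print(draft, "\n---")
```

The design point mirrors the anecdote: the refinement loop improves surface form quickly, but nothing in it checks purpose, so the human review step cannot be automated away.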

Daniel Emmerson  25:08  

Just one follow-up question to that, if I may, Professor. The instinct of many educational institutions around the world has been to ban before exploring, I think just because of the perceived threats that artificial intelligence tools pose to traditional forms of education. What would you say to an institution, a school or a university, that currently has a ban policy in place but is looking to perhaps come out of that or reconsider?

Professor Reyes 25:43  

Well, I think the University of Hong Kong's law faculty has reconsidered, and now it's in play: you can use ChatGPT, but you have to make a digital declaration to that effect. For those who think that a ban would be effective, I think you have to think of King Canute trying to stop the waves. You just can't do it. You just can't command the waves to stop. The waves won't stop; the waves will keep coming.

Daniel Emmerson  26:08  

Professor, thank you so very much indeed for your time today; it's been wonderful speaking to you about this. We really appreciate having you on Foundational Impact.

Professor Reyes 26:16

Thank you very much for inviting me.

Voiceover 26:19  

That's it for this episode. Don't forget the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here, and we'll see you next time.

About this Episode

AI in Legal Education and Justice

Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI in making legal services more accessible to underserved communities. He notes that while AI works well for standardised legal matters, it faces limitations in areas requiring emotional intelligence or complex human judgment. Prof Reyes advocates for teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.

Prof Anselmo Reyes

International Arbitrator and Legal Expert

Daniel Emmerson

Executive Director, Good Future Foundation

Related Episodes

January 6, 2025

Navigating AI in Education: Insights from Richard Culatta

Richard Culatta, former Government advisor, speaks about flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education. Using aviation as an illustration, he highlights the most critical tech skills that teachers need today. The CEO of ISTE and ASCD draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.
December 2, 2024

AI's Role: From Classrooms to Operating Rooms

Healthcare and technology leader Esen Tümer discusses how AI and emerging trends in technology are transforming medical settings and doctor-patient interactions. She encourages teachers not to shy away from technology, but rather understand how it’s reshaping society and prepare their students for this tech-enabled future.
November 19, 2024

AI Integration Journey of a UK Academy Trust

A forward-thinking educational trust shows what's possible when AI meets strategic implementation. From personalised learning platforms to innovative administrative solutions, Julie Carson, Director of Education at Woodland Academy Trust, reveals how they're enhancing teaching and learning across five primary schools through technology and AI to serve both classroom and operational needs.
November 4, 2024

AI Use Cases in Hong Kong Classrooms

In this conversation, Joseph Lin, an education technology consultant, discusses how some Hong Kong schools are exploring artificial intelligence and their implementation challenges. He emphasises the importance of data ownership, responsible use of AI, and the need for schools to adapt slowly to these technologies. Joseph also shares some successful AI implementation cases and how some of the AI tools may enhance creative learning experiences.
October 21, 2024

Tech, Education, and Sustainability: Rethinking Charitable Approaches

In our latest episode, we speak with Sarah Brook, Founder and CEO of the Sparkle Foundation, currently supporting 20,000 lives in Malawi. Sarah shares how education is evolving in Malawi and the role AI plays for young people and international NGOs. She also provides a candid look at the challenges facing the charity sector, drawing from her daily work at Sparkle.
October 7, 2024

Assurance and Oversight in the Age of AI

Join Rohan Light, Principal Analyst of Data Governance at Health New Zealand, as he discusses the critical need for accountability, transparency, and clear explanations of system behaviour. Discover the government's role in regulation, and the crucial importance of strong data privacy practices.
September 23, 2024

Leading Schools in an AI-Infused World

With the rapid pace of technological change, Yom Fox, the high school principal at Georgetown Day School, shares her insights on the importance of creating collaborative spaces where students and faculty learn together, and on teaching digital citizenship.
September 5, 2024

NAIS Perspectives on AI and Professional Development

Join Debra Wilson, President of the National Association of Independent Schools (NAIS), as she shares her insights on taking an incremental approach to exploring AI. Discover how to find the best solutions for your school, ensure responsible adoption at every stage, and learn about the ways AI can help tackle teacher burnout.
April 18, 2024

The Keys to a Successful Nonprofit and Preparing Students for AI and New Technologies

We discuss the importance of preparing students for AI and new technologies, the role of the Good Future Foundation in bridging the gap between technology and education, and the potential impact of AI on the future of work.