
Humanism Now
Welcome to Humanism Now, the weekly podcast from Humanise Live. Tune in for the latest news, insightful worldwide guest interviews, and lively discussions on the most pressing questions of our time, all from the naturalistic, empathetic and rationalist worldview that marks out humanism. Join us as we explore ethical dilemmas, dissect current events, and engage in thoughtful conversations that matter.
28. Susie Alegre on the Algorithmic Assault on Human Rights: How AI Threatens Our Core Freedoms
AI technologies pose significant threats to fundamental human rights, reinforcing historical biases and power imbalances. This week, we are joined by Susie Alegre, international human rights lawyer and author, to explore the impact of generative AI on gender and racial equality, labour markets, and information ecosystems.
Susie has worked for international NGOs like Amnesty International and organisations including the UN, the EU and the Council of Europe.
Susie has published two books covering the critical issue of technology's impact on human rights: "Freedom to Think" (2022), a Financial Times Technology Book of the Year 2022 and shortlisted for the Royal Society of Literature Christopher Bland Prize 2023, and "Human Rights, Robot Wrongs: Being Human in the Age of AI", published in 2024.
The episode covers:
- How AI systems, like ChatGPT, perpetuate gender and racial biases
- The "Pygmalion" pattern in AI design
- Potential long-term effects on skills, education and social interactions
- The rise of "ultra-processed information" and its consequences for the internet
- Legal risks and the role of effective regulation
- Enforcement in addressing AI's human rights risks
- When AI applications may be valuable—and when they are not
📅 To learn more about Susie Alegre’s work, visit:
🔗 susiealegre.com
🔗 alegre.ai
🔗 CIGI Profile
📩 info@susiealegre.com
📷 @susiealegre
💼 LinkedIn: Susie Alegre
Support us on Patreon
Advertising opportunities
Click here to submit questions, nominate guests & topics.
Follow Humanism Now @HumanismNowPod
Humanism Now is produced by Humanise Live
Contact us at hello@humanise.live
Susie Alegre is an international human rights lawyer, author and thought leader at the intersection of technology and human rights.
James Hodgson:A patron of Humanists UK, Susie was one of the keynote speakers at the 2023 Humanists UK convention, where she delivered a highly thought-provoking talk drawn from her first book, Freedom to Think.
James Hodgson:Freedom to Think was named a Financial Times Technology Book of the Year and explores the history and the ways in which technology has encroached on our most essential human right, the freedom of thought, while raising important questions about how these dynamics could evolve in the future. In her latest book, Human Rights, Robot Wrongs, Susie focuses specifically on the technologies driving the current boom in artificial intelligence and examines how these powerful advancements are both shaping our world and posing significant threats to fundamental human rights, including the rights to life, liberty and free expression, and she argues for an urgent need to protect human rights in this digital age. With over 20 years' experience working with international NGOs such as Amnesty International and organisations like the UN, the EU and the Council of Europe, Susie is a leading voice on these critical issues. She's also the founder of the Island Rights Initiative and a senior fellow at the Centre for International Governance Innovation. Susie Alegre, thank you so much for joining us on Humanism Now.
Susie Alegre:My pleasure. Thanks for inviting me.
James Hodgson:Our pleasure to have you. As mentioned, we first met at the Humanists UK convention last year, and it's been a pleasure to get to know you and also to read both Freedom to Think and Human Rights, Robot Wrongs. I can highly recommend both books. To start with the themes of Human Rights, Robot Wrongs: one of the core themes present throughout the book is how AI reflects an ongoing historical gender bias in technology. I wonder if you could start by outlining how you uncovered that, including the anecdote about what happened when you asked ChatGPT who wrote your first book, Freedom to Think.
Susie Alegre:Yeah, absolutely. Back in early 2023, which seems like eons ago, ChatGPT was a bit of a novelty and everybody was trying it out, certainly in my kind of sphere, and one of the classics was that people were using ChatGPT to write their biography. So, as part of my research for this book, I went in and asked, who is Susie Alegre? To which the answer was essentially: she's a nobody and we can't find anything on her; she must be a private individual. So I then went back and said, okay, so who wrote Freedom to Think, which was my first book? And I think the first option it gave me was an Australian biologist, a man, clearly. So I looked him up to check if he'd written a book of the same name, which he hadn't, but plausibly he might have been the kind of person who might have written a book called Freedom to Think. I went back and said, are you sure? I don't think he wrote Freedom to Think. And it then said, oh no, I'm terribly sorry, my mistake.
Susie Alegre:And then it gave me another option and another option, and this sort of went on.
Susie Alegre:It gave me over 20 names, of which only one was a woman, and it was a woman whose first name and surname could both plausibly have been men's names. So essentially, from a probability perspective, she was like a man squared, looked at as the likely person to have written something profound about thought.
Susie Alegre:So it was very clear that, for ChatGPT, the likelihood that a woman, and in particular me, might have written something thoughtful, interesting, intellectual was almost zero. And that really alerted me to the fact that women are essentially being written out of their own stories with this automated, generative approach to information, which has nothing really to do with the truth.
Susie Alegre:It's just what's likely, and when it comes to women and the historical biases against women, what's likely is often not a woman. You'll see it if you've seen any of the research looking at image generators. If you ask them to produce an image of a pilot or an engineer or a scientist, and then you ask them to produce images of a carer, a nurse, a nursery school teacher, you will get absolutely classic stereotypical images, both from a gender perspective, but often also from a racial and ethnic background perspective. So what you can really see is that generative AI is not really generating anything more than established bias and stereotypes. As a society, particularly in a country like the UK where we have a Human Rights Act and an Equality Act, we're trying to move towards better reflecting equality and diversity, and generative AI is really a step back. It takes us straight back to the 1950s.
James Hodgson:Yes, and you draw on examples going even further back, in terms of this sort of age-old quest by many powerful men to replace women, almost.
Susie Alegre:Absolutely. There's a group of researchers in the Netherlands who've talked about this as looking at technology through the Pygmalion lens. Pygmalion, in the ancient Greek myth, found real women so abhorrent that he carved his ideal woman and fell in love with his own creation, essentially so that he could have children and a love life, a sex life, without actually having to deal with the messy and horrific reality of real women. And that is something that we see in the way that women, and the female form, are portrayed through modern technology. We see it through voice, where things like Alexa, servants, if you like, are given women's voices. And we've seen it with the controversy around the reveal earlier this year of the latest generation of ChatGPT, which sounded remarkably like Scarlett Johansson in the film Her, despite Scarlett Johansson quite vocally objecting to her voice being used in that kind of context. It's very much about the idea that you can have a sexy interaction with female tech, if you like, without actually having to deal with real women and everything that they potentially represent.
James Hodgson:Yes, as mentioned, it's almost ubiquitous that these applications have a female voice, which is, as you point out in the book, just one of those historical things. And again, AI continues to reflect the dominant biases, Western biases, in all of this training data, because it is trained on what's on the internet, and that's obviously been shaped by what's gone before it. This question of bias is central, I think, to most of the risks highlighted in the book. What steps do you think can be taken now to make these systems more inclusive and equitable?
Susie Alegre:I think it's difficult. There's a lot of narrative about de-biasing the data, all these kinds of questions. I think that slightly misses the point in many ways, because de-biasing the data doesn't then address issues of moral rights and information. It doesn't address issues of privacy. It doesn't even really address those fundamental dignity questions. So I'm not sure that de-biasing the data necessarily gets you as far as is often billed. I think it's a useful question, but the bigger question is whether there are uses for which AI, or specific types of tech, are just not suited. So, for example, filtering CVs. We've seen this problem of bias in technology that's been developed to filter the massive numbers of CVs, which are now ironically being generated by generative AI: CVs full of rubbish being filtered by biased systems. I think there's a real question about what you should be using technology for, and in what circumstances, rather than necessarily looking at de-biasing the technology or de-biasing the data.
Susie Alegre:But I think one way of de-biasing the trajectory of technology in our societies is to ensure diversity of thought and diversity of representation in the tech sphere, among the people working in that area.
Susie Alegre:One example: I remember talking to a group of people, all European professional men, about digital ID, and they were saying, no, but digital ID is fine because you can make it completely secure, so that you only get access to it with your consent. And I said, but what does consent actually mean if you are, for example, a 13-year-old girl in a refugee camp in East Africa? What exactly does consent mean if your dinner or your safety depends on this question? And that is something where, when we're sitting in the comfort of our suburban homes, enjoying the ability to talk online on a podcast, we don't really understand what consent might mean for the large numbers of people who might be affected by it. I think that's why diversity is really key in the tech industry: to flag things that may be really obvious to 90% of the population while those actually developing the technology are completely oblivious to that kind of reality.
James Hodgson:So do you see, then, that perhaps the technology is highlighting, exaggerating and bringing to attention the biases and blind spots that we already have in society, and so potentially it should encourage people not only to course correct the AI, but also to course correct other systems and technologies in our society?
Susie Alegre:Yeah, I mean, it is effectively an automated system of bias. It's taking what exists in society and amplifying it, and it also reflects questions of control, questions of power balances and power imbalances in our society, and effectively allows this sort of corporate capture, increasingly, as I mentioned, in the workplace, but also in our personal lives. It's taking the ability to engage with, capture and manipulate our inner lives and industrialising that.
James Hodgson:Yeah, absolutely, and you touch in the book on several of the core areas where this is already having an impact and where it's likely to have an impact: relationships, care, even theological beliefs. I'm particularly interested in the impact on the future of work and employment. This is obviously one of the big themes that has been publicised since, effectively, ChatGPT came out and this became a huge topic: is this going to replace all of our jobs? In the book you draw out a more nuanced argument that, beyond job automation, there are deeper risks posed by AI when it comes to our autonomy in the workplace. Could you draw out some of your views on how these technologies are going to affect workers' rights and, again, what we can do to protect against exploitation and bias?
Susie Alegre:I mean, there are several ways. If you work today, you will have been affected, or certainly have been involved in discussions around automation and AI in your workplace in some form or another, and I think it raises a lot of concerns. There's one fundamental question it raises for me, which is always to ask: well, what is the point of the task that the AI is supposed to replace? The people I've heard talking most positively about AI, and when I say AI in this context I mean generative AI, are people talking about using it in their work to improve productivity. A lot of the use cases I've heard are people saying, well, it helps me to send out the kind of tedious emails that I have to do in a polite way, or whatever. For me, that raises the big question of, well, what was the point of those emails really? Were they necessary? So, looking at automation in the workplace in that kind of context, wouldn't it be better all round to look at which tasks in the workplace are actually completely unnecessary and superfluous, that have become part of the daily grind of work? I've heard that from people like teachers and civil servants, in spaces where suddenly administrative tasks have overwhelmed the core tasks, if you like, of the job.
Susie Alegre:In the legal space, and I'm a lawyer, one of the big things has been talk about analysing the law. That, I think, is hugely problematic, not least because the analysis is generally incorrect but can be delivered in a tone that is extremely convincing. We saw last year, particularly in the US, several cases of lawyers using ChatGPT to fill in their legal knowledge to produce submissions to court, which included entirely made-up case law. And when questioned on it, instead of saying, really sorry, I got ChatGPT to do my homework, they doubled down and went back to ChatGPT saying, oh, I can't find this case, is it definitely real? And ChatGPT said, oh yes, and here's an excerpt. They then went back to court with completely made-up excerpts of completely made-up cases. In the case that I refer to in the book, the lawyers involved were fined and severely reprimanded. In a later case, I saw lawyers being suspended.
Susie Alegre:I think there are really big questions about the reliability of generative AI in general. Researchers found that even generative AI trained specifically on federal US cases, so not on the totality of the internet but on a narrow scope of legal information, still got the analysis wrong 60 to 80 percent of the time. Now, lawyers are fallible as well, but if you were getting your analysis of the law wrong 60 to 80 percent of the time, you might well find your professional indemnity insurance going up severely as a human lawyer. And I think this impression that you can outsource to technology tasks which are essentially about human analysis, and which are complex and nuanced, is quite dangerous. That then transfers across into, as you mentioned, the care sector, where, in East Asia, and particularly in Japan, they've put a lot of money into developing care robots and the ways automation might support care work, given a real demographic crisis and an ageing population. But what researchers have found, looking at that, is that in care settings where robots have been bought and deployed, more often than not they land up being stuffed in a cupboard and not used, because essentially the human carers become carers for the robot rather than carers for the people. So rather than improving their workload and making it easier, it just makes the work more boring and more gruelling, and takes out that human aspect which, while really challenging, is also something of a calling and something that people actually enjoy, that kind of human connection. So it's really reducing everything to the drudge work, and that's the same.
Susie Alegre:Going back to the legal example, I've seen a lot of articles saying, oh yes, you can use it, it's great for productivity; of course, you then have to check it. But checking it means you have to look up everything again, and that's really boring compared to doing the analysis yourself. So it's effectively changing the worker into someone who is servicing the technology and checking its homework, rather than doing the interesting, engaging, intellectual and emotional work themselves.
James Hodgson:Fascinating. Yeah, there's so much in there. I think there's possibly a whole other podcast, or perhaps another whole book, Susie, on the cult of productivity. I've heard it referred to as toxic productivity, this idea that the aim of the worker should be to be as productive as possible when, as you say, it many times just creates more work, and sometimes it's a case of asking what the purpose of the action was.
Susie Alegre:My favourite, absolutely pointless suggestion from a technologist, I can't remember which company it was, was a CEO saying it's great because soon you'll be able to send your AI avatar to meetings with other people's AI avatars for virtual meetings. And I'm like, why not just cancel the meetings? What is the value of a meeting between two AI avatars? It really flags the fact that maybe you just didn't need that meeting at all.
James Hodgson:Yeah, it definitely highlights that a lot of corporate communications can probably be done in a chat, or not at all. And I have heard of people who attend three meetings at a time, because they send a different AI note-taking app to each one and then just review them afterwards.
Susie Alegre:Allegedly reviews them afterwards.
James Hodgson:Quite. Yeah, it was the section on care that I think surprised and, I guess, saddened me the most, because one of the working hypotheses I had about AI automating a lot of these usually office-based tasks was that perhaps there'd be a shift in emphasis in the workforce towards the interpersonal, that the more valued skills would be the interpersonal and, as you say, the analytical and creative, and that AI would just automate the repetitive tasks, the things which don't require too much creative thinking, empathy and understanding. So to learn that it's being utilised in the care sector as well was, I think, quite sad. I can't see how it really helps move that sector forward.
Susie Alegre:No, I think there probably are opportunities for automation in the care setting. They're just not the ones that are being developed. So, for example, helping to lift people who have mobility issues: that is dangerous, back-breaking work. It is really difficult, and there's a whole debate about needing automated or mechanical assistance for some of those tasks. But instead of developing a robot with a smiley bear face, why are we not looking at, and maybe some people are, things like exoskeletons for care workers to help them do the work themselves? And I think that is the problem you'll see: a lot of the technology is about replacing people rather than enhancing the human ability to do the work, or making it safer for the humans to do the work. It's about replacement.
James Hodgson:Yes, which again is in contrast with this idea of productivity. You're not being more productive if the human's just being replaced. And, interestingly, on the point about the inaccuracies and the confidence: usually when I speak to people in professional services industries, what they warn is that they worry about the pipeline of staff, because the feedback is often that the senior people can't be replaced. Potentially, where you would have juniors or mid-level people drafting advice or reports or doing some analytics, you can now have that done in a matter of seconds by an AI, but it will always need to be reviewed by someone who's been in the industry long enough to know the cases and check what's true and what's not, because, as you say, it speaks with so much confidence regardless of whether it's completely fabricated. So I think what you draw out in the book probably hedges against that and says you are still going to need that pipeline of people who genuinely understand what they're writing.
Susie Alegre:We are, and I think one of the challenges, I mean, I was speaking to someone from a national administration in the UK who raised the big question of professional qualifications and the real challenge of students using generative AI to pass their exams, for example, and then landing up with a whole cohort of people who are really not safe to be, you know, in the medical profession, the legal profession, whatever it is.
Susie Alegre:And so I think there's a real danger as well at that lower level, of people being de-skilled or not acquiring the skills properly in the first place.
Susie Alegre:And there's some interesting research that's starting to come out, and obviously widely available generative AI is a really quite new phenomenon, looking at the impact of using generative AI on students' ability to pass exams or to deliver their assignments, and finding that the use of generative AI massively boosted their ability, but that when students had used generative AI and it was then taken away from them, their ability dropped significantly below their original baseline. So the reliance on generative AI is effectively removing their innate ability. And it's something that we've seen already: there's quite a lot of research on the impact of GPS, for example, on brain formation and our innate sense of direction. If you use GPS very heavily, regardless of what kind of sense of direction you started off with, that sense is likely to be reduced, which means that you're then entirely reliant on the technology, because you take the technology away and you're lost.
James Hodgson:It's so true. I've had exactly that experience trying to find my way through London.
Susie Alegre:I try very hard not to use it.
James Hodgson:It's slightly off topic, but do you think, then, we're potentially heading towards a time when so much of what we see online will be AI generated that maybe future generations will just view the internet as a majority fictional world, where you can't trust the information that's online at all? And perhaps, in order to pass exams, or to show you understand something, you'll need to be able to actually verbally explain it, and we'll go back to a culture of dialogue rather than the written essay?
Susie Alegre:I hope so. I think that there's a lot in that question. On the internet, I do have the impression that we're coming to a tipping point for the internet and social media as we know them, not least because, apart from information being unreliable, which it has been for a long time but is now just supercharged, mass automated unreliability, it's also increasingly boring, because it's automatically generated content which people ultimately don't really want and don't really engage with, and it's being generated to make money. I saw, I can't remember which channel it was on, a programme on TV about people from different professions looking at the future of their professions, and one of them was a social media influencer. When asked whether his job could be replaced by AI, he rather bizarrely assumed that as a social media influencer he couldn't be replaced, because it was about the personality. And you're thinking, I think you might want to look into that a bit more deeply, because the assessment from the AI experts at the Alan Turing Institute was that that was probably the job most replaceable by AI, because we're already seeing AI supermodels and AI influencer accounts. But ultimately the content just becomes boring because it's not about genuine engagement. People want other people. AI might be interesting for five minutes, but it soon wears off.
Susie Alegre:So I think what we're going to see is a real shift in what the online environment is, what it means, and where the money comes from, because certainly I have a sense, I don't know about you, that people are dropping off social media platforms quite significantly because it's boring, and that will really change, I think, the face of the business models on the internet and the internet that we have. Going back to the other part of your question: what's the answer, particularly for students or people wanting to gain knowledge, which increasingly you can't get on the internet or even in scholarly journals? This year, Wiley decommissioned, I can't remember exactly how many, it was 16 or 19, scientific journals, because they found that they were just full of AI-generated rubbish, and those are academic journals. So where do you get the information?
Susie Alegre:I think what you're saying about dialogue is really key, and certainly from an educational perspective I think we're going to see schools and universities being split, probably split on resources: half of them desperately following the productivity train and trying to catch up with automation, which will be slightly doomed and probably won't really benefit the students, and those that are more heavily resourced, sort of private schools and universities with massive funding, going into a more elite, bespoke, dialogue-based method of teaching, understanding and analysing. So I think we'll land up with two tiers of education, the automated education tier and the bespoke dialogue education tier, and I think that is really problematic and something that policymakers certainly need to be thinking about very carefully.
James Hodgson:And again we're back to biases and divisions and power structures in society as to who has access to information.
Susie Alegre:Yeah, completely. It's ultra-processed information like ultra-processed food.
James Hodgson:This is organic, I think. It's a great way to look at it, a great analogy. Coming back to the point about the law firms using AI and doubling down on case law that had been fabricated, and you highlight many other really fascinating and shocking examples in the book: how should we think about accountability when AI causes harm or leads to malpractice, I guess you could call it? How do you see legal systems evolving to ensure accountability? Does it sit with the end user, or do you see more accountability falling on the providers of these platforms?
Susie Alegre:It's a complicated question. At the moment, again looking at the power structures, the big AI companies, the big providers, are in many cases in a more or less monopoly position, able to dictate their own terms and push liability onto the end users. That's certainly how it seems to be evolving on a contractual basis, but how it will evolve in terms of the law of tort, and how these things develop legally, I think is going to be extremely interesting. When you look at the example of LawTech, I think it's just a matter of time before a law firm finds itself professionally liable for whatever's happened as a result of relying on the tech it bought in to maximise its productivity, and that law firm then turns on the technology firm in order to mitigate, or to push liability further upstream. So I think it is just a matter of time. It's a very complicated area, and I think that liability on tech companies, on the developers, designers, purveyors and salespeople of tech, will change the direction of travel of technology.
Susie Alegre:I think liability really focuses minds, particularly in cases where you're talking about criminal liability as opposed to simply paying fines. One thing is regulatory liability; another is civil liability for problems at scale. I think we're likely to see big class actions happening as well, as technology is deployed across our societies. I think we are going to see liability finally coming back onto those who are designing, developing and selling the technology, but it will take time, and I think that will really focus their minds on the direction that their technology should take.
James Hodgson:Yeah, and from a regulatory standpoint, how do you see the environment shaping up? We've had the EU AI Act come into force this year. Do you see that as adequate, or a good start, in terms of addressing some of these risks?
Susie Alegre:The EU AI Act is a massive piece of legislation. What it's actually going to mean in practice is, at the moment, slightly anybody's guess, given the risk-based framework that it has. I think it's going to be very interesting to see how those kinds of risk assessments are done, because if you ask somebody, does your product pose a high risk to human rights and democracy, frankly, if you're not an expert on human rights and democracy, you probably don't know. And going back to the point I was making earlier: if you're living your comfortable life where nothing bad has ever happened, you're unlikely to really understand what could possibly go wrong with your technology, and therefore you're probably not best placed to be doing that kind of risk assessment. So how those risk assessments develop, I think, is going to be interesting.
Susie Alegre:Another big question is where it's particularly. Tech used in the public sector is a really big focus, but then increasingly private sector tech is being used in the public sector is a really big focus, but then increasingly private sector tech is being used in the public sector. So if your technology is private sector technology but that's going to be used in a public sector application, then you're going to see higher levels of requirements from the EUAI Act. I think it will have a global impact, as GDPR has. What exactly that impact will be remains to be seen and, as with any regulation, the real nub of it comes to the funding and enforcement powers of regulators and regulators' ability to really deliver. You can have the best regulatory framework in the world, and if you don't fund the regulator, then nothing's going to change, yeah, and going forward.
James Hodgson:what would you like to see? Of course, that the regulation has some teeth, but also, are there any recommendations that you can make in terms of balancing the risks and the rewards of ai? How can we utilize this technology whilst protecting vulnerable groups and ensuring that we're mitigating these many risks that we've discussed today?
James Hodgson:I know that's probably a huge question, but are there some key points that you think we need to consider?
Susie Alegre:It is a huge question, but I think it's worth looking at it from a UK perspective, and there are similar bodies, national human rights institutions, in countries around the world.
Susie Alegre:But from the UK perspective, for example, we have an Equality and Human Rights Commission, which was built up in the early 2000s and is the regulator for the Equality Act and the Human Rights Act, both of which are hard-law legal frameworks in the UK. They apply to analogue activities, those of companies, businesses and governments, but also to all of those activities when they are mediated through technology.
Susie Alegre:So, from a UK perspective, I would say make sure that the Equality and Human Rights Commission is equipped with both the regulatory chiefs, but also, really vitally, the funding to be able to take on relevant cases under the Equality Act, under the Human Rights Act, to focus on that aspect of things, rather than looking at it from a data perspective through the ICO or looking at it from an Ofcom perspective Actually get the Equality and Human Rights Regulator in a place where they can effectively engage in these discussions and bring their expertise and that legal perspective. So that would be, I think, probably my biggest recommendation, rather than creating something new with a bunch of people who are all very excited about AI but actually don't have the grounding, don't have decades of experience looking at issues of inequality, looking at issues of human rights abuses, and for that you need the equality and human rights regulator to be really actively involved.
James Hodgson:Absolutely. Susie, you've been very generous with your time. Just before we wrap up, I want to look ahead to the future. In researching this book, were there any positive applications of AI that you found, particularly around these topics of promoting human rights, or that align with humanistic values? Have you found any areas in which it's been used for good?
Susie Alegre:It's very difficult, because whenever you look at one of these areas as a lawyer, you always have to say "yes, but" or "it depends". It's always qualified. One of the things I found writing the book, and in responses to the book, is that I've heard a couple of times people say, "Oh, but you don't talk about all the wonderful positives of AI", and that's because I'm not selling AI; it's not my job to sell AI applications. One of the challenges when talking about AI is that it's a million different things. The chatbot that prevents you from actually speaking to anybody at your bank is a completely different thing to the image-recognition AI that identifies early stages of cancer. They're just not the same thing. And there's a real push in the discourse, particularly from people who want to develop AI, to say you can't talk about the negatives without the positives, but actually you're talking about very different things. So there will be really crucial developments in AI, particularly in the medical sphere, but it's not going to be a chatbot doctor. I think that is the key: it's no good just saying "AI and medicine, fantastic". What AI, in what areas of medicine? What is it doing? What is its interaction with doctors? I don't really have a glowing example of "AI will save us all", and the same goes for AI in the legal space and in analysing data and information. Getting AI to write your legal submissions? I would suggest maybe think twice about that. But getting AI to filter thousands and thousands of documents for keywords? Maybe that's a good way of using AI in a legal context. So I think it's always really context-specific, and that is the key.
Susie Alegre:If you're thinking about using AI in your work, ask what exactly is the point and is this the right tool for that. Not just because you've got AI stuck on the side means that it's going to be good for you. What exactly is it for and how exactly is it helping? So I think there are undoubtedly places where specific kinds of AI will radically improve human life. It may as well be things like AI analysis of impacts of climate change or AI analysis, for example, in areas where there have been massive human rights abuses. Analysing the images, for example, could be extremely useful. But you can't just say AI is great for medicine, because it depends what AI, what medicine, what purposes. I think that's not a straightforward answer.
James Hodgson:Absolutely, and hopefully it calls for another book, and hopefully another appearance on the podcast as well. I can only very highly recommend both Freedom to Think and Human Rights, Robot Wrongs. It's interesting that you mentioned there could potentially be some positives around climate change, because another area you touch on in the book is the environmental impact of AI, of these hugely powerful systems. Even if they were being used for good, that creates a whole new set of problems, doesn't it?
Susie Alegre:Yeah, if they're being used for everything, then they're going to just exacerbate climate change.
James Hodgson:Yes, it comes down to the application, the use, and, I guess, comes back to the human intentions behind what you're using it for.
Susie Alegre:So that's what we should encourage, and I think the bottom line is we don't all need to be using AI for everything; if we are, it's mutually assured destruction.
James Hodgson:Yes, and, as you say, it removes the human from the whole process, and then what's the point, really? So, before we go, I'd just like to ask our standard closing question: what's something you've changed your mind about recently?
Susie Alegre:One thing I changed my mind about while writing the book is actually a positive use of AI, which I mentioned in the acknowledgements: I used dictation software at a certain point in writing the book because I had a frozen shoulder, no doubt through RSI from too much typing. So I shifted to handwriting parts of the book and then dictating them into the Word document, and I was really impressed. One thing I'd been worried about was that I can't just speak straight into writing; the way we speak is not the way we write. I found with my first book that there was a big difference in the quality, or the type, of writing I can do when I'm writing by hand with a fountain pen versus typing, and so I found it really shifted the way my mind worked, and the outputs.
James Hodgson:Fantastic. We're grateful for your work and very much looking forward to hearing what comes next. Susie Alegre, thank you so much for joining us on Humanism Now.