
Sunny Banana
I am a school chaplain, and the content is intended to encourage curiosity about faith and its impact on day-to-day life.
The Sunny Banana is a play on the Zulu greeting Sanibonani, meaning "I see you."
As tech wrenches us from real life, we are not seeing each other. The Greek root of 'idea' means to see. It is as if we have lost the idea of what it means to be human: social, communal, relational. The Old English word for 'to see', 'seon', carries connotations of understanding.
Let's start seeing each other again, listening, respecting, and understanding each other and ourselves. After all, we are people through other people.
AI Ethics: The Human Behind the Code
Speaker 1:So here we are, guys. We are live. This is the podcast on AI and ethics, for the AI week that we are celebrating and exploring this week. I have two wonderful guests on the program today: Boris Bland and Evie Moore. Thank you, guys, for being here today. Thank you, sir, thank you. Today we're going to explore AI and ethics, so let's not beat around the bush and go straight for the question that is perhaps on everyone's mind: is it a good thing or is it a bad thing?
Speaker 2:Well, I think obviously it has lots of positive uses, but then also, with what we've seen recently, like supermarkets using it to hire their employees and only suggesting male employees, there's its bias and its use for bad rather than good. There's a kind of contrast between the two. So I'm sort of conflicted on whether it's good or whether it's bad.
Speaker 1:So you're on the fence. At the moment there's some good, there's some bad, and you're not quite sure if it's a net positive or a net negative. Evie?
Speaker 3:I have a very strong opinion that AI in its general sense, as a programming tool, is useful. But who is programming it and who is creating it will affect how that AI is able to learn and operate as artificial intelligence. It's the same even with people: whoever you are created by, or raised by, and the environment around you will influence your decision-making. And that's the same with AI, because whoever has programmed it and whoever is giving it commands is going to control, or at least influence, how it will later decide or suggest when it is asked questions.
Speaker 1:That's an interesting concept, isn't it? So you said earlier that how you are created or nurtured as a human being will determine how you are in life, and this AI thing is also a created thing, isn't it? It's not something that just pops out of nowhere. It comes from somewhere, and what you're saying is that the user or the creator will determine how good or how bad it is. I guess any technology is like that. Think about a knife, for example. A knife is a pretty useful thing.
Speaker 1:I like to butter my bread, and that's really nice, but a knife can also be used for very negative purposes, as we know and as we hear in the news. So it's the user that counts. What if I ask you about ethics? Ethics is in the realm of morality, honesty, fairness, equality, integrity, and all these things, and you guys, as students, are in this world now, right in the middle of it: AI and exams, AI and work. How does it affect you? Are there pressures, and what are the temptations or the pitfalls, and therefore also the opportunities? Maybe talk about the pitfalls of AI when it comes to ethics, because of honesty and integrity in your work and what you do.
Speaker 2:Well, I think definitely, with how AI works, just taking machine learning from online information sources definitely makes you look at it from an ethical standpoint, whether it's good or bad. And then, as you say, going on to things like writing and, you know, art, that kind of thing: is it really someone's work if they've got something to do it for them? I think it's definitely very interesting to look at it from that perspective.
Speaker 1:Yeah. Do you think it gets the truth out of us? No? What do you mean by that, Evie?
Speaker 3:because AI can be used honestly, but AI can also be used dishonestly and unethically.
Speaker 1:Yeah.
Speaker 3:I mean, if people are using it, for instance, to pass an exam or to get their homework done, that's unethical, because it's being used for a dishonest pursuit. And then it's not exactly the AI that is ethically wrong. If we are using AI as a tool, the question is how we use it in an ethical way, rather than whether the tool itself is an ethical thing. So it completely comes down to: are we willing to use this ethically, or are we going to use it unethically, even though we know it could have a negative outcome when we decide to use it?
Speaker 1:I guess the temptation for anyone working, really, is the shortcut. Against the honesty and the integrity behind our work, there's a big temptation: with the shortcut, we can do this stuff in a fraction of the time it takes compared to sitting down and writing something out. Is it having an effect, therefore? Could that affect how we think, and the future of thinking?
Speaker 2:I think, definitely, with these new AI checkers and that kind of thing, it takes away from people just thinking, oh, I can use this as a shortcut to forge my work instead of doing it myself, because it kind of blocks that. You can't just go out and get AI to do everything for you, because it will be clear that you haven't done it yourself. But then again, while still sitting on the fence, there are ways people can, you know, get around that, which I think is definitely dangerous in the sense that it will have an effect on people's productivity in the long run.
Speaker 1:Yeah. One concern, if I may, is running out of patience, because we're so used to, oh well, Google it, let's get this information now, and then our patience runs out with each other and with our relationships in real life. So, moving on: what do you feel about the way AI has influenced public opinion, elections, and so on, not only in the UK and the US but also in the wars going on? How is AI able to persuade opinion?
Speaker 3:I think it's probably through the spreading of disinformation.
Speaker 2:Yeah.
Speaker 3:You know, because even if there really isn't any disinformation, it can be used to point the finger: this is not real because AI did it, or this election result is false because AI did it, or someone changed it using AI. So it can very easily become a weapon in politics, where if someone decides, I do not agree with the will of the majority, then they can accuse AI, or use AI, to shift that result or take more control over it.
Speaker 1:Yeah.
Speaker 1:Thank you, Evie. Boris, do you want to comment there?
Speaker 2:Yeah. In the US, especially in the election last year, there was a lot from both sides, where people campaigning for either party were putting statistics or government documents into AI. And I think the danger with that is, even though it's not misinformation as such, it's taking very small parts of different things, without context, which then almost is misinformation, because it's essentially, as Evie said, weaponising that information for their advantage.
Speaker 1:Yeah, and I come back to that patience idea, of watching a whole speech of someone. You know, did you hear what X and X said? You heard what that person said, but people are getting these snippet videos of what they said, and it's inflammatory and taken out of context. It's like a book: you don't know what a book is until you've read the whole book, right? If you've just read, like, the first chapter, you can have an opinion on the book, but you can't really have a full opinion on the book. So it's just that: snippets of information in AI.
Speaker 1:What are your opinions on asking Siri or Alexa, or whatever these things are, about ethical matters, like what should I do about my relationship? You ask Siri a question: Siri, is it right to eat meat? These ethical questions. Can you comment on what Siri, Alexa, and all these things should give back? Even when you type into ChatGPT, or whatever it is, should I eat meat or not: what is its role there?
Speaker 2:I personally don't think any of these AIs, or things that use machine learning to respond, should be allowed to comment on those ethical situations, as you were saying, because they're obviously not human. They don't know how humans work, essentially. So I don't think something should be able to comment when it doesn't understand the emotional backlash that may have, or the way a human would respond to it.
Speaker 1:Are you saying that it should say, Boris, go speak to another human about this? Yes, because I don't have an opinion on it, because I don't eat meat. I can't even eat.
Speaker 2:Yeah, we know I. I don't have a relationship, so you know, can't comment on a relationship. It shouldn't be able to tell you something. Yeah, relationship, it shouldn't be able to tell you something, yeah, when it can't fully understand it. Yeah, because it may just be giving you the best.
Speaker 1:Yeah, logical opinion, right interesting, logical, yeah, that's interesting. But they're saying there that ai is becoming more and more, um, human, like, as it were, more like thinking for itself. That's the whole, that's the big idea that one day it's going to think for itself, you know, and we think of those movies. What's that? What's that? What's the world's anyway, irobot, and they start thinking for themselves and doing having a mind of them, their own. Evie, did you want to comment on the Siri Alexa advice.
Speaker 3:Well, I think we shouldn't use those devices as a sort of moral crutch, because it's entirely up to the person themselves whether they decide to make a moral decision or not. If you're going to rely on artificial intelligence as a way to solve all your moral quandaries, you're eventually going to become unable to do that yourself.
Speaker 1:Yeah.
Speaker 3:Especially if you are in a situation where you can't rely on AI. Yeah, so you know it really shouldn't be done, and you shouldn't also expect an AI to give you their own sort of opinion because they don't have one.
Speaker 3:I mean, everything they have is based off their exposure to the internet, their coding, the environment, and the people who are implementing things into them. They are a learning body, but they also have to have a very open, wide range of information coming in to allow them to answer that question. So if instead you asked the AI, what are the positives and the negatives of becoming a vegan, you might get a more nuanced response and a more balanced argument, because it would have taken information from both sides of that argument and distilled it into a simpler version. But that isn't its own opinion, because it doesn't have one.
Speaker 1:Yeah, that's very good. I like that. It's what we ask of AI that matters: how we ask the question and the kind of information we want to get out. So, in summary, let's look at what we've covered. One: we don't think AI is, in itself, a bad thing or a good thing; it depends on the user. That links to the point I spoke about just now, about how we ask those questions and how we use it. If an unethical force or being is using AI, well, the product and outcome are going to be unethical; but if we approach it with integrity and honesty, it's a very powerful tool for progression and development. Thank you, Evie, for that last point; I like it so much. And it's the kind of questions, as Boris highlighted, that matter: you don't go to AI to understand how to be human and how to be ethical. I'm reading a book with my daughter, who's three years old now. It's called People Need People.
Speaker 1:Any last comments for our listeners out there before we wrap it up?
Speaker 1:Evie?
Speaker 3:I think we should stop viewing AI as almost an adult. I think we should view AI as a child.
Speaker 1:Right.
Speaker 3:In its infancy, because the technology itself is so very young.
Speaker 1:Yeah.
Speaker 3:And the fact that it's able to learn sort of makes it more childlike. As you would with a child, you have to be very careful what information you give them, what questions you ask of them and what sort of expectations you put on it.
Speaker 1:Yeah.
Speaker 3:Because if you start expecting AI to become, in two years, what we think of when we think of sci-fi dystopian movies, it won't happen that quickly. It's going to take so much more time for the technology itself to develop to a level where, you know, it can be used constantly throughout society without the downsides we already know it has.
Speaker 1:And do you want that to be the best of human wisdom to be passed on to this intelligence?
Speaker 3:Yeah, we need to be very careful how we decide to use this. It should be the positives, instead of the negatives, that we try to foster within it. You know, people are using AI in the wrong way, but you can use AI in many beneficial ways: AI, do my taxes; AI, give me the positives and negatives of this argument from both sides, from the internet. And a lot of the time it's just really quite silly stuff, like, AI, make me a picture of this. That's all relatively harmless, and it should stay that way, because if we start to use AI for more serious things, we strip away the human morality and just have a complete computer program. It will not feel.
Speaker 1:Brilliant. I like that, to see it as raising a child. Thank you, that's brilliant, Evie. Boris, a last comment for our listeners?
Speaker 2:I think you should just always remember that whatever it spits out when you give it a prompt is never going to be 100% true. Because, well, there are experiments online that you can go and watch, where people tell an AI they're using, enough times, that, for instance, two plus two is three, and it will then start to believe that and give it back to you. So I think you should always remember it can be twisted and weaponised, and although, yes, there is a lot of good in it, you should be wary of the bad.
Speaker 1:Brilliant, thank you. So we could make it think what we want it to think, and in the same way, AI is only going to be as good as it can be if we give it the good, the best that humanity has to offer. Thank you, guys. I really enjoyed that chat. And thank you, Howard Howe, for the wonderful food that we've been enjoying during this podcast. I hope you all out there have a wonderful AI week. Remember: keep it human, keep it good, and people need people.