CTIO 101 Podcast
The problem with AI
Getting the bare-bones components to do, in quotes, "some AI" is really easy. But doing it well, and doing it in a way that won't lead to unintended consequences, won't lead to real harm, actually requires pretty much a rethink of how it's being approached. And what we may be seeing is that the pace of this particular technology is really outstripping how it can, and how it should, be properly consumed.
Malcom:CTIO 101. Business Technology, Simplified and Shared. Sponsored by Fairmont Recruitment, hiring technology professionals across the UK and Europe. Don't forget to subscribe!
Marie:You can't just take humans out of the loop in these systems. I've just published on this, because when you do, you lose the entire context of the overall system and what it is that you're trying to work within. And even if you've got a basic model, it's still working within real life, society and an environment.
Jon:I'm wondering whether it suffers a little bit from what digital transformation suffers from. It's a word that gets bandied about, but people are probably thinking of different meanings, different definitions. But before we get into any of that, Marie, what's drawn you to AI?
Marie:I started off in humanities, and I wanted to do maths because I wanted a challenge at university, but I got told, you know, you can't do maths because you only got a C at GCSE. I never wanted to let that stop me. I said, well, whatever, went and did some managerial decision-making in my business degree, and then just moved to a maths degree.
Jon:Marie, I just wanted to say, and I'm pretty certain this is true, I heard it on the BBC show The Infinite Monkey Cage. I don't know if you ever listen to that, but Professor Brian Cox got a D, I think, in maths at A level. And he described it as something he needed to unlock to understand physics. I just thought I'd mention that because, to me, it seems like you're in pretty good company. Albert Einstein, I think, academically, early on, didn't have the traditional markers either. He was famously a patent clerk, wasn't he, before his great breakthrough. So Marie, I'm putting you on a bit of a platform there. I'm putting you up with Brian Cox and actually Albert Einstein. I know it's only 11 minutes past nine in the morning, and it might be a bit early for such greats, but I couldn't resist it, just the connections I was making. Apologies. You were saying that it's grown so quickly, there's been such a rapid move into 'give me some AI' from both the demand and the supply side, that there's a lot of folks still trying to work out which way's up.
Marie:I'm not a standard mathematician by any means. I've done a master's and a degree in maths, but I've also got a master's in philosophy, and I've got my PhD in artificial intelligence. And the reason I've been drawn that way is that, throughout my entire career, I've seen people doing modeling and building these models, but it's gone a bit strange, because we've moved from statistics to machine learning, data science and artificial intelligence really rapidly, without much definition between these fields and between these disciplines. And I've seen people get lost in that and start coding things. People want to get jobs in it, but they're not sure what they need to code; they might need ten million coding languages and fifty years' experience. And what's happened is it's gone a little bit into a wild west, and there are huge amounts of problems coming out of this in terms of societal impacts and loss of money: taxpayers' money, investment money. Ethical funding is a concern at the minute. This is not an easy problem, because it stems all the way from education, to professional bodies not being involved, to people trying to do what they can to get these jobs, but maybe not doing the right thing, or not understanding the ethics or the modeling pipeline. We've lost a lot of the old methodologies in that, and we've moved into a new 'let me just code something and see if it works'. And then we've got the rise of terms such as bias, trustworthiness, fairness. What do they mean? How do we help practitioners ethically model? So my involvement has been that I've been looking at this for over a decade now, and it just seems these problems aren't going away. What I've been doing recently is a more concentrated, focused research program into this, to see what we can actually do to mitigate these effects. And it's interesting what you say about Einstein not being looked at as the traditional mathematician in the first place, because we do tend to write off a lot of people and say, you know, you can't do this, you can't be a specialist in this. I've found that if you want to put your mind to something, you can do it. As somebody that was told they could never do maths and has now got a PhD around that area, I think that if you really want to do it, you should go ahead and do what makes you happy.
Jon:Marie, that's really inspirational. I've got my bucket list: I want to redo my maths O level, then I want to do an A level, and I probably need a bottle of whiskey before I decide to do anything more than that, because I've already done a lot of part-time study, eight years of it, during my professional life. Marie, what is artificial intelligence?
Marie:So it depends who you are. If you are the salesperson for an SME, or somebody that's trying to sell it, it's the most amazing thing: it can think like a human, it can work like a human, replace humans, give you more productivity, optimize things. If you are me, it's an algorithm. It's a bit of coding, and it responds to certain prompts within a certain bounded set of coding. You can only tell an algorithm to do certain things, and every single thing that it does has got to be coded, in depth. And you're not going to spend forever, because you've not got general intelligence, you're not going to spend forever coding every single eventuality for this program. So AI can really only do certain things. And when we say things like 'AI can learn', 'AI can respond to certain things', we're using human terms that actually mislead us into thinking that the AI is kind of like a human. And then we anthropomorphize it and we make it such that we can understand it. But in doing that, we forget that there are risks inherent in technology. There might be coding errors; there might be issues with the way this technology's been developed. And that leads us away from looking at the boring, objective stuff like audits, examination of what's actually happening, and testing the code. My issue with it is: why do we want to create robots that look like us and behave like us? I'm not sure what that gives us, because we've got huge amounts of people in the world, and alongside technology, the internet and social media, we're cutting down our real-life interaction with actual people, but we're trying to invent robots that look like us and can behave like us so that we can interact with them. It really seems a little bit of a mismatch to me as to what we're actually trying to achieve in the future. And I think that's one of the biggest questions that I ask in general. We can do it, of course, but why are we doing it? Should we be doing it? And I think that's a question that's mostly irrelevant in the chase after profit.
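A minimal sketch of Marie's point here, entirely hypothetical code rather than any real product's internals: an algorithm only responds within the bounds its author explicitly coded, and every uncoded eventuality falls through to a default.

```python
# Toy, hypothetical "assistant": every behavior is explicitly coded by hand.
# Nothing here learns or understands; it only matches rules its author wrote.
RESPONSES = {
    "set a timer": "Timer set for ten minutes.",
    "play the radio": "Playing the radio.",
}

def respond(prompt: str) -> str:
    # Only prompts the author anticipated get a useful answer.
    for trigger, reply in RESPONSES.items():
        if trigger in prompt.lower():
            return reply
    # Every uncoded eventuality falls through to a default.
    return "Sorry, I can't do that."

print(respond("Please set a timer for the pizza"))  # coded for: works
print(respond("What should I cook tonight?"))       # not coded for: fails
```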
Jon:The application of AI gives an impression of intelligence, and it certainly can be sold that way as well. So if you've got the salesperson talking about it, and the person who's consuming it doesn't understand anything at all about how it's put together, you have got a bit of a snake-oil scenario emerging. A lot of folks use AI without realizing it as consumers. We consume a lot of products and services that use AI directly, and then there's probably a much bigger subset that use it indirectly. It's used in SEO and search engines, so anyone who uses the internet is using it without realizing it. As a CIO, I'm going to be making decisions about consuming AI-based services, which, for many CIOs, with budgets, the scarcity of engineering knowledge, and just where AI is at the moment, might be the more realistic AI strategy. And then you've got the part which says: actually, I want to produce. I want to write that algorithm. I think I've got my data sorted, I really understand my modeling, and all of those sorts of things. Which is probably quite unrealistic, certainly unrealistic if I was to say that. So, that balancing act between 'do I consume it?' and 'should I produce it?': what are your thoughts on that? Let's go down the consumer route. What are the sorts of things I should be thinking about if I'm consuming services that I know have AI in, but I've just almost had that as an extra tick box saying, great, I can tell the Board we're now using artificial intelligence?
Marie:This is a really interesting one, because as a user using any kind of coding-based application, or things like Google and Alexa and whatnot, it's really difficult to understand the risks inherent in them, because they're not going to tell you upfront, half the time, how your data's used, what data they're taking and how they're taking it. And then there's the shadow data, which they can just absorb while the environment's going on in the background. If you've ever had something where you've looked up a pizza oven, or had a conversation about pizza, and then you've logged on to Amazon and it's tried to sell you a pizza oven, and all the adverts on Facebook are then about a pizza oven: that is how quickly our data's shared around the internet from just doing general daily things. And it's got to a point where parents are getting Alexa to read bedtime stories to their kids. So all of this data's getting sucked into these devices, and who's going to go and look at the settings and change them all to something that's more private? Especially because, when you try to do that, sometimes it actually blocks off your access to some of the more functional capability that you'd actually like to use. So it's very difficult as a user to get that balance between: am I maintaining my privacy? Do I want to maintain my privacy? And if I want to do that, how do I do it? And that can also come from the developer side, because actually we can make it a lot easier for people to understand. I mean, there's the recent cookies thing, where you can reject all or accept all. Again, you've got to go through it and click everything; it takes ages. Who's going to do that? So it needs to be made a little bit more transparent for the user. And at an even higher level than that, you've got decisions being made about you as a consumer by the people that are collecting this data. If you can't have a credit card (you know, the Apple credit card debacle), if you can't have a loan, if you can't get your benefits on time, how do you know what decision got made? How do you know how it got made? Can you speak to somebody that can tell you how that got made? So actually things are becoming very opaque, and you can't always get through to a person. And then it's: well, how do I interact with these systems and understand what these systems are doing, and how they're affecting my life?
Jon:That's a really big one for the legal sector, because there's a regulatory requirement to be able to articulate why you've made a particular decision. So that's quite a specific use case within the legal sector. But one of the things you just said reminded me of something from before I'd really heard the term AI in the popular technology sense. I'd heard the phrase 'Google bubble', and I demonstrated it with two colleagues of mine. They both looked up the word 'scope', and one got telescopes and the other one got gun scopes. And it was really obvious, because of the interests of the two individuals involved, why that had happened. But, Marie, with all the work that you've done and the academic research, you'll know the value of primary research, and certainly it's something I really value. The bubble will really hold you back on your primary research, because even within the subjects you want to research, it will contain you. People think they're searching the internet. They're not. They're really in a bit of an echo chamber of what the algorithm thinks you are, not even necessarily what you're interested in. I mean, I'm being really super cynical here. I'm sure it's not quite that bad, but just to demonstrate.
Marie:It kind of is.
Jon:Okay. Okay. Okay. Well, just for all the angry comments that we might get: with that in mind, there are some really practical things you can do for primary research. You can use a clean browser session. But people just won't be thinking about this; my mum won't be thinking about this when she's going online. This isn't something that can be used in a big consumer sense. But the discerning CTIO, who might be thinking, 'I just want to do my own research here', would do very well to start with a completely fresh session, and maybe even think about subscribing to some of the research services. Don't worry, this isn't a sponsorship message, just to be clear. But again, you've got to be careful there as well, because some research services obtain their content by searching on the internet. Which is interesting, because doesn't that start to talk about information entropy on a really massive scale? Because if, effectively, what we're doing is recycling content on a global level without putting new content in, that's going to have some form of entropy. I can't believe I said the word 'entropy' at 25 minutes past nine on a Tuesday morning. Clearly that coffee I just had was very strong.
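One hedged sketch of the 'clean browser session' idea Jon mentions, assuming Selenium is installed and a chromedriver is on the PATH (both assumptions, not recommendations). An incognito session only removes cookie-based personalization; IP address and browser fingerprint still leak.

```python
# Sketch: run a search from a fresh, cookie-free browser session.
# Assumes `pip install selenium` and a chromedriver on PATH.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--incognito")  # no stored cookies or history

driver = webdriver.Chrome(options=options)
try:
    # The same query Jon's colleagues tried, now without a profile attached.
    driver.get("https://duckduckgo.com/?q=scope")
    print(driver.title)
finally:
    driver.quit()
```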
Marie:That's a big word.
Jon:It was. Yeah, really. Marie, let's do the massive topic. We can do any topic we like. So that was on the consumer side, and there's a big kind of 'buyer beware' warning, I suppose, Marie, that you've advised there. What about on the production side? Because one of the themes coming through with a lot of my guests is, at one end, 'don't write anything anymore, Jon, because it's all been written, so reuse'. That's not quite entropy, but it's a deliberate 'don't waste your time'. Another theme is: focus on what you want to be famous for and get that right, and then, if you do have to crack open the code, that's actually valid, because it doesn't exist anywhere else. But then metering that, because actually, Jon, the engineers you would need to do that kind of task are already taken. They're already working for Tesla and Google. There's a global shortage of software engineering, et cetera, et cetera. So, with AI being in this Industrial Revolution phase, phase one, and all of that, what are the warnings? If I was going to embark on producing, on making my own algorithm, what would you say to me, Marie? What would be our kind of session where you'd say at the end of it, 'actually, Jon, I think you've got half a chance'? Or you might say, 'actually, Jon, you really need to stop and rethink your approach'.
Marie:Well, big topic. I've just written a paper on this, it's just been published, because it's huge. When you want to start designing a model of any type, and that includes algorithms as well, you need to have some sort of a pipeline to work with, and that doesn't really exist. Secondary education and tertiary education in the UK do not prepare you for ethical modeling. Then you've got professional bodies that don't really do much training on ethical modeling. So at the minute, what I'm doing, off the back of my paper, is trying to look at professional accreditation and software to help people design ethical AI. Because what you've got at the minute is boards that will come in and reverse engineer all your ethics and tell you what you've done wrong, which is great, but do you not want to know what you should be doing at the start? And at the start there's a huge area of conceptual understanding, and it involves numerous specialists and interdisciplinary working. You need sociologists: how will this impact people? Psychologists: how will this impact people? You've got statisticians: how do you collect the data? I did some empirical research on this, and I've got another paper coming out in a little while about it, and practitioners really don't know how to collect data. For people developing algorithms at the minute, working in large companies and small companies, collecting the data is a problem, because they were never taught statistics properly. They did computer science degrees, and they never did the depth of methodology that you need. Then there's looking at how you want to model something and how the model's going to process the data. Again, that's something else that gets skipped over. There's a whole part at the front end that just gets skipped, and what happens is people jump straight into phase two, which is 'let's write some code and see what that does'. And then you write it, and then it gets reverse engineered, tested, looked at: this is not right, what's wrong? I've spent a lot of time taking models from high-level consultancies and looking into the coding that they've done, and some of the issues come down to this: they've made one assumption about the user's requirements that is just inherently wrong, and it's one line of coding that's messed the entire model up. But it's because they weren't engaging with the stakeholders and the users, so they just didn't code it right. And coding is quite difficult, because you're trying to translate real-life requirements into objective, basic language that a computer can understand, and then use it correctly to process a huge amount of data, because we've got big data these days, and to process it correctly to get the required outcome. So all of that works together; it's not like you can skip part of it. And then there's a huge testing section at the end, and then an audit, where you need to look at whether you have considered ethical considerations, but also: does your model do what you said it was going to do? Does it do it correctly? Does it do it ethically? Again, that audit is skipped over. So what you've got is these lumps of code coming out and being sold: here you go, use that. And then what we're finding is that they're causing huge amounts of damage, because the conceptual part up front is just not done correctly.
The thinking through. So I'm trying to work on that at the minute, to put a framework around it to help practitioners in particular to model better, but in a more ethical way, and to underpin their understanding and professionalism of what it is they actually should be doing. Because the overwhelming answer that I've had from hundreds of practitioners of all levels is that they're just not entirely sure what they're doing. And as long as the PR is good, they're fine with that. Because, again, who's going to go and look around 500 working groups and all this legislation to figure out what it is they need to do?
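A rough sketch of the pipeline Marie describes. The stage names below are paraphrased from her description in this episode, not taken from her published framework; the point is only that skipping a stage should be a loud failure rather than a quiet one.

```python
# Hypothetical skeleton of an ethical-modeling pipeline (illustrative only).
PIPELINE = [
    "conceptual design",  # interdisciplinary: sociologists, psychologists...
    "data collection",    # statisticians: sampling, cleaning, provenance
    "model design",       # how the model will process the data, and why
    "implementation",     # only now write the code ("phase two")
    "testing",            # does it do what was specified, correctly?
    "audit",              # does it do it ethically, and is that documented?
]

def sign_off(completed: set[str]) -> None:
    # Jumping straight to "implementation" means earlier stages are missing.
    for stage in PIPELINE:
        if stage not in completed:
            raise RuntimeError(f"stage {stage!r} was never signed off")
        print("signed off:", stage)

sign_off({"implementation"})  # fails immediately, at "conceptual design"
```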
Jon:Why don't we talk a little bit about quantitative research? It's still highly relevant now, but some of the quantitative research techniques, particularly with modeling, are very useful for data, and have also sort of been rebadged as AI. You were talking about how fast everyone's moving to use something to do with AI; there's this blurring of previous approaches. And also on statistics, and someone can pick me up if I've got this wrong, but I understood that not everyone who's naturally attuned to mathematics can easily cross over into statistics. When you're doing stats, there's a slightly different, sideways look. It's a bit of a Marmite topic amongst mathematicians, as I understand it. The mathematicians in the Grange family are doing physics at degree level, and they do maths that just makes my toes curl, but I understood that that's a thing. So the reason I wanted to go into quantitative research is the importance of the model. One of the things that I say for anything at all to do with Agile is that I can almost wave a stick in the air, and whatever project I touch, I'll be able to say they started software development way too early. You know, they created the database, they started. And the thing is, I know I'm covering a lot here, but it does all converge. The thing with Agile is they talk about 'fail fast', but fail fast isn't 'let's just get started and fail', because you can end up right down a deep rabbit hole with that. You've got to have design. Design's really important. So let's start with the model, and the Goldilocks thing about a model, which is: if your model's too complicated, it's unwieldy, you just can't use it, can't feed it. But if it's too simple, it's not really a model; it's a spreadsheet, or whatever. There's that balancing act. And there is, forgive me, I'll change the quote and it'll be really obvious: 'forgive me for the complexity of this model; had I had longer to work on it, it would be a lot simpler'. That kind of quote.
Marie:If they'd thought longer about it, it might be a lot simpler.
Jon:Yeah. There we go.
Marie:Thought about it up front. Yeah.
Jon:Yeah. You don't get folks on LinkedIn saying, 'I found a great data modeling expert'. It just goes straight into the clichéd lingo of AI and data. You don't hear folks talking about that, but it is so, so important. And then, Marie, I suppose the other thing is: you've got this great model, but then we find out it isn't actually anchored in the business technology. You don't have the business technology connection to know that this is actually an extremely important part of operations, and that's why the model matters. So there's a really long chain. It's a very, very ambitious endeavor to put this kind of technology into a business, isn't it?
Marie:I've seen some amazingly complex models done in Excel. I think if you want to make a model, you should be able to make some sort of prototype that's actually quite simple, because if you can understand in your own head what it is that you need to do, and you understand the steps, and you understand clearly what you're trying to achieve, it shouldn't be too difficult to write it down on a large piece of paper. That's what I do with my models. And there are hugely complex models, with modules and modules of coding, but I've broken those down and made them extremely clear for the stakeholders and the users, and made sure that the model did what it said it was going to do. And nine times out of ten, you get to phase two and the data collected is not correct, so you need to go back and collect more data, or you've not figured out what data you need, or you've not cleaned the data correctly. And really, all of this is basic statistics. It's like you say: you get to data science and machine learning, and a lot of it is based just on statistics. Yes, there's a coding element to it, and there's a software development element to it now, which maybe there wasn't before, when it was spreadsheets and SPSS or modeling such as that. But that's not to say that we can't do it. It's just that we need to bring the disciplines together, and the disciplines diverging is not really helping us to create fit-for-purpose modeling.
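A minimal sketch of the 'basic statistics first' point, with made-up numbers: look at the data before any model sees it, using checks simple enough to fit on Marie's large piece of paper.

```python
import statistics

# Made-up readings: one missing value, one suspicious outlier.
data = [12.1, 11.8, None, 12.4, 55.0, 12.0]

# Phase one: inspect the data before modeling anything.
clean = [x for x in data if x is not None]
print("missing values:", len(data) - len(clean))

# A robust paper-and-pencil outlier check: distance from the median,
# scaled by the median absolute deviation (MAD).
median = statistics.median(clean)
mad = statistics.median(abs(x - median) for x in clean)
suspects = [x for x in clean if abs(x - median) > 5 * mad]
print("suspect values:", suspects)  # flags 55.0 before a model ever sees it
```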
Jon:Then you get, you know, 'deep learning' comes out as a phrase. And as I understand it, just to be clear, and it's probably obvious, I've not implemented deep learning. But it's really interesting, Marie, it's back to the sort of image of humans: it's taking a neural network. A neural network, I believe, is modeled on how they think the brain works, and on being able to recognize increasingly complex patterns. That's broadly what they say deep learning is based upon. But...
Marie:It's a lovely sales word.
Jon:It is, isn't it?
Marie:'Deep learning.' It sounds so good.
Jon:Yeah. Well, I always say, when you look at a business technology strategy: if there's anything in it where, if you were to repeat it but say the opposite, the opposite was really obviously something you wouldn't want to do, it's not really a strategy. So when someone says, 'we want to be number one in Europe', you say, well, the opposite of that would be 'we want to be last in Europe'. It's obvious you want to do it, so it's not really strategy. It's too obvious. And so deep learning: the opposite of that would be shallow learning, wouldn't it? Or not learning.
Marie:Well, you could just say, 'I'm going to use coding to build a model or an algorithm'. These are the same things, but they don't sound as fun.
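To make that concrete: underneath the sales word, a 'deep' network is just code, layers of weighted sums and squashing functions fitted by gradient descent. A toy sketch in plain NumPy (layer sizes, seed and learning rate are arbitrary illustrative choices), learning XOR, the classic pattern a single linear model can't capture:

```python
import numpy as np

# XOR truth table: one linear layer can't learn it; one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                  # plain gradient descent
    h = sigmoid(X @ W1 + b1)             # forward pass
    p = sigmoid(h @ W2 + b2)
    gp = (p - y) * p * (1 - p)           # backward pass (chain rule)
    gh = gp @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ gp; b2 -= 0.5 * gp.sum(axis=0)
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

print(p.round(2).ravel())  # should approach [0, 1, 1, 0] for this seed
```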
Jon:They don't. And there's billions and billions of pounds in the software market, and it's all part of the world we live in. But okay, so on the data side of it, this is a classic as well, and I think you can go back to data warehousing days, data lakes, data lakehouses I think I've heard. Any road: massive amounts of data collection, and then a proportion of those never really get going. In the same way that maybe folks are just saying, 'let's just create the model, put in a silly assumption, reverse engineer it, it's a black box, you need a degree in XYZ to understand what we're doing in any case, and we'll just move on'. I'm not saying everyone does that, but that's a sort of dark thought. In the same way, there were these huge technology investments in data storage, and when they first came out, this was, I think, on-premise before cloud, they were eye-wateringly expensive, and they did the 'build it and people will come to it'. And then, of course, they just very, very quietly went into the background. And unfortunately, because of the investments, they're off the back of ten-year business cases. So you put one of those in, and that's not a great legacy. So on the data collection: if you don't get this right, I think it goes under the heading of herding cats. You can spend a lot of money on your, let's say in quotes, 'AI team' putting together the model and the algorithm, but if they are being served by a separate data team, or if the data area isn't really being covered, we're onto another sort of challenge, aren't we? That's not going to end well, is it?
Marie:No, and I think people maybe don't realize how cohesive a modeling team needs to be. The teams that I put together work together on the model. There is no separate team, because there can't be: you need to have data collection that works, and then you need to test that in the model and make sure the model's doing what it's supposed to be doing. When we've looked at the pipelines before, you just can't cut them off. You can't just say, okay, we'll have a statistician collect the data, then we'll get a software developer to code it, and then we'll just run the data through and it'll work. Because that understanding of the data, from a statistician's point of view, goes through into the model and the output, and then into interpreting the output and verifying it. The software developer has to understand some part of that statistics to be able to build the model, and then to be able to test it, and to know what tests to do to make that model work correctly. So if you're not using the correct experts... I mean, it's like when I speak to urban geographers, and they're saying, 'well, people are using AI for urban planning, but we're not involved in that; we're told that we're not technical enough'. So where is the subject matter expertise to build that model correctly? It's not there. So then you've got models that are unfit for purpose.
Jon:When you say unfit for purpose: I always think that if you see something that is, in quotes, unfit for purpose, you haven't decoded what their purpose is.
Marie:No.
Jon:Now, I completely agree: unfit for the stated purpose. But one of the business strategy approaches rolled out since the beginning of time is to be the aggregator. In other words, you are the point where everyone has to come for any given market. And the example they used, and depending on timing this should work, with the Elvis movie just out: there was a guy called the Colonel who had all of Elvis's publishing rights in a single contract. So basically he was the aggregator. You now have Spotify; I'm not drawing lines between Spotify and the Colonel, but the point is, Spotify is a platform you go to. So the reason I'm saying this, Marie, is: whilst it might not be fit for the purpose you would state, 'well, actually, this is what you should be doing for urban planning', the aggregation element might be 'well, we could make a lot of money out of this, selling it to real estate'. Do you see what I mean? Because they're coming at it from a private sector investment perspective, and there's nothing wrong with that: you invest, you've got to get a return, so what's our return? And quite often the return isn't just 'we'll get lots of time-and-materials revenue from people that we work with'. It's much more 'hang on, if we could make a product, then we've got ourselves that passive income'. That's another massive topic, by the way, and I don't want to go too far off. But do you see what I mean? So I think, Marie, maybe if we could understand...
Marie:But your return...
Jon:...their purpose.
Marie:...can actually also be reputational risk, fines, and a substantial impact on the end user as well. So it's not all positive. It can be a complete double-edged sword. And if it does come down to these negative issues, not only have you got a product that doesn't work, but you've seriously damaged your company. And I've seen that happen as well, because the modeling has just not been right.
Jon:That's a really good point, Marie. And that means that the agency or company that you used to create the model may have also insulated themselves with a really clever engagement letter that means there's no comeback. So, yeah, this is a huge 'buyer beware', isn't it?
Marie:The users suffer a lot, and the users of this stuff are generally not the richest in society, and they rely on it to make their lives work.
Jon:The other point to make is around the algorithmic element of this. When I'm running something in a data center or on the cloud, I'm hopefully monitoring a load of threads, performance, real technical stuff. It's the logical equivalent of watching the lights blink on in the data center. Now, by looking at those lights I can't actually tell you what it's doing, but I can tell you it's up, it's running, that sort of thing. When you switch on an algorithm, you have this sort of 'bad robot' risk, which is: all the lights are on and flickering, but actually what it's doing is wrong. And there's no monitoring software that I'm aware of that can tell you, 'Jon, this model is making the wrong decisions'. Do you see what I mean? It's operating at a level that's not really detectable by traditional means. And that's the bit that could give you a little troubled sleep: ten million transactions going on every day, and if there is an issue with the algorithm, this could get really bad very quickly before someone notices it, spots it, and we have to go back. So you were talking about testing and audit before. Is that in that space? Is that the whole point, that you don't let this thing run away?
Marie:Yeah. I've seen this, and I've seen it potentially cost people's lives as well, because of the decisions that were made off the back of these models. I've seen just generic rubbish data produced at the end of a model, and the people that were running it didn't really understand the model and said, 'this looks great, these numbers are fine'. Within their bounded understanding the numbers were fine, but they were completely wrong, so wrong. And it's these kinds of models; I work on anything from things that immediately impact lives, to finance, and all that kind of thing. And yes, this was potentially costing people's lives. This is a real problem with modeling: the people working on it need to understand what the model actually does and how it works. I'm not sure how somebody had managed to mess up the backend so much for it to be producing this data, but the data was just incomprehensibly bad. And it takes a trained person that understands the model to come in and say, 'that's not working correctly'. Because if you want to get an algorithm to check an algorithm, it needs to understand what the first algorithm's doing, and again, you can only code it within the bounds that it can be coded into. You can't just take humans out of the loop in these systems. I've just published on this, because when you do, you lose the entire context of the overall system and what it is you're trying to work within. Even if you've got a basic model, it's still working within real life, society and an environment, and you need to understand: how is it doing that? How is it making the decisions it's making, based on the data that you're putting in? Is the data correct? Is the model correct? The trained person has got the intelligence and experience to understand that. So I'm not an advocate of just taking people away from algorithms. But I am an advocate of this: if there isn't a transparent set of paperwork behind that model, and you pass it on to somebody else because the other person's left, how does the new person know what's actually going on within the model? And if you've got things that are really impactful in terms of lives and money, you need to be taking every single precaution you can to make sure that that model is working correctly at all times. And like you say, a lot of the time it's a question of just turning it on, watching it go, and going, 'well, that's some output, that looks great to me'. Have you tested it recently? Did you change the code? Is the data still relevant? That kind of upkeep is sometimes left by the wayside once it's implemented.
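A hedged sketch of what 'monitoring the model, not just the lights' might look like: compare live outputs against a reference distribution agreed at audit time, and alert a human when they drift. All thresholds and numbers below are invented for illustration.

```python
import statistics

# Reference behavior signed off at audit time (illustrative numbers only).
REFERENCE_MEAN = 0.32  # e.g. the model's historical approval rate
TOLERANCE = 0.10       # how far live behavior may drift before alerting

def check_outputs(recent_scores: list[float]) -> None:
    """Uptime checks say the lights are on; this asks if the answers moved."""
    live_mean = statistics.mean(recent_scores)
    drift = abs(live_mean - REFERENCE_MEAN)
    if drift > TOLERANCE:
        # A real system would page a qualified human here: the point of
        # keeping people in the loop is that someone investigates why.
        print(f"ALERT: live mean {live_mean:.2f} drifted {drift:.2f} from reference")
    else:
        print(f"ok: live mean {live_mean:.2f} within tolerance")

check_outputs([0.30, 0.35, 0.33, 0.29])  # healthy-looking day
check_outputs([0.71, 0.68, 0.74, 0.66])  # lights still on, answers wrong
```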
Malcom:This episode is sponsored by Fairmont Recruitment, hiring technology professionals across the UK and Europe.
Jon:Marie, I used to do quite a lot of outdoor pursuits, and there was a time when I was quite a long way down the route of doing the Walking Group Leader award. It's got a different name now. I thought I knew everything about navigation, and then you get out on the North York Moors, for this example, and, well, the first thing they do during the day is say, 'Jon, just show us where you are on the map'. And you can see a radio mast and a dam, and you just triangulate, and you go, 'I think we're here'. And he goes, 'that's great. Now imagine you can't see beyond your feet. Now where are we?' And I'd never done that kind of thing. It's called micro-navigation now. Don't worry, Marie, I'll get to the point, so just bear with me. In micro-navigation, you feel with your feet that you're on a slope, maybe you can hear a river, you might get to a wall of some sort, and in Yorkshire we've got lots of drystone walls. And then you do something called handrailing: you follow it until you get to a junction. Do you see what I mean? It's a bit more visceral. And that's in the fog, and it is pretty risky, because if you don't get it right and you're walking in the fog, you can walk into trouble, or walk off something very high and then into trouble. The reason I'm saying this is because I, very honestly, very innocently, didn't realize that lives could depend on, or do depend on, this kind of technology. My analogy is that it's a bit like meeting someone in the fog. They very confidently say, 'it's over here', and I just go, 'oh, great', because everyone wants that leadership. When you're in the fog, you don't know where you are, you're a bit worried, there's time pressure, et cetera. And you just follow them, and they just walk off a cliff. And then your thoughts are: maybe I should have just checked out who they were; maybe we could have compared our compasses. Anyone with navigation experience will know that if you put two compasses together, they affect each other, so I mean that very loosely. So it's incredible. It's the business equivalent, isn't it? You meet someone who seems very confident, all the rest of it, and you put all of your trust in them. And I didn't even know there was a model; I didn't even know there was a compass. I just followed them.
Marie:I think a lot of that comes down to mindset as well, where it's just a case of: I want to get the accolade, I want to be the person in charge of this model, I want to get the funding, I want to be the big ego around the office, or whatever. And unfortunately, with that kind of mindset, we can talk about the technical development for ages, but a lot of it does come down to human mindset, and to communication, leadership, even basic questioning of your own understanding. A lot of people just don't want to do it, because they don't want to appear incompetent, or less than what they think they are professionally. And the point is, I think you always need to question yourself, and you need to understand your limitations and work on those. That's about being self-aware, and it's about having the mindset to not only build things that are transparent, but to work with other people correctly. We've seen it recently in government, where modelers have just been completely discarded, and decisions have been made without the evidence basis, and the modelers have just not been respected. And I think that's really terrible, because where you are trying to make decisions that impact a huge amount of people, the evidence should be taken on board. And if the modeling's not correct, and the modelers are telling you it's not correct, and that there are challenges and safety issues, that should be taken on board. And it is this whole hierarchical 'let's have an evidence basis, but only when it suits us'. Or 'let's make a model, but let's just make what we want to make and then try and sell it'. I mean, the number of times I've asked people about their facial recognition technology at expos, and I say, 'how does this work?', and they're like, 'well, we don't know'. Why am I going to buy it, then? A lot of it really does come down to the way that humans interact with each other and how they perceive themselves and others. And I think that's a real shame.
Jon:Yes. And Marie, on the 'we don't know how it works, we just consume it': this is a theme that comes up on CTIO 101, which is the use of APIs. I'm hoping to do an episode on APIs soon. The issue with an API is that, quite deliberately, you send a message and you get something back, and you have no idea what happens in between. There's a sort of Wizard of Oz effect going on behind the curtain. Is it just someone typing a response? Obviously not. Well, I say obviously; now I'm getting really paranoid. But that is a big part of the technology we use. There's also an episode we did on open source, and the reality is that pretty much every bit of software we use has open source in it, and that has this problem too. So this is sort of a problem with antecedents, and I think everything we've discussed so far, Marie, is leading us to ethical considerations. There's an awful lot in what you've just said. There are going to be some specific ethical considerations in AI itself, but there are also ethical considerations about just the use of technology, and the ethical use of technology. Not just in the 'do no harm to society' sense, but also in how you convey your services, how transparent you are, and, if you are selling a service, being measured and realistic about what you can and can't do. So I thought maybe we could get into the ethical considerations, but I'd throw something in to get the conversation going. I've got this business strategy I put together, which is called 'Work Anywhere, Automate Everything, Create', and folks who listen to the channel have heard me say this a few times, so I'm only going to zone in on part of it for our discussion. I said 'automate everything' almost like a moonshot: when we look at something, our first thought should be, can we automate it? It was meant to be a literal 'automate absolutely everything'. Okay, that's the first point. And then the word 'create' after it was meant to be a nod to creative automation rather than destructive. We're automating tasks that, as humans, we don't enjoy or we're not very good at, and the idea is that that would create space, and that's the 'create', to do more creative things. It's very, very difficult to get three phrases to encompass an entire business technology strategy, but that's where I was coming from. So I just thought we could talk about the ethics of automation and AI. I suppose we've got two broad strands of ethics; you've done way more than this, Marie, so I'm just doing this in real time. Two broad strands would be: the ethics of providing AI services, being honest about the model and all the rest of it; and then, actually, 'oh my goodness, we've created AI that works', let's just assume we have, but is what it's doing ethical? Do you see? It's almost like: I'll make a gun for you, but what I'm not going to tell you is that the last gun I made for a customer exploded when they fired it, because we haven't quite worked out how to make it properly. It's not a great analogy.
And then there's the actual act of: well, we've also created a gun that you can kill people with. So there's a lot in there, isn't there, from an ethical perspective? Where do you start?
Marie:Wow. So there's a difference in the levels of the kinds of things you're talking about. Where you're talking about automating so that you can create, I think we all do that. If I use something like Otter AI, and I can get all the notes from my meetings immediately transcribed, and it doesn't need to be perfect, but it's going to be a reasonable record of what I've done, that gives me an hour in which I can do creative things, like social media or blogs or something like that. There's a difference there when you start to scale up to, I don't know, 'I'm going to do a defense model, and that model's going to run drones in a warfare situation'. Does it work well? Can you conceive of every eventuality, and can you program that in? There's a big jump between that and 'am I doing this for my day-to-day life?'. I mean, Otter AI may well take all my notes and use them, like Apple uses the dictation data, to improve itself. There's a lot of this kind of language: 'let's do this to keep you safe', or 'to improve yourself'. But is that what it's actually being used for? A lot of social media behavioral prompting is done at the minute by different countries and different bodies, and that's to keep us safe from certain things. But is it really? Or is it manipulating us into a way that they want us to be? So even the basic minute-taking data can potentially be used to create a profile on you, and that profile can then be used to try and sell you something, prompt you to do something, change your behavior. But I do like the 'automate to create', because, and I mean Apple does some basic automation stuff, if you can automate certain basic tasks, it does give you the leeway to go and do something more creative. You just need to understand the technology that you're using to automate whatever it is you're doing. How is it interacting with you? Is it taking your data? Is it prompting you to do other things? Is it selling your data? If you can be clear on that, you can understand it. And, I mean, do people care about whether a data profile gets made of them? Probably it's not at the top of their to-do list today. But in the future, when all these databases get linked together, and you've got criminal profiling, and you're being tracked everywhere, at what point do you then start to say, 'oh, actually, I wish I'd not done that'?
Jon:We've got that future police scenario where I step out of the front door and I get three points on my license for speeding before I've even got to my car, because of the time I've left myself to get to a meeting.
Marie:Oh, I never thought about that.
Jon:Future crime. Yeah. Just made that up. I hope that doesn't become a thing.
Marie:No, that makes sense, actually. That could be a thing, because Apple can tell the journey time, and your speed would be...
Jon:...equals distance over time. It's a classic. 'I'm sorry, Jon, the only way you'll get to your meeting is by speeding.' And then my fantastic defense lawyer would say, 'Of course, you haven't looked into Jon's background. He's always late.'
Marie:Exactly.
Jon:I was going to say, our Amazon Echo, we've got two, one's in the kitchen, and I'm pretty convinced Amazon knows all the food I cook, because I use it for timers. I mean, Marie, do you have one? Or, when you get home, do you put a tinfoil hat on and switch off the wifi, and...
Marie:I'm, you know...
Jon:...live in a Faraday cage and all the rest of it?
Marie:I'm a little bit like that. I don't have these devices, because I'm never convinced that they're not taking the data. Plus, I don't use them. I mean, when I've used somebody else's Amazon Echo, it says, 'you need to pay for this functionality and this functionality', and I'm like, do I really need it, if I want the radio and I can put it on my laptop? And I love to shut down my technology and leave it behind, because I just find it becomes overly addictive. And I'm not sure about kids today with social media.
Jon:It is, too.
Marie:Yeah, I just like to put it down and go outside and meet people, because I'm just too old-fashioned that way.
Jon:I found it really interesting when I first had that device. Other devices are available, folks; this isn't sponsored by those guys. I was very polite to it, you know. You were talking about making machines in our own image.
Marie:Hmm.
Jon:And then I went through a period of not saying please. But then I felt like I was dehumanizing myself. I mean, this is a man who's ultimately got to know when to take the food out of the oven, so I was under time pressure, but I was having these great thoughts. And, I say to the kids, we have my Spotify linked to it, but unfortunately it's the kitchen one. So the kids love to, because they're getting clever, they love to put something onto Spotify that will then mess up my algorithm. So I start getting all these weird recommendations, going, where's this come from? Since when have I been interested in 1940s musicals? So there's all sorts of opportunity there. And then there's obviously HAL, from 2001: A Space Odyssey. If anyone's not seen that, it's a fantastic movie from back in the day. And actually, Marie, I was thinking: people don't watch as many movies. If you think about the back catalog of movies now, there are thousands and thousands of movies, but people don't seem to want to invest even an hour into getting something back from one; they want more instant gratification. And I've noticed that the movies, or at least the ones the algorithm is showing me, are very formulaic: lots of special effects, and blue screen or green screen. Actually, Marie, we'll see what the algorithm can do with your background. So yes, it is really challenging. You've got the utility of that device versus... What about you, have you got any kind of automation, AI? We've got little Hoovers.
Marie:Oh no.
Jon:They look like discs.
Marie:Yeah.
Jon:and, um, so again, you know, with the humanization originally we called them bits and bobs, and I've recently renamed them upstairs and downstairs. I'll feel a bit bad about
Marie:But it makes more sense, because otherwise you've got an inappropriate emotional attachment to a piece of technology that's designed to make you feel that way about it, so that you start to say please to it.
Jon:Yes. Yeah. This feels like it's turning into a therapy session, Marie...
Marie:Honestly, it does. It's one of these things where it's like: how do you feel about the technology? They're designed to make you interact with them like that. But there's a huge amount of abuse towards technology as well, when it doesn't actually do what you think it's going to do; you've been sold a dream and it doesn't work that way. So how attached do you want to get to it?
Jon:No, you're right. But I think it crept in sort of innocently, or I don't know how it crept in, but I remember, in the nineties, people starting to talk about the 'out of the box' experience.
Marie:Hmm.
Jon:Now, they didn't mean out-of-the-box thinking; they meant when you open a product. And now what you get is the classic: you buy something, and it's almost like a Japanese puzzle to open, it all slides open lovely, and then the product inside is just awful. It's like the box must have cost 30% of the cost.
Marie:And the box is the best bit, because, with an imagination, what can't you do with a box?
Jon:Marie, I like it. I like it. Marie, there's another theme here. I had a CIO/CTO say that part of her job, as the lead technologist in the company, was to stop the company buying technology. That was Gillian Powers. And then we had Ben recently saying part of his job, as the lead, is to actually stop engineers creating things, doing certain types of work. So it's this metering. And it's very interesting, Marie, that you're in a role where your career is absolutely in the eye of the storm of technology, especially with AI, what it could do, what it's doing at the moment. But actually, for someone who knows more than most about how it all works, you are choosing to have a fairly metered exposure to it in your non-professional life. Is that right?
Marie:Yeah. And I think that's because there's been a lot of research come out recently about things like dehumanization, and there are a lot of effects being seen in terms of human belief systems. What you said about people not watching movies is all linked into that: attention spans are shorter than they were, and people want instant gratification. That's all being driven by companies that know they can make more money if they do certain things. I downloaded TikTok for five minutes, just to see what it was like. That is addictive. It's built in such a way that you cannot just stop watching it, because it just keeps on feeding you what you want to see. So when you're outside, and you're speaking to people, and you're in libraries, it's a completely different experience. If you're sat on Twitter or YouTube, you're put into this echo chamber that's designed exactly for you, which is what I would imagine the metaverse is going to be. You're just going to sit there and be given everything you've ever wanted. There's no effort required. You don't have to do anything. You just get given.
Jon:So
Marie:So to me that's not satisfying. I like to go out and work and find things out and speak to people, rather than to objects that are designed to manipulate me or exploit me in one way or another.
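To make the mechanism Marie describes concrete, here is a minimal, hypothetical sketch of a greedy engagement-maximizing feed. It is not any real platform's algorithm, and every name in it is invented; it just shows how "keeps feeding you what you want to see" narrows into an echo chamber.

```python
import random
from collections import Counter

CATEGORIES = ["politics", "sport", "cats", "cooking", "science"]

def pick_next_item(engagement: Counter, explore_rate: float = 0.05) -> str:
    """Mostly show the user's top category; only rarely explore."""
    if not engagement or random.random() < explore_rate:
        return random.choice(CATEGORIES)       # rare exploration
    return engagement.most_common(1)[0][0]     # greedy exploitation

def simulate_session(n_items: int = 200) -> Counter:
    engagement = Counter()
    for _ in range(n_items):
        item = pick_next_item(engagement)
        # Assume the user is likelier to engage with what they've
        # engaged with before (the preference feedback loop).
        if random.random() < 0.3 + 0.5 * (item in engagement):
            engagement[item] += 1
    return engagement

print(simulate_session())  # one category quickly dominates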
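```

The narrowing comes from the feedback loop rather than from any single recommendation: each engagement makes the same category more likely to be shown again.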
Jon:What do you think about the Metaverse?
Marie:So this is an interesting one, because it's philosophical as well as technological. I liken it to the series Upload on Amazon Prime: you've got this world that's built for you, and it's great, and you go into it, and then all of a sudden it's, well, you need to pay more for this desk, you need to pay more for this view. And then all of a sudden you're working in the metaverse but having to pay for all this stuff. Then you've not got enough storage, and then maybe you die, and maybe your profile gets uploaded and you can be kept somewhere in the metaverse, but you have to pay for it, and pay more for the higher storage or the finer granularity of the personality. It just seems to me to be a huge exploitation of people. All I see is people, just like in The Matrix, sat there in this metaverse all day long, working in the metaverse and being kept alive just to keep fuelling it.
Jon:Yeah, I agree. I'm just going to go for it. I think it's like falling asleep and having a wonderful dream, and then just as you're about to fly, which they say is one of the best things you can do in a dream, it stops and you have to put 50p in the meter. Hang on a minute, this is my dream. And we were talking about fit for purpose and not understanding the true purpose. Well, if you could sell goods and services and real estate and desks that didn't exist and were unlimited in supply, that's quite the business model. I also listened to some podcasts on Meta a few months ago, when everyone was talking about it, so I thought, go on, I'll have a listen and see what this is all about. And the elephant in the room was how people behave when they get into the metaverse. It's like those other anonymized encounters: women were getting a lot of unwanted attention, people were swearing, really base behavior, because folks were anonymous. The other thing about a dream is that when you're dreaming you're processing outside influences, but only the ones from before the moment you went to sleep; then you wake up, get more outside influences, and dream again. This metaverse, this echo chamber, this entropy of being surrounded by an environment containing only what you want: how on earth do you discover something? You don't discover it, do you? Because the algorithm will place it in front of you. So I'm afraid I really struggle with it.
Marie:I can't think of anything worse than being in a metaverse. Real life and social constructs are what I enjoy. There's this whole breakdown of social constructs, of human-to-human interaction, of valuing humans: I'll just sit behind the keyboard and type whatever nonsense I want, and be fed information that may or may not be wrong. You're just going to get fed it, and there's no real way of changing that. Whereas if you go out into the world and speak to people, you get a much more balanced view of what's going on and you can weigh it up. And this all comes back to critical thinking. Where is that taught these days? How do we learn it, and how do we then evaluate and analyze things for ourselves? Well, you don't have to, because it's all done for you: you get told what to think, you get told how to behave. So what's the point in being human? The whole point of being human is having the freedom and the choice to exist and to make mistakes.
Jon:I agree. And in the metaverse context there's your conversational ability to interact with other humans, if they even are humans. Let's get really paranoid here: AI non-player characters, I think they're called in games. When I was doing my MBA, I did a six-month module on social media, and a bit like this session, we spoke a lot about the psychology of social media. Here there's obviously been some really important structural stuff to talk about in terms of how the technology is put together, but actually it's much more about ethical use of the system. What the professor of psychology was saying was that social media conversations escalate really quickly because there's no moderating mechanism like the discussion we're having right now. If we'd done this through social media, someone else would have chipped in and it would have gone a different way. It's very, very difficult to communicate like that.
Marie:And we'd probably have argued, because you can't really express yourself in a tweet, so it's easy to take it the wrong way. Then it's, oh, I can't believe you're saying that. And that's how they get you to stay on the platform: by arguing, by promoting aggression.
Jon:Yes, absolutely. So immersing yourself in that, you can't think critically in that environment, because you're just trying to reel off the next response. That's why, when I see a comment, maybe on LinkedIn, and I think, oh, I've got something quite good to say about that, I very rarely post it.
Marie:I don't
Jon:Because I know it'll be read differently. It could be read as if I had an angry face; there's just no way of conveying that. And there's this study that gets misquoted: everyone says 80% of communication is nonverbal, but the study actually said that 80% of emotional communication is nonverbal. It would be very difficult to explain artificial intelligence using just emotions; we'd have had to do this session as a mime, and the listeners definitely wouldn't have got anything out of it. But the important point is that as we convey this information, we're putting emotions around it in our facial expressions and so on, and that simply will not happen in the metaverse. To be fair, I've only looked at the metaverse through the TV and so on, I haven't jumped down the rabbit hole, so I don't fully speak from experience. But if you look at the characters, there seems to be no way of mimicking facial expressions. Or sometimes they have those really strange holographic avatars of your actual face, but it's not your face.
Marie:Yeah. Again, why do we not want to look at humans face to face? Why are we trying to get into a metaverse where it's all strange and different? It's a really interesting thing for five minutes, but it's not anywhere I want to live. I want to live in a real world with real people.
Jon:Yes. Why don't we get into, well, you mentioned Otta, but I think we should talk about some really positive uses of AI. I'm going to throw one in, and again, I'm not sponsored, but there is some incredible video editing technology, a product I use called Descript, where the transcription of this conversation is turned into text, and done very accurately. That's part of it. But here's the other part, Marie: if you were to say something in the interview and your microphone just went dead, and even with the backup recording we couldn't tell what you'd said, we've got a choice. We could just delete it and not mention it, but it might have been part of a very good point you were making. We could get back together and re-record it, and I'd have to splice it in, which is almost impossible to do. The other way is that I give you a phrase to say, something like "My name's Marie and I give permission for Jon to overdub my voice"; that recording has to be present, and then your voice can be overdubbed. Notice I'm refraining from using the phrase "deep faked".
Marie:Are you deep faking me?
Jon:No, not literally. But this is important, because this is about the ethics: Descript won't allow you to do it without the permission of the person in the first place. But if you were to give me permission, then what I do is type the phrase that we lost and press a button. It takes 24 hours for your voice model to be created, and then it takes seconds to generate. Then I send you a link and say, Marie, have a listen, are you okay with this? You say, yeah, that's great. And afterwards I can delete your voice, so I don't have it on file forever. That's amazing, but there's an ethical trust there, isn't there? And there's also incredible utility to it. So maybe what I'll do, in fact I think I will, is insert into this episode a short speech or something. It will be my voice, but I won't have said it; I'll generate it from the tool. So, Marie, what should I say? Do you have a favorite poem? Obviously everyone will switch off if it's something absolutely massive. Is there anything I should say? I could read one of your papers, like the abstract of one.
Marie:I'm not sure that's that interesting.
Jon:I'm joking, Marie, I was joking. Have you got any idea what I could read?
Marie:Oh goodness. Happy birthday's not copyrighted, is it?
Jon:It'll be like a spoken track. There we go. So everyone, listen to that now.
Jon Grainger Studio:Happy birthday to you, happy birthday to you, happy birthday dear Marie, happy birthday to you.
Jon:There we go. Obviously Marie and I haven't heard it, because we decided to put that in live, but that's the sort of thing you can do, Marie. I don't know what that recording sounded like, and clearly Happy Birthday is quite a long piece to put together, so I'm sure there would have been points where the intonation wasn't quite right. But when you're surgically inserting a single sentence, you listen to it and you can't tell, even though it's your own voice.
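As a rough illustration of the consent gate in the workflow Jon described, here is a hypothetical sketch of a consent-gated overdub pipeline. This is not Descript's actual API; every class and function name below is invented, and it only shows where the consent, review, and deletion steps sit.

```python
from dataclasses import dataclass

CONSENT_PHRASE = "i give permission for my voice to be overdubbed"

@dataclass
class VoiceProfile:
    speaker: str
    consent_recorded: bool = False
    model_trained: bool = False

def record_consent(profile: VoiceProfile, spoken_phrase: str) -> None:
    # The speaker must say the consent phrase in their own voice
    # before any training is allowed.
    if CONSENT_PHRASE not in spoken_phrase.lower():
        raise PermissionError("Consent phrase not detected; cannot train.")
    profile.consent_recorded = True

def train_voice_model(profile: VoiceProfile) -> None:
    if not profile.consent_recorded:
        raise PermissionError("No recorded consent for this speaker.")
    profile.model_trained = True  # in reality, this is the ~24-hour step

def overdub(profile: VoiceProfile, text: str) -> str:
    # Generating a single sentence is fast once the model exists.
    if not profile.model_trained:
        raise RuntimeError("Voice model not available.")
    return f"[synthesized in {profile.speaker}'s voice] {text}"

def delete_voice_model(profile: VoiceProfile) -> None:
    # The speaker can withdraw at any point: the stored voice goes away.
    profile.model_trained = False
    profile.consent_recorded = False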
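```

The design point is that consent, speaker review, and deletion are first-class steps in the pipeline rather than afterthoughts bolted on at the end.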
Marie:So, thinking about it: we can do it, and it seems really helpful. My take is always, what can that be used for? What's the point of it, and where is it going to go? And then I think, well, why would I even do an interview? I could just send you a transcript and you could generate it, and then I've not got to waste my time sitting on a video call doing all this. Yeah.
Jon:Well, I think when people hear it done as Happy Birthday, it'll be really obvious, because the emotion won't be there; it won't convey that message. And if it does, then you're right, I'm not going to do this anymore either; I'll literally just make up guests. We're joking about it, aren't we? But we have a tendency to make a joke, and then what actually happens is worse than you think.
Marie:Yeah. You could find a load of people that died ages ago, build a profile, put them in the metaverse, and then get them to do an interview.
Jon:Yes. And also, when you're in the metaverse, you don't know that you're listening to a real human voice, do you?
Marie:No. It's actually quite scary, the number of things we don't know are real or not anymore. And there have been numerous ways proposed around this, like the uncanny valley: is it human, is it not human? When you've got a human-looking robot, you tend to give it more trust, because you think it behaves like you and you can ascribe qualities of mind to it.
Jon:It's just mimicking.
Marie:Yeah. And if you've got a robot vacuum, you might do the same thing. It's like those experiments where you've got two toy trucks going in different directions, and the interpretation is that they don't like each other; or they're coming towards each other, and the interpretation is that they like each other. We keep giving human characteristics to things that are just not human, so they can't have those characteristics. So the uncanny valley, as a way to protect you a little from exploitation, is maybe something that's half human, half not, so it looks a bit odd and you can think, well, actually, that's not really human. But then you're still going down the same track: if it's interacting with you in a certain way and you ascribe trust to it, you're going to get to the same end point.
Jon:Marie, is this the point where they made some avatars that were so realistic it made people feel uncomfortable, so they took a step back and made them slightly more awkward, so you knew you were talking to a robot? Is that what you mean? You're not literally trying to trick someone into thinking it's a human; you actually want to give it some characteristics that make it seem artificial.
Marie:Yeah. The thing is that we do this to all kinds of things: we'll do it to an object, you can do it to a mouse. So it's about trying to get people to understand that these objects around us just aren't human and don't have human characteristics, but it's a tendency that's built into us as humans to do it anyway. So it's very difficult. And is that really where the problem is? I think the problem comes from the way the technology is implemented in the first place and the risks associated with it, not necessarily from anthropomorphism itself. But anthropomorphism apparently increases your sales by 7%, so why would you not do it?
Jon:Yeah. And folks will typically only consider buying after somewhere between eight and twelve interactions, so why not get the first five done with a bot, with some AI interactions? I'm just thinking it through. It's very interesting where we're all headed.
Marie:It's all psychology. It's really interesting how much psychology is in this. When I first started down this path, I thought it was all going to be about modeling, but I've ended up in all kinds of different disciplines trying to understand different concepts, because it's all about humans, the way they interact with the world, and the people that invent this technology. Facebook especially have done a fantastic job, in the sense that they've got a huge number of people hooked on things that are not necessarily good for them, and they've made a lot of money out of it. So I've got to say they've done a fantastic job; whether it's ethical is another question.
Jon:That's really interesting. Take the marketing profession: a big part of marketing is understanding psychology, differentiating between a want and a need, and then everything you do to lay it on thick for someone to buy. Now start a whole digital business with a blank sheet of paper, but with psychology in the middle. You create something that will build a dependency, but if you don't have an ethical center to it, you're creating digital hamsters running around on their wheels. Fundamentally, I think, did you say dehumanizing? We've taken quite a while to evolve, some longer than others; I'd say I'm probably a bit of a laggard in that regard. But the technology is really accelerating past us. Part of our brain is still suited to the emergency response, the fight-or-flight stuff, and the psychologists know that. So the algorithm you referred to, the one that means you can't put the phone down: maybe that's better read as an overt attempt from a psychology point of view, rather than some coincidence where they worked out that XYZ was useful?
Marie:It all comes from marketing. Anthropomorphism has been around for decades; it was used before in things like the meerkat marketing. But with social media you've got the opportunity to take it to a level where you can bring in socially vulnerable people, anybody really. You can bring them in and create a friendship: the brand creates a friendship with you. It provides things that you maybe can't get in a social setting, especially over the last two years when people have not been interacting so much; it's been so easy to get them onto platforms. And once you've got them in there, you can build a community, and I say community, but I'm not sure they're always positive communities, because that's where the exploitation and manipulation can start, to sell things and to get people to behave in a certain way. That's partially where dehumanization is coming from.
Jon:Marie, if you're on that platform and you're spending a big part of your evening, maybe extending into your day, on it with your VR goggles on, it's actually starting to become your world. And the connections you're making in it might seem like friendships, and maybe they are in a weird way, because you might actually be connecting with another person in the real world, but you're never going to physically meet them; it might even be advantageous to the platform that you never do. The problem is, if you did that for, let's say, three years, and then that business went bust, or there was a change in legislation, or whatever, that would be like dying. You wouldn't have any connections, and you're not in control of that. You don't control the real world either, but the laws of physics are at least slightly on your side, within your lifetime anyway. Whereas here you're investing your whole world in a platform, and the old phrase "it's just a bit of fun" doesn't really stack up, because it's got to provide a return on investment. So, my goodness, Marie, we've covered quite a few areas. Is there any area you think we've missed, just by virtue of the areas we did discuss?
Marie:There's probably dehumanization; it's a really new area, and some of the research I've recently done on it covers what we were just talking about: creating these worlds within social media and the metaverse. Some of the underpinning ways to do that are to create in-groups and out-groups. You create your group, then you create some sort of conflict with another group, a "them", and that us-and-them type of interaction, which runs from anthropomorphism all the way through into fundamental belief systems, then starts to create division. So you see a huge amount more division. Even though it seems like you're meeting these people and it's amazing and you're fulfilling all these needs, if that then goes away, like you say, you've got a huge gap, and the motivation, or desperation, to join another group, to be part of something. Wanting to be part of something is a fundamental thing we do as humans, and it can drive you into an abusive kind of mindset, caught up in the us-and-them. This is why we see so much more division on social media: not only because writing is a particularly difficult medium to communicate through, but because of the way these groups are constructed, to keep certain people in a certain community, doing certain things in certain ways. It happens in real life too, but a lot of it is amplified on these platforms. So, like we were saying earlier, there are of course positive uses for technology and algorithms, otherwise we wouldn't have been building models for decades. But the thinking that goes behind it, and the understanding of how humans interact and how it affects them, is more my concern at present, because that's the problem area where we're seeing a huge number of issues. What I want is for society to benefit. I want the harms to stop, and for people, especially in the technical sphere, not only to understand more of what they're doing, but to be able to challenge what's happening out in the world and to be socially responsible, so that society is affected in a good way rather than a negative way. That's a bit of a difficult statement in a capitalist society, but we can try; we can work towards it.
Jon:Yeah. And there are lots of different flavors within capitalism, within any kind of political system, but being socially responsible should be a golden thread that runs through all of them. Whether it is, is another question, as is whether technology really has a completely neutral stance in itself, able to be used for good or bad, or just left where it is. In summary, Marie, it's been a really interesting session for me because of where we've got to. If I can attempt a summary of a huge topic: getting the bare-bones components to do, in quotes, "some AI" is really easy. But doing it well, and doing it in a way that won't lead to unintended consequences, won't lead to real harm, actually requires pretty much a rethink of how it's being approached. And what we may be seeing is that the pace of this particular technology is really outstripping how it can, how it should, be properly consumed, and there is very little regulation. And to the point you were making earlier, Marie, some professional qualifications and accreditations wouldn't go amiss, particularly around how models are built. I hear that a lot from folks who've got a quantitative research background: they say the models just don't make sense and people are getting away with murder. So I want to say a massive thanks, Marie. A really fascinating session. Have you enjoyed it?
Marie:Oh, it's been a pleasure. Always a pleasure to come and talk about my research and my business. They're massive topics, so if you want to read any more, I've got some papers out there about them. But yeah, it's been a pleasure.
Jon:Well,
Malcom:My name is Malcom and the words that I say are generated using AI methods. I sincerely hope I am being used responsibly. CTIO 1 O 1. Business Technology. Simplified and Shared. Subscribe now. Sponsored by Fairmont Recruitment, Hiring Technology Professionals Across the UK and Europe.