Quality Bits
Ethics in Tech and Games with Catherine Flick
How can we know that the products we're building are going to make a positive impact on the world? Could it be that it's only good for people like us, but does harm to others? What can we as individuals do to improve the products we're building?
In this season finale episode of Quality Bits, Lina talks to Catherine Flick, a Professor of Ethics and Games Technology at Staffordshire University. Tune in to learn some fascinating research and stories about ethics, tech, and... video games.
Find Catherine on:
- Her website: https://liedra.net/
- LinkedIn: https://www.linkedin.com/in/catherine-flick/
Mentions and resources:
- Casey Fiesler's Black Mirror Writers' Room exercise: https://cfiesler.medium.com/the-black-mirror-writers-room-teaching-technology-ethics-through-speculation-f1a9e2deccf4
- ACM's Code of Ethics: https://www.acm.org/code-of-ethics
Follow Quality Bits host Lina Zubyte on:
- Mastodon: https://mastodon.social/@linazubyte
- X: https://twitter.com/buggylina
- LinkedIn: https://www.linkedin.com/in/linazubyte/
- Website: https://qualitybits.tech/
Follow Quality Bits on your favorite listening platform and Twitter: https://twitter.com/qualitybitstech to stay updated with future content.
If you like this podcast and would like to support its making, feel free to buy me a coffee: https://www.buymeacoffee.com/linazubyte
Thank you for listening! ✨
00:00:05 Lina Zubyte
Hi everyone. Welcome to Quality Bits - a podcast about building high quality products and teams. I'm your host, Lina Zubyte. This is the last episode of season 2. What a journey this has been! It's been two seasons and if I'm very honest with you, I'm not sure if I'm coming back for the third one. I'm still thinking about that. So let me know what you think of that. Would you like there to be a season 3? I would really appreciate your opinion. In this episode I'm talking to Catherine Flick. Catherine is a professor of ethics and games technology. We're talking about ethics in tech and how we can actually make better products with better social and ethical impacts on our lives. Enjoy this conversation.
00:01:08
Hi Catherine, welcome to Quality Bits!
00:01:11 Catherine Flick
Thank you for having me.
00:01:13 Lina Zubyte
Could you briefly introduce yourself?
00:01:16 Catherine Flick
Well, I'm Professor Catherine Flick. I'm a professor of ethics and games technology at Staffordshire University. Basically, I do technology ethics, and my specialisms are in emerging technologies and, as my title might imply, video games. So I look at the social and ethical impact of a range of technologies, particularly video games. I've worked in industry in the past, a very long time ago now, and I have a PhD in computer ethics, while my bachelor's degree is in computer science. I sort of see myself as a bit of a conduit between the tech world and society. I like to think of myself as a translator and connector, the sort of person that tries to work out how we get these two often very different worlds working together.
00:02:10 Lina Zubyte
How did you get into this field?
00:02:13 Catherine Flick
Oh well, while I was doing my computer science degree, I was working in industry. I was a very lowly tech support person at a big recording company that no longer exists. They had a whole bunch of internal systems that I was required to look after, and I discovered along the way that they held a lot of the stars' personal information: you could basically just download the Excel spreadsheet. Phone numbers, addresses, personal email addresses. This was back in the early 2000s, so security was not really a thing back then. At the time, I was thinking to myself: well, what if I were a different sort of person? I could probably make a lot of money if I leaked this information out to people. I'm sure there'd be some value in this information...
00:03:01
And that made me think... Hmm, actually, what is stopping me from being the person that does terrible things? I realised it was really the fact that I had an ethical mindset, and that got me really interested in philosophy. I was doing a bit of history and philosophy of science at the time, learning about how science is made: what actually is science and how do we construct it? And I started to think, well, maybe I can apply some of these techniques to the technology side of things. So I started looking at the ethics of these technologies, and that's where I ended up. I looked at end user licence agreements for my PhD thesis, and at trusted computing for my honours thesis; in Australia, an honours thesis is like doing a master's degree, an extra year after you finish your undergraduate degree.
00:03:49
Then I got a postdoc where I looked at online child protection using natural language processing, the precursor to large language models. This was back around 2010 or so, quite a while ago... I did another postdoc in Belgium on how you govern emerging technologies from an ethical perspective. That was for the European Commission: they're very keen to make sure that the research and development they fund is beneficial to society, so it doesn't end up like another Manhattan Project or something like that. Then I got a lectureship at De Montfort University, and I went up through the ranks there on a bunch of other European projects about how we get companies to innovate responsibly; by then the European Commission had come up with its whole responsible innovation idea, and the UK Research Councils were also interested in responsible research and innovation. So that's the path I took, and along the way I wrote very critically about Facebook a few times...
00:04:56
Then I was invited to help the Association for Computing Machinery update its Code of Ethics, as someone who was actively publishing in this area, one of the younger members of this particular field, and quite active in the conferences in my area. That was a really, really interesting experience. We basically reconstructed the ACM's Code of Ethics, which involved a lot of input from our members, from experts, from people who weren't members but were interested, and from policymakers. That was in 2018. Since then I've been working on other projects, and over time I moved quite firmly into the video game sector. I started doing quite a bit of work on video games because it's a fairly understudied area of ethics, particularly technology ethics. Ten years ago it was still considered a bit of hobby research; that's what my boss used to call it. He'd say: go and enjoy your hobby research. So that's how I got into that, because I've always been a gamer, since the 80s, and I've given several talks at big games conventions on ethics and games.
00:06:14
And there's a lot of interest in that because games have quite a significant impact. They can help influence how people think. There are obviously huge ethical issues in terms of things like monetization and a lot of the psychological sides of games. But there's also a lot of misinformation about the value of games, and parents are scared of them for their kids. So I try to sift through a lot of these different ideas, dispel some of the myths, and look at the positive sides of video games as well as identifying the issues that come with them. So that's where I am at the moment, and now I'm one of the world's first professors of ethics and games, which is kind of cool.
00:06:54 Lina Zubyte
I really like how you labelled yourself as a connector. I feel like in the tech world, if you're a QA, very frequently you're also a connector. You may also notice things like you did while working in tech support, such as the security problems we're facing, and there are a lot of those... And I love how your curiosity led you into this area to help make the world a better place, basically. I even read on your website that you're interested in the social and ethical impact of technologies, and how we can make sure the development of a new technology creates a positive impact. I really love it. It is a loaded statement though, so just to dissect it a little bit... First of all, why does it matter? And also, what are the ethical values that we're talking about? What is ethics overall?
00:07:51 Catherine Flick
Oh boy, what is ethics? That's probably a very big question to answer, but let me make it a little less broad and a little more refined for what we want to talk about. I think the key thing we have to think about in terms of technology is that when we build stuff, we think it's going to be good by default, right? We think it's going to have a positive impact on the world, because otherwise we wouldn't build it; we're not horrible people. We don't go out to destroy things and make terrible things. Well, most people anyway. Most people think they're doing something that's good for the world, right? And this is a reflection of the ethical values that we hold within ourselves. And as we go through and build our thing, whatever we're building or developing...
00:08:37
We embed the values that we hold into that thing, right? This is quite well discussed by people like Langdon Winner, who talks about the fact that artefacts have politics. You never have anything that's purely neutral. A lot of people will say: oh, technology is neutral, it's about how you use it. This is not actually the case. Technology has ethical values and normative expectations of behaviour, of how people might use it, built into the technology itself. And those are shaped by the people that make that technology, who bring with them all of their previous experiences, their environment, their upbringing, their education, etcetera, into that mix to decide what the best way to build that technology is, right?
00:09:23
So what we unfortunately end up with is a lot of very good intentions, but if that technology is used in a way that wasn't foreseen, and this happens very frequently, bad things can happen. There have been many, many examples of this throughout the years. We can see it with the current AI hype cycle, right? The idea is that we can build an amazing assistant that will help us in our everyday tasks, be at our beck and call, make things more efficient, and make data more easily processable. These are all the good things that could potentially come out of these AI tools. But what often wasn't thought about is: OK, what's the potential for harm as well?
00:10:04
And so this is why we see things like the problems with deepfakes and all the issues that have come along. The whole thing has gone off the rails a little bit, right? There are still going to be some positive applications of that technology, but there are also some significant negative harms that have come from it. And this is where I've been banging my head against the wall for many, many years. So the idea is that...
00:10:35
When you're developing something, you shouldn't just be thinking of the default positive outcome. There should be some thought about what could potentially happen that could be negative. This seems quite simple, right? You might think: oh yeah, I'll think about how someone who is horrible might use this. And some people will do that level of thinking about their technologies. But often they just don't want to, because the idea that they might actually be developing something harmful is quite hard to process, right?
00:11:07
But what we say as ethicists is that you need to be doing this thinking. You need to be thinking ahead. You need to anticipate what the potential impacts of your technology are going to be, because they might not have a negative impact on you or people like you, but they might have a negative impact on people who are much more vulnerable than you, and who are less able to advocate for themselves or avoid the technology. So it's not just about whether it could physically harm someone, like a medical device accidentally stabbing them. It could be that it allows for much more subtle harms, and those are not always obvious. Now, I'm not saying we have a crystal ball and can see into the future. That's the other argument I frequently hear against this: oh well, we can't always see what the future is going to be. Sure, but you can at least have a go. You can try to sort out the potential harms that you can identify, and then, over the period that you develop and deploy this, you keep an eye on things and monitor them.
00:12:18
And I think the key thing, and this is partly because the incentives for creating technology just don't align with doing this for the most part, is that for a truly ethical technology there needs to be the ability for the company or person creating it to say: OK, no, this is not going well, we need to pull it. And that very rarely happens. We have seen it a couple of times. Google Glass was an example, where basically people had so many privacy concerns, and the price was a bit high, and it got peer pressured out of existence. And then there's the Microsoft example: they had a very early chatbot on Twitter that learned from the conversations it had there, and it very quickly became a bot that spewed hate speech, so they pulled it despite having spent a lot of money on it. But that's very rare. There are really only a few cases like that.
00:13:17
And there are so many technologies out there that have much more of a bad impact than a good one, right? It's hard to do that. I've gotten a little bit off track there, but I'd like to think that's what ethics is about: looking to the future, at who might be harmed, and then working out if that's worth the cost of creating that technology. And then there are other ethical issues along the way. You need to think about the less significantly harmful things, the everyday little harms that can happen: things like privacy, things like autonomy, allowing people to make their own decisions about what they use. But at the moment it's just a huge mess, because the incentives for companies don't align with doing this properly.
00:14:11
And so, if they do do it, if they have responsible innovation centres or whatever, and most of the big companies do, those usually end up as a kind of whitewashing activity. They don't tend to have much teeth. Even in the smaller companies, the ones we do see trying to do this properly, it tends to require the CEO to essentially buy in and say: this is how I want to build this company, and this is how I want to build the technology that we create. And I think this comes back to the quality side of things, right? You go in with a view to saying: look, the technology that I want to build is going to be ethical technology, and I want to take all these precautions.
00:14:49
One of those precautions is going to have to be that it's high quality, because if you put out a poorly made product, it's not going to reach the ideals you have for it as an ethical technology, right? So yeah, I think this is where ethics is at: a lot of people, me included, wagging their fingers at companies that should know better but don't want to, because it would destroy their hype cycle to point out all the negative sides of their technology. And the modern incentives for funding and venture capital just don't reward this slightly slower, more precautionary approach. That's a problem, I think, for ethics and technology.
00:15:33 Lina Zubyte
This reminds me of the Black Mirror test. People talk about it in product management: you should imagine the worst scenario that could take place. Black Mirror is that dystopian series where everything goes as we fear. The exercise just encourages us to think of these really bad scenarios, but, as you say, I don't know if a lot of companies do it. I think it's a great idea, and we all say: oh yeah, you should do that. But how frequently do we do it? That's another question.
00:16:08 Catherine Flick
No, you're right. I do a thing with my games students, my PhD students, using a well-known technique called design for evil. So you ask: OK, how can I take my research and apply it in the most evil possible way? And Casey Fiesler has done a similar thing for tech ethics that she calls the Black Mirror writers' room: once again, looking at how this could be used in the most horrible way possible. Black Mirror has been a really good way for me to justify my existence, because I can say to people: have you seen Black Mirror? And they'll be like: yeah. And I'll be like: well, that's what technology ethics is about.
00:16:54
I think it's a fabulous series. What they're doing is thinking about what it is we're creating and how we could take it to the most ridiculously horrible ends. And a number of things from Black Mirror episodes have actually come to fruition since; some of these things are now happening, when before they were just the most horrible timelines, right? So I think there's a lot of benefit for companies in taking the time to do these sorts of exercises. I would like to think that most companies don't want to develop horrible technology. Like I said, most companies think they're doing good by default, and, as I said, the path to hell is paved with good intentions, right?
00:17:37
I think that's what happens with a lot of these companies: they're under pressure from investors, or costs, or getting the MVP, the minimum viable product, out the door, or deadlines, next quarter's financial statements, whatever it is that's motivating them to ship. That tends to be the focus, and it's less about thinking: OK, how do we minimise the harm of our technologies?
00:18:04 Lina Zubyte
Yeah, this is the incentives part, right? What motivates us to do certain things. And talking about companies, what would be good and bad incentives? Do you have any tricks for how we could make this an incentive, or get companies to think about it?
00:18:21 Catherine Flick
The responsible research and innovation area has been puzzling over this for over a decade now, and we've had European funding to back us. The Commission is really interested in understanding how to get companies to actually make good technology, right? How do they make it beneficial? How do they make it non-harmful? Unfortunately, there are a couple of things we've tried that have mostly failed for various reasons. You know how, with physical infrastructure, the people who build it tend to be certified engineers? If you have to build a bridge, you've got to have a certified engineer sign off that your bridge design is fit for purpose and isn't going to fall apart in the middle. We don't have anything like that for digital infrastructure. And you'd think we would, given that the digital infrastructure we're using, all the systems we use to communicate, social media and things like that, has become so embedded in our lives that we can't avoid it now, especially post-COVID.
00:19:28
One way to improve that would be to make sure that people are certified. But there's a lot of pushback, right? Because what is it to be a software engineer? What is it to be a computer scientist? There's a lot of line drawing that would need to be done, and a lot of the traditional tech field just doesn't fit into boxes. So that's hard. That's one of the hard ones that would make the most sense, because once you're certified, you've got a whole bunch of incentives to do a good, high-quality job: there are codes of ethics, there are standards, and you can lose your licence to practise software engineering or whatever it is. So it's a good carrot-and-stick approach. But at the moment that fails. The door is still open to that one, but the problem is how you define these job roles.
00:20:25
Another avenue that's open is procurement. One of the things we've tested, to see if there's any appetite for it, is governments saying they will only procure technology from companies that have gone through some sort of process check, maybe some ISO-type standard that involves responsible innovation practices. There has been some appetite for that, but once again it's very hard at the moment. These standards are voluntary, so they have to be bought into by the company, and shifting government procurement strategies is a long and difficult problem. It's not something that can happen overnight, because there's a lot of lead-up required. But we've seen in Britain that if the government wants companies to do something in order to operate with the government, it generally gets done. So they just need to start saying: look, this is what's going to happen, and you've got a year or two years or five years to sort yourselves out. So there's still a possibility of something like that. It just needs the right appetite in the right place.
00:21:33
Another thing we've tried that failed is almost like certification as well: the idea of requiring all computing-type people to be members of a professional society, and with that comes a whole bunch of rights and responsibilities. But once again, it has the same issues as certification. How do you actually define it? Is every accountant that uses Excel actually a computer person now? If you can program macros in Word, does that make you a programmer? Everybody uses technology now, so it's difficult to carve out those roles.
00:22:08
We've also tried things like fair-trade-style labelling, 'responsibly made' labelling aimed at ethical consumers, and that's pretty much a non-starter. Although the B Corps have actually done a pretty good job of moving in that direction, and in fact you can become a B Corp if you're a tech company, but they cover a whole swathe of things including sustainability; it's a very involved process to become a B Corp. I think that's an example of where it might go. But once again, it's voluntary, and it puts the responsibility on consumers to make a decision. The problem is, if there's only one company operating in a field, you don't have any choice, right? Or if you're required to use something for work, you don't have any choice. It's different if you're required to use a program at work versus buying an ice cream: you can easily choose the B Corp for the ice cream, but you can't really choose it for your job. So these are some of the things we've tried. I think there needs to be more...
00:23:15
More recently I've talked about trying to do a lot more in terms of educating the public about how technology actually works. There's been this assumption that there are these 'digital natives', that people just know how technology works. Well, I know that's not the case, because I'm on my village WhatsApp group and someone asked this morning: oh, is anyone else having trouble with the internet? And then 15 different, completely wrong answers for how to fix the internet went by. People just don't know how this infrastructure works. I guess it's a bit like a car: you don't need to understand it to be able to drive it. But I think a lot of people just get taken along, and they assume that because everybody else is using something, that makes it good.
00:24:03
So I think education will help a lot there. We can see that already in the video games area, because there's a lot of concern about how children use technology, and children largely use video games and social media. So I think some moves are being made there, but it really is still a hard space to navigate, because a lot of the time you're preaching to the converted, right? People that want to engage with this probably will anyway, whereas people that don't want to engage... well, good luck, right?
00:24:41 Lina Zubyte
And what can society itself do against these big corporations? Because we have these very big companies that are also influencing this whole area. What can I do as an individual? Is there something I can do? Because sometimes you may feel very helpless.
00:24:58 Catherine Flick
Yeah. Well, back in the early 1800s there was this movement against the knitting machines that came in, in fact in the East Midlands, near where I'm actually living. People pushed back against this new technology because it was being brought in as a money-saving exercise, but it produced far inferior work at a much faster pace. It was poor quality, but much faster, and it didn't require you to go through a seven-year apprenticeship to learn how to use the machine. So the framework knitters, the traditional knitters, pushed back by organising. They actually went around physically smashing the machines. This is actually the origin story of the Luddites, right? The Luddites weren't afraid of the new technology; they were taking a stand against a technology that was coming in and destroying their livelihood. And we're starting to see that now with other technologies. So it's a bit of a mirror today.
00:25:54
We've seen unions in America push back against the AI hype in the film industry. We're starting to see unions pop up in big companies: Amazon workers' unions, Google unions, and I know Blizzard has a union as well, so games companies too. I think we're seeing a resurgence of unionisation, particularly in America, that we haven't seen for a very long time. It's a bit different in America than in Europe, because Europe has always been fairly strong union-wise, but America is a big space where we need to see more of it. They drive a lot of the technology, but I think the reason they drive a lot of the technology is because they've been able to get away with it for so long, right? The European market tends to be a little more cautious, based on the precautionary principle; that's driven from the top, from the government side of things, and it's a much more heavily regulated environment.
00:26:50
But then Europe's not afraid to say: look, we don't want this in our space. And I think that's actually a very healthy way to think, whereas America's like: oh, we can't say no, because we might disrupt the incentives for innovation. I think that's too far the other way, right? So now workers are starting to realise that they have power over what their companies create, and they're starting to organise to say: well, we don't want to create this next thing that we've seen could be horrible but that you keep hyping up as if it's going to be the best thing since sliced bread. I think we're at the beginning of a very powerful pushback by ordinary people. So I guess what you can do is: if you work for a company, join a union.
00:27:36
But also, I think we've got to be a little braver about standing up to what our companies do. If you see a problem, you should talk about it. If you can see that something is going to cause harm, you should ask what sort of policies are in place for things like risk management. And I know that after we updated the ACM's Code of Ethics, we had people joining the ACM because it had such a strong code, because if you're a member of a professional organisation, you can say to a manager: no, I can't do what you want me to do, because it's against the code of ethics, if it's a problematic thing, right? And we're actually starting to see people doing that.
00:28:19
And I think that's another powerful thing you can do as an individual, because the ACM can't apply its Code of Ethics to companies, but we can apply it to individuals. So if you have a code of ethics, or at least some sort of professional standing, you can say: look, as a professional, I'm not happy to create technology that will do this particular harm. That's a really powerful thing to say back. It comes with risk, though, because they could just say: well, OK, you're obviously not a good fit for our company. But you can't make an omelette without cracking a few eggs, right? Sometimes the principle is worth the cost, and if you're in a position where you can take that stand, then that's maybe something you should think about doing. People do that every day. People take stands and they lose their jobs, and they're very brave people. And as a group of tech people, I think we should be braver about doing that.
00:29:18 Lina Zubyte
I'm glad that we're more and more tech savvy and understand what's what. I can think of examples where, say, we saw personal details in the logs of an internal system, and that doesn't sit well, you know, and then someone can just say: hey, this is not OK. And companies do act on it. Sometimes it can be a bit of a fight, because you feel alone, because you're the only one raising it.
00:29:46 Catherine Flick
Yeah. I think that's also where professional organisations can help, because in the ACM we have a Committee on Professional Ethics, and certainly, for any ACM members out there, if you're struggling with these questions, do get in touch with us. We may not be able to give you practical support, but we can certainly give you moral support. We look at cases where people report ACM members to the committee for having potentially committed an infraction of the Code of Ethics, and some of these cases can be quite complex. I can't really talk about the cases I see, but there are certainly cases where people are being held to account for their actions. Sure, the worst we can do is kick you out of the ACM, but for some people that's a big deal, being seen as a professional. And for academics, the ACM is a big academic player, so that's often a draw...
00:30:49
Yeah, it's really hard to be a whistleblower. You can feel quite alone. I haven't done a public whistleblowing event like that myself, but picking and choosing the right sorts of people to talk to can actually be quite helpful, because they can support you. I don't want to do a 'how to whistleblow'; there are much better instructions out there than I can give. But I know from people I've talked to who have whistleblown that you can find support, especially amongst other people who have done similar things. There are support groups out there for whistleblowers especially. In games, for example, post-Gamergate and the Me Too movement, a lot of women's groups have popped up for people who have gone through harassment and abuse online for calling out bad behaviour. So with these sorts of things, yeah, you might feel alone, but there are a lot of people who have gone through what you're going through.
00:31:55 Lina Zubyte
Talking about games, I have to ask you about them, because I'm very curious. First of all, what's your favourite video game? Secondly, what's the summary of the work you're doing there? Because the gaming industry is frequently known to be a little bit toxic: the working conditions and everything. So I do believe a professional like you can help out a lot. What are you working on the most there?
00:32:20 Catherine Flick
Well, my favourite video game... In terms of the game I keep coming back to and playing over and over again... I mean, I've played a lot of games multiple times. I've played The Witcher 3 two or three times through, and Dragon Age: Inquisition a couple of times. But the one I keep coming back to is Dwarf Fortress, an amazing simulation game that involves procedurally generated worlds and histories. It's just a massive game; anyone who's ever played it will know exactly what I'm talking about. You lose your life to it slightly, but it's one I keep coming back to, and it's always got something new. And because the graphics are so basic, it never goes out of fashion either. So that's my favourite game.
00:33:07
In terms of the games work, at the moment I'm looking a lot at how people play games, because, you wouldn't believe it, we don't actually have a lot of information about very basic research questions on how people play games. That's largely because the data for that sort of research is often locked up by big companies, who don't want to share it for competition purposes. But my colleagues and I have an agreement with Unity, which is a big games platform; it's an engine that people use to create games, and we have access to some of the analytics data from games that use their analytics package. So we can get access to information, and this is all anonymised and at a high level, so we obviously don't...
00:33:56
The data goes through a whole bunch of anonymisation before we get access to it, so we have no idea who anyone is. If someone plays the same game multiple times, we can tell that; we can't tell if the same person plays different games. So we can't say they played one game one day and a different game the next; we don't have that information, so we can't build a profile beyond gameplay within one game. And that's actually really interesting, because we've got billions of data points on this, over roughly ten years, and it's really fascinating just to see how people actually play. When do they play? Do they move while they're playing?
00:34:45
We can tell locations at roughly city level, so we can see if people move between large locations, catching a bus or whatever, and we can compare different countries and how they play the same games. So if we take a popular mobile game, we can look at how people in different countries play it and see what the patterns are. These are all research questions that nobody's ever really looked at, mostly because the data hasn't been there. And where data has existed, it's mostly been collected through self-reporting, and self-reported data is really problematic; it's just not very reliable. If it's the best you can get, that's fine, but if you can get actual telemetry-type data, that's much better quality. So we're very lucky to have this source, because it's very, very rare for games companies to give this out. That's the sort of work I'm doing at the moment, because we need this foundational work.
00:35:44
Then I can go and do what I find more interesting, which is: why? We need to find out the whats before we can work out the whys. What I'm really interested in is why certain communities play in certain ways. So we find that... oh God, I can't remember off the top of my head, there are different countries that play similarly. There's a sort of urban myth that East Asia plays very differently from North America and Europe, because there's a whole bunch of different cultural things at play there. But actually we found that China and, I think it's the US, play very similarly, despite the fact that you would think there's a huge difference between the two cultures. So I want to find out why that is. There are these sorts of why questions. I'm very much about the why, because I'm more of a social scientist, and obviously, as an ethicist, I'm interested in the social impact of this stuff.
00:36:36
I'm very curious about a lot of these very basic questions. How do people play games? When do they play games? I think the movement one is really interesting as well, people moving from place to place. We can see, for example, the cycle of the Lunar New Year migration patterns in East Asia, and we can see how people play games while they do those activities and what they play. And spending is another side of this: how much do they actually spend on games? When do they spend it? Who is actually supporting these games? Is it the classic whales and minnows? Is it a few people paying lots of money who actually produce the income for a game, or lots of people spending little bits of money?
00:37:26
And we don't really have that information, because money is actually one of the worst types of self-reported information: people don't remember, or they don't want to think about it, or they just lie outright. Self-reported financial data is very unreliable. So having access to that is really useful for understanding what works in terms of these monetization practices. What does a successful game look like? These are more questions from the developer's perspective, but they're very basic questions, right? The idea that you'd make a game and have no idea how well it might do based on the genre and the audience you're aiming at... For a lot of other industries, these things are fairly well known...
00:38:15
If you release a movie, you know what demographics you're aiming at, and you probably have a rough idea of how much you're going to get back, how well your movie is going to do if it does XYZ in the process. But in games it's very, very hard, because all that data is kept very, very secret. So that's where I am at the moment. I'm on the precipice of opening the door to a whole bunch of new interesting things, and what I want to do with that, which I think is more important, is to understand how and why we play games in order to find ways to discourage bad monetization practices, to start with. All this loot-boxy stuff. Can you make a sustainable game without using horrible psychological mechanisms to pull people in, like forcing them to log on every day? So I want to start breaking down some of these perceived bad parts of games. And then there are other things. We did a whole study on whether the...
00:39:23
So in China, they prohibited children from playing games other than for, I think, two hours at the weekend or something like that; I can't remember the exact numbers. We wanted to see if that actually had any impact on the amount of game play in China. Obviously we can't tell who are adults and who are children, because we don't have that information, but we could see if the overall amount changed. If it did change, then obviously something must be working; if it didn't change, then probably something else was happening that we can't tell from the data. And we found there was actually not much of a difference before and after. So there's obviously something else happening there. We can speculate: maybe they're using other accounts, or their parents' accounts, or making lots of accounts, or they just never played that much in the first place and it was all a response to a problem that didn't even exist, which happens all the time as well. These are the questions I'm interested in now.
00:40:21
And particularly with regard to children. I'm interested in how children play games, not least because I'm interested in helping parents navigate games. I'm a gamer and I've got a five-year-old, well, he's almost five, and he's now loving playing games himself. We sit down and play games with him, we make sure the games he plays are appropriate for his age, and he gets us to help him with the hard bits and all that. But I find parent friends of mine are letting their kids play all sorts of stuff that's just not suitable, because they're not gamers. They don't really understand it, and they're not interested. They just think: oh well, everybody else is playing whatever it is, so I should let my kid play that too, because you don't want your kid to be left out. I think I'm a little harsher in that regard, just because of my job. My poor son is probably going to miss out on a whole load of games that are extremely inappropriate for his age, but...
00:41:15
One of the things I really want to do is get that communication going better, because at the moment there's very little. Apart from people being worried about screen time and things like that, there's very little about how you actually do this positively. How do you create a positive game experience with your kids without letting it get problematic? It's a fine line to draw there, right?
00:41:38 Lina Zubyte
That's fascinating. I'm really looking forward to what you find out, and I will add the link to your website so people can check out your work, as well as the Code of Ethics. We've just scratched the surface of this whole topic, right? It's a very big one. But to wrap up this conversation: what is the one piece of advice you would give for building high quality products and teams?
00:42:05 Catherine Flick
Well, I think quality is a very key ethical issue. My dad used to be a food manufacturer, and his thing was always quality first; everything else falls in behind that. I think he's very right in that regard: you set your standards at the right level and you hold to those standards regardless of who comes at you. I mean, he fought off supermarkets in Australia, which is like fighting massive lions with razor-sharp teeth. If you can stand up to the pressures that might come to reduce that quality, then you're in a space where you're more likely to create good things, because it puts you in the right sort of mindset to be thinking about all these things I've been talking about. If you're thinking about quality, you're looking to the future, you're looking at who's going to be using it. You're looking at all of these questions.
00:42:54
If you want to build high quality software, you need high quality people who think in the same sort of way. If you get the right people and the right teams creating the right technology, you'll be set. You might not be the Googles or the Microsofts of the world, but if you create good, solid software that actually fulfils a need people have, rather than chasing the latest AI hype or whatever, you're always going to have a good, solid audience. You might not have increased growth every year, but you'll probably bring in a steady profit, and that used to be what business was about, right? Creating a profit every year, not chasing the latest hype. So: slow, steady, think about the future and what the potential impacts might be.
00:43:46
And be brave about saying: oh, maybe the software doesn't live up to our standards, and either fix it or shut it down. Nobody much is going to be mad at you if you shut it down when it's bad. So stick to your standards. Be brave. Be persistent about holding back the storm of the poor quality, AI-hype-y whatever that's going on at the moment, and I think you'll come out the other side well.
00:44:14 Lina Zubyte
Thank you so much!
00:44:17 Catherine Flick
Very welcome. Thank you.
00:44:19 Lina Zubyte
That's it for today's episode. Thank you so much for listening. In the episode notes, you can find Catherine's website link where you can follow her work, which is extremely exciting. And until the next time (or no next time, I don't know yet :)) - do not forget to continue caring about and building those high quality products and teams. Bye.