Quality Bits

Game Testing, Machine Learning, and Quality Engineering with Stuart Crocker

December 12, 2022 · Lina Zubyte · Season 1, Episode 8

What is it like to work in the game industry? Is testing games just fun and games? In this episode of Quality Bits, you'll get a chance to learn the answers to these questions and so much more.

Lina talks to Stuart Crocker, who is currently working as a Head of Quality Engineering with no... quality engineer roles. He has plenty of stories to share about game testing and even the use of machine learning in testing.

Find Stuart on:
- LinkedIn: https://www.linkedin.com/in/stuart-crocker-a9a5233/
- His blog: https://dragonsforelevenses.com
- Twitter: https://twitter.com/StooCrock

Follow Quality Bits host Lina Zubyte on:
- Twitter: https://twitter.com/buggylina
- LinkedIn: https://www.linkedin.com/in/linazubyte/
- Website: https://qualitybits.tech/

Follow Quality Bits on your favorite listening platform and Twitter: https://twitter.com/qualitybitstech to stay updated with future content.

Thank you for listening! 

Lina Zubyte (00:07):
Hello, hello. Welcome to Quality Bits - a podcast about building high-quality products and teams. I'm your host Lina Zubyte. In this episode I'm talking to Stuart Crocker. Stuart shares so many interesting stories about his experience in game testing. We touch on machine learning and its hype, and how he actually worked on projects that used machine learning in game testing. Also, we briefly discuss his role as Head of Quality Engineering in a company that does not have testers. So enjoy this conversation and let's go.

(01:02):
Hey Stuart, it's very nice to have you on my Quality Bits podcast. Could you please shortly introduce yourself?

Stuart Crocker (01:13):
Hey Lina. Thanks for having me. I'm Stuart. I'm currently the Head of Quality Engineering at a company called Legl (that's L E G L, without the A). On a daily basis, my job is about helping us as a business be as effective as we can be, and the foundation for that is our engineering teams. We sell software and we sell services that are based off of software. A little bit more about me outside of work: I thoroughly enjoy looking at the stars and the planets. I treated myself during lockdown to a nice big telescope, one I'd been looking forward to for a long time. And I absolutely adore looking at Jupiter and Saturn and, whenever I can and the sky allows, any of the nebulae and galaxies that we can see.

Lina Zubyte (02:00):
Beautiful. Very poetic of you to have this magical side and interest. So just to tell listeners how we met, we actually met how many years? Four years ago?

Stuart Crocker (02:14):
Four years ago

Lina Zubyte (02:15):
You're counting. Great. So we met at a Quest for Quality conference in Dublin, and that conference had a theme of AI. So there were multiple talks about the future of AI or, I would say, these topics that keep coming back. And there I remember us talking about the fact that we talk about AI like it's something new. However, you mentioned back then that you'd used machine learning in game testing years earlier, and I remember that conversation being so interesting to me. Could you share a little bit more about this experience? What were you testing there and what kinda project was it?

Stuart Crocker (03:05):
Yeah, of course. And it was a good conversation. So at the time I'd not long left Microsoft, where I'd worked as part of the Xbox team, and I'd been spending some time working on a game. It was an incredibly interesting game, something that I was really passionate about, and we did some really, really interesting things. One of the key drivers for some of the things that we did was that it was due to be launched as games as a service. And even back in 2012/13, when the project kind of kicked off, games as a service — something that you play on your PC or, mostly, on your mobile phone — was already relatively common; it wasn't a new thing. But games as a service on a console, with what's called a triple A, like a high-budget title, was relatively uncommon. And I guess building a game for games as a service is probably a bit different to building a normal game like you would've done 10, 15 years prior, that you stuck on a disc and shipped out.

(04:07):
So one of the things that we needed to do was to get testing in maybe a bit earlier than a traditional game is used to. That was really to kind of force the idea that we needed to build something that was relatively easy to maintain, and to not let the amount of things that we needed to fix build up too much, so that we got into the habit of making changes and shipping them within a week's cadence. And so testing a game that way is — well, first of all, if you were to test it in the way most games were tested back then, where you'd have a team of tens of software testers, then that's obviously gonna be quite expensive. And not just expensive — and not necessarily slow, either. I mean, I definitely no longer quite buy the idea that having a big bank of testers has to be slow.

(05:00):
But it's always that kinda perception that comes with it. So what we wanted to try to do was work out how we could test this game in as lightweight a way as possible, with as few people as possible, but in a sustainable way. It's basically how modernish, relatively well-built software is built, tested and shipped, but done on a game. And that was something that we had to learn and practice. And so I was constantly looking for opportunities for us to do that. Automation's obviously gonna be a sort of key part of that, but knowing what to automate in a game — there were, even back then, lots of opinions on what you can and can't automate. Then there was the tooling that we had in place and how much we had to build. It's not like you've got a JUnit or a JS.

Lina Zubyte (05:48):
That's really interesting and actually sounds very different to what I have worked with. To understand a little bit more: what are some of the issue areas that games tend to have?

Stuart Crocker (06:01):
Well, one of the areas where probably most bugs or issues come up with games — and especially for us, because it was cross-platform; in the end, at least, we were gonna ship on Xbox and PC — was stability. Games, when they're being built, are notoriously crashy. They tend not to be very well optimized, and that means that you end up pushing memory limits a lot. The performance, the frame rate of the game, tends not to be particularly great either, which actually slows down testing and iteration. So those kinds of things get in the way, and working to improve them is actually quite a good thing to do quite early on.
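
To make the stability and performance concern concrete, here is a minimal, hypothetical sketch of the kind of budget check an overnight automated play session might feed. The frame-time and memory thresholds, function names, and numbers are all assumptions for illustration, not the studio's actual tooling.

```python
# Hypothetical sketch of a nightly stability check: summarise one overnight
# soak run and flag frame-time / memory budget violations. All names and
# thresholds are invented for illustration.

from statistics import mean

FRAME_BUDGET_MS = 33.3        # ~30 FPS target, assumed for the example
MEMORY_BUDGET_MB = 4096       # assumed console memory ceiling

def check_soak_run(frame_times_ms, peak_memory_mb, crash_count):
    """Summarise one overnight run and report whether it stayed within budget."""
    report = {
        "avg_frame_ms": mean(frame_times_ms),
        "worst_frame_ms": max(frame_times_ms),
        "peak_memory_mb": peak_memory_mb,
        "crashes": crash_count,
    }
    report["stable"] = (
        crash_count == 0
        and report["worst_frame_ms"] <= FRAME_BUDGET_MS * 2   # tolerate occasional spikes
        and peak_memory_mb <= MEMORY_BUDGET_MB
    )
    return report

# Example: a run with one long frame and no crashes
print(check_soak_run([16.6, 17.1, 40.2, 18.0], peak_memory_mb=3900, crash_count=0))
```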

Lina Zubyte (06:39):
I do see how it ties back to, as you said, modernish testing. It's basically including cross-functional requirements, like performance or stability, from the start and recognizing their importance. I really like that idea, but how did you tackle this challenge and test with it in mind?

Stuart Crocker (07:02):
So in this case, one of the things that we tried, which took a reasonable amount of convincing, was to use the AI within the game itself to basically play and test the game. At the same time as we were building this game, we were building an HD remaster of another game, and we didn't really have the opportunity to do that for that one. So there we followed a more traditional route, which is effectively to script the character or the camera running through a level. And whilst generally that worked, it obviously doesn't handle things very well, and the scope of the tests is restricted to what you've asked the test to do. Whereas if we were to use the AI, we could effectively get the game to play itself. It would be really easy for us to modify and to get breadth of coverage across the characters.

(07:53):
We'd get breadth of coverage based purely on the fact that the enemy AI and the placements would differ from game to game. And so you kind of have this very natural randomness, kind of like mini chaos engineering-ish, but not quite. And that allowed us to get a lot of really good stability coverage on a nightly basis. And as far as the link back to AI and machine learning goes — because this was effectively a mix between a third-person action adventure game and a kind of RTS, a two-and-a-half-D, third-person, top-down perspective strategy game, the quality of the AI needed to be pretty good. And so what one of our wonderful devs tried to do was to use Monte Carlo simulation to build up the AI, and then every night, not only were we testing, to some degree, the stability of the game and the performance of the game, but actually we were also testing out what the AI was like. So we were getting lots of really useful feedback each night in a relatively hands-off and low-cost way.
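
To illustrate the Monte Carlo idea in the simplest possible terms, here is a toy sketch of a rollout-based action picker for a game agent: try each candidate action many times with random continuations and keep the one with the best average outcome. The actions, scoring, and state are invented for the example; the real system described above was far richer and its exact technique is only loosely characterized in the conversation.

```python
# Toy Monte Carlo rollout: pick the agent's next action by averaging the
# outcome of many short random simulations. Everything here is illustrative.

import random

ACTIONS = ["advance", "flank", "hold", "retreat"]

def simulate(state, action, depth=5):
    """Play out a short random continuation after `action` and return a score."""
    score = state + {"advance": 2, "flank": 3, "hold": 0, "retreat": -1}[action]
    for _ in range(depth):
        score += random.choice([-1, 0, 1, 2])   # random opponent/world response
    return score

def choose_action(state, rollouts=200):
    """Pick the action with the best average outcome over many random rollouts."""
    averages = {
        action: sum(simulate(state, action) for _ in range(rollouts)) / rollouts
        for action in ACTIONS
    }
    return max(averages, key=averages.get)

print(choose_action(state=0))
```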

Lina Zubyte (09:11):
I wonder though, how was the setup? Did you need to build anything or use some special equipment to do that type of testing?

Stuart Crocker (09:20):
There were things that we had to build to do that. We had to build not just the deployment of the game — it was a five-player multiplayer game, so in order for us to exhaust that we needed five Xboxes. That's quite a lot. Thankfully, across the studio we had a lot. So we built tooling to basically scour the network to find out which Xboxes were available during the night, deploy the game, that kinda stuff. But the really useful part of it all was the analytics and telemetry data that was put into the game, which basically allowed us to replay a lot of the games that were played overnight. So any time we ran into problems or crashes, the crash dumps were stored and bugs were automatically created, and we were able to go back — through some tooling that was built in JavaScript, based on some charting and timeline technologies — and effectively replay that game frame by frame in a very low-fidelity way to see what the players did, what actions they took, what the enemies did, where they were placed, whether or not the game went through all the various different bits and bobs.
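
As a rough illustration of the replay idea, here is a small sketch that groups telemetry events by frame and plays them back in a low-fidelity way. The real tooling was written in JavaScript on top of charting and timeline libraries; the event format and the Python below are assumptions made purely for illustration.

```python
# Illustrative telemetry replay: group logged events by frame index and print
# a frame-by-frame account of what happened during an overnight run.

from collections import defaultdict

# Telemetry events logged by the game: (frame, actor, action, position)
events = [
    (1, "player1", "spawn", (0, 0)),
    (1, "enemy_a", "spawn", (10, 4)),
    (2, "player1", "move", (1, 0)),
    (3, "enemy_a", "attack", (9, 4)),
    (3, "game", "crash_dump", None),
]

def replay(events):
    """Group events by frame and print a low-fidelity frame-by-frame replay."""
    by_frame = defaultdict(list)
    for frame, actor, action, pos in events:
        by_frame[frame].append((actor, action, pos))
    for frame in sorted(by_frame):
        print(f"frame {frame}:")
        for actor, action, pos in by_frame[frame]:
            print(f"  {actor} -> {action} {pos if pos else ''}")

replay(events)
```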

Lina Zubyte (10:22):
I love the idea that, you know, you would look at data and then even check out what happened. I think that's one of the undervalued parts sometimes when working with the quality of something: the data and analytics, because you can look back and find some bugs, and it's not just one person checking it out. So it's using the tools in order to find lots of issues. Are there any other examples that you have of using machine learning?

Stuart Crocker (10:54):
Yeah, there's one other one I think I remember us talking about. So, a bit before the last example, we got asked to build a children's interactive TV show based off of Sesame Street, using the Kinect sensor from the Xbox 360. And it turned out that the Kinect, the first one at least, wasn't really designed to process the bodies and skeletons of little people — four, five, six, seven year olds. So our devs had to do a reasonable amount of work to use the raw data that came out of the Kinect and then build our own gesture recognition system on top of that. So this was more of a machine learning exercise, where we taught the gesture recognition system to learn what different gestures were: throwing a ball, or catching, or jumping up and down, or spinning around. And we would do a couple of things.

(11:49):
Obviously the iteration on the gesture recognition system was done as a group. We would try and get as realistic data as we could, but then we also used that data for, effectively, regression suites that were running nightly. So we would have some baseline data which we were really confident in, that had been appropriately tagged up. And then we would feed that Kinect data through the gesture recognition system, and we'd wanna make sure that the gestures that we knew were in any given feed were appropriately recognized as an output. So we had training data, we had testing data, and then we iterated on things on a really regular basis. So again, what we tried to do there was to use as much of the technology that we were using to build the thing to test the thing as well. And I'm really lucky I've got a software engineering background; these kinds of things come relatively second nature to me. I'm quite lucky in that respect. And so for all the projects that I work on, I always try to use as much of the technology that we're already using to help us test. So both of those were really, really interesting projects. I was really lucky to get to work on them and to approach testing them in maybe a slightly different way than would traditionally be put together.
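
A hedged sketch of that regression loop might look like the following: tagged baseline recordings are fed through a recognizer, and the output is compared with the expected gestures. The `recognise` function below is a crude stand-in, not the studio's actual gesture recognition system, and the data format is invented for the example.

```python
# Illustrative regression harness: replay tagged baseline recordings through a
# (stand-in) gesture recogniser and check the expected gesture comes out.

def recognise(skeleton_frames):
    """Stand-in recogniser: classify a recording by a crude vertical-travel heuristic."""
    vertical_travel = skeleton_frames[-1][1] - skeleton_frames[0][1]
    return "jump" if vertical_travel > 0.5 else "throw"

# Baseline recordings we are confident in, tagged with the gesture they contain
baseline = [
    {"frames": [(0.0, 0.0), (0.0, 0.8)], "expected": "jump"},
    {"frames": [(0.0, 0.0), (0.6, 0.1)], "expected": "throw"},
]

def run_regression(baseline):
    """Return True if every tagged recording is recognised as expected."""
    failures = [
        case for case in baseline
        if recognise(case["frames"]) != case["expected"]
    ]
    print(f"{len(baseline) - len(failures)}/{len(baseline)} recordings recognised correctly")
    return not failures

assert run_regression(baseline)
```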

Lina Zubyte (13:14):
Yeah, I remember this because we were talking about the use of machine learning in testing, and at that time I was also working on testing a chatbot, I think. And my learnings there were that it's all about the quality of the training data or test data that you have. And very often data samples are actually made by other humans, so they could be very biased. And I felt like our chatbot was speaking like one of the developers <laugh>, because we actually hard-coded it to be that way. Yeah. And this use of data and the importance of it always made me think that, oh yeah, we should have open source data, because then we can make the samples very diverse and good. How did you make sure that you had diverse and great samples in the projects you worked on?

Stuart Crocker (14:10):
So that's a really good question. Again, given the problem space, we were kind of forced, in a way, to work in a different way again. We were building an interactive experience which had probably never really been built before — definitely not what the technology was designed to do, at least. And so we effectively had two fairly big unknown problems there. The first was: is this gonna work? From a technology point of view, we weren't really sure, we didn't know. And the other one was, we didn't even know if it was gonna be good, or if it was gonna be fun, or whether kids were gonna use it. So one of my colleagues in the studio did an amazing job of reaching out to various people in the community to get children in, from a very early point in the development process, to play the various interactive episodes, even from a prototyping stage.

(15:05):
So at one point, if I remember correctly, we effectively had what looked like a PowerPoint running on an Xbox 360, with rough sketches of Sesame Street characters and some rough audio. In fact, I think I even recorded some voiceover for it at one point, which must have been incredibly scary for the kids, bless them. And we managed to get some children in to play with that, and obviously, with a massive amount of caution around what data we stored from recording the sessions with the kids, we would only ever store the skeletal data. So basically the point data that the Kinect had — it was never traceable back to any individuals. No PII data in there whatsoever. But yeah, that's how we managed to get ourselves a really diverse training set, because not only are kids incredibly diverse themselves, and we had a really diverse sample, we had a reasonably diverse studio as well, which is really cool.

(16:07):
Kids do lots of crazy, bonkers things <laugh>, and they're almost uncontrollable. So quite often a reasonable amount of time was spent with the data, getting it all tagged up and making sure we understood what was going on. And so there was plenty of variance in there for our gesture recognition system to have to cope with. So, I guess no surprise, the gestures probably weren't as strict as the gestures that, you know, you'd have on your phone, but equally they had to be semi-recognizable as a throw or a jump or a spin or a sort of sit-down. So I mean, it was a really challenging problem for the team to solve, and a lot of fun, and a really interesting problem to find a solution to.
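
As a purely illustrative example of the "skeleton-only, no PII" principle and of tagging recordings, here is one way such an anonymized record could be structured. The joint names, session-id scheme, and gesture label are assumptions, not the project's real schema.

```python
# Illustrative anonymised recording: only point (skeleton) data plus a tag,
# keyed by a random session id that cannot be traced back to a child.

from dataclasses import dataclass
from uuid import uuid4

@dataclass
class TaggedRecording:
    session_id: str       # random id, never traceable to an individual
    gesture_label: str    # tag added during review, e.g. "jump"
    frames: list          # each frame: {joint name: (x, y, z)} point data only

recording = TaggedRecording(
    session_id=str(uuid4()),
    gesture_label="jump",
    frames=[
        {"head": (0.0, 1.1, 2.0), "hand_right": (0.3, 0.9, 1.9)},
        {"head": (0.0, 1.5, 2.0), "hand_right": (0.3, 1.3, 1.9)},
    ],
)
print(recording.session_id, recording.gesture_label, len(recording.frames))
```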

Lina Zubyte (16:48):
Oh, I love these stories, and I know that they are not in your daily work right now, but I just had to ask about them because I think they're worth hearing about. And I promise this is the last question about the game industry, but I see you empowering game testers a lot on social media, and in general, the game industry sometimes gets labeled as toxic and game testers may be undervalued. What is your stance on that? Why is it labeled toxic? Is it hard to work there? What are some of the stereotypes that are there, and the conditions, maybe?

Stuart Crocker (17:34):
My own personal experience started off horrendously. I fell into a game testing role out of university, when the software engineering job that I'd secured prior to graduating fell through last minute, and I became a technical certification tester for one of the platforms. And on the whole, I really enjoyed it. It probably still is the most fun job I've ever had, but there were definitely instances during that time — just in the six months that I was there — where it was quite obvious that, even though I was sitting in a room with people with physics degrees and chemistry degrees and all sorts of really, really intelligent people, not that that's important, we were generally treated by management as — I don't like the word resource, but at times it was about as bad an example of labeling a human being as a resource as you could imagine. And after that, I've been generally relatively lucky.

(18:41):
I've been able to manage my own experiences relatively well, but then I come from a very privileged position and not everyone has the same opportunities that I do. So I've always tried, where I could within my sphere of influence, to make sure that the types of experiences I was hearing people were having, or had even had myself, weren't repeated — we did what we could to make sure of that. As for why — it's a really interesting and difficult question to answer, because when you start digging into it, none of it really makes any sense. It's just bizarre. But at the end of the day, I guess it's just that age-old case of: on the whole, I feel that software testers in games are just seen as people who play the game. As in, you can be a good software tester in games if you know how to play games.

Lina Zubyte (19:35):
I mean, it's fun. You just have fun all the time.

Stuart Crocker (19:39):
Exactly. And the worst of my experiences was when a group of us got asked to work on the weekend and, I guess because all of us said no, we were effectively told that we had nothing better to do anyway, because all we'd do when we were at home would be playing games, so we may as well be in the office playing games and getting paid for it. That kind of underlines the idea of what we're working against, and it's just really difficult for people, because it's hard enough in some of the bigger software companies to justify why you have a software tester or why you do software testing — not even having people assigned specifically to that role, but just the value of testing. The number of higher-ups that have said to me, "Stuart, we need to get rid of manual testing."

(20:38):
And I'm like, what? If that happens within the software world, in the games world it's gonna be even more magnified. So, oh, it's really sucky. That's why I have such an affinity for people who class themselves as QAs or software testers in the games industry, because in my experience they actually tend to be the best testers. And what I mean by that is: they don't always have the capability or the opportunity that we have in software to just do some of the basic things, like use basic abstractions to isolate one part of the system from another part of the system. So that level of control that we might normally get when testing a piece of software in a browser, for example — you rarely have that, unless you're working for a particularly mature studio or the tooling that they use has been built to do that.

(21:32):
And there are some tools that do that. But as a software tester in games, you are having to isolate individual problems or individual hypotheses — if I do this, then this might happen — whilst also having to take into consideration pretty much the whole of the rest of the game. To do that, and to do that really well, is such a skill. I generally find that the really good software testers from games are fantastic. Their ability to problem solve and manage incredibly complex situations and come out with succinct problems, or bugs, or whatever you wanna call them — I think they build a phenomenal capability to do that. So that's why I have such an affinity: it's not great fun, and generally they are massively underappreciated for the level of complexity and skill that they have.

Lina Zubyte (22:31):
Yeah, I guess also the stories that you shared will help people listening to this appreciate testers more, because it's so much more than clicking on things and the stereotypes we get. And I know you're the hugest fan of testing: very often when there's a discussion you say, actually, this is testing — site reliability engineering, that's testing. Stuart's comments are like that; you're like, this is testing. So, as a big fan of testing, currently you are Head of Quality Engineering, but as far as I understand, your company has no testers as a role. Is that correct?

Stuart Crocker (23:21):
Yes

Lina Zubyte (23:24):
Yes. So what does your role entail? Could you explain a little bit more about it and why don't you have any testers as a role?

Stuart Crocker (23:34):
Okay, so I'll start with why we don't have any testers. When I joined the company last year, in 2021, the company had two development teams. They'd been running the service for a reasonable amount of time with those two teams. They grew a bit during last year, and they'd been doing that without any testers. So it was an opportunity for me to take a step back and kind of look at what that is and how that's working. And then I thought, I don't need to rush into any conclusions — the company seems to be doing all right at the moment. I needed to understand why they hired me, what did they want to do, what were the ambitions? And from that, just keep an absolutely open mind. And the evolution that we've gone on has, I guess, enabled us to continue the journey that they'd already started without introducing the specific role of a software tester.

(24:35):
So for years I'd been very much of the belief that everyone's a tester, and in this case, to some degree, they were. They probably didn't realize or recognize how much testing they were doing, but they were obviously doing some, because again, the company was relatively successful, we had lots of users, it was making money and all that kind of stuff. So they were obviously doing something right. So I thought, listen, look at what we wanna achieve and where we wanna go, and then just be honest with myself and ask: is adding a person, or two people, or three people with the title of software tester or even quality engineer, whatever that means, into the mix really, really gonna help out? And fairly early on, I didn't believe so. In fact, on the balance of would it help or would it hinder, I took the stance that maybe it would hinder more.

(25:33):
And what do I mean by hinder more? Okay, well, where do you wanna get to? To a point where we are shipping faster than we were without impacting the quality — and in fact trying to get the quality up as well. And if I want us to grow faster, then generally our iteration cycles need to be really quick. If I could remove any excuse, at least early doors, for people reverting to the usual "throw it over the fence", then I was gonna try and do that. And I guess that's what I did. We're not quite a year in yet. I've been asked a few times whether or not I want to start finding some people to fit that bill, and based on where we are on our journey at the moment, I'm still not convinced it's the right thing to do now. But I'm not in a position where I think that that's never gonna happen either. I think eventually there'll come a point where the role that I'm playing at the moment is gonna need some additional support. And how we go about doing that will determine whether or not we hire somebody, or a couple of people, that might not necessarily have the sort of default software engineer title like everybody else in the engineering team does.

Lina Zubyte (26:51):
Yeah, it's funny actually. Once I was at a conference and there was a debate on whether every team needs a QA, and I was playing the role of the advocate for not having a QA, and everyone kept losing to me because I had all the arguments, right? Because if we actually have this quality mindset in the team, sure, the role may not be necessary, because everyone tests — testing is an activity. If we're very well skilled and we do all kinds of processes and ceremonies, we should be able to cover it. The issue very often is that teams which manage to do it well are rare. So this is why we keep thinking, of course, about experts in this area. So Stuart, what does a Head of Quality Engineering do then? What does your role entail?

Stuart Crocker (27:55):
I'm still learning, if I'm honest, and frequently making mistakes on that journey. So when we first sat down and kind of talked about, well, what are we gonna do, where do we wanna get to, what does the business need — I guess I took quality engineering and I didn't really have a formal definition. In fact, at my previous company I probably fought against it quite a bit, which makes my current title even more ironic. But I tried to understand: well, if quality engineering is a thing, what is it, and what are the outcomes of good quality engineering? What's the difference between a team that isn't doing quality engineering, or has no support to do it, and a team that does? And I guess I came up with two things — no surprise what they are: speed and opportunity. One is: how can we go faster?

(28:51):
And the other is: at whatever pace we're going, how do we make the most of the opportunities that present themselves to us? What I mean by that is, when we ship a feature, that feature has a certain amount of potential to succeed. Some of that potential is in our control, from a technical, architectural, design, and build-quality point of view. Some of it's not, which is our understanding of the customer, the user base, the domain. But the bits that are under our control and are relatively easy to manage, those are the things that we need to make sure are absolutely spot on, so that when things get out into the wild and get used by our customers, the feedback is about the feature and how useful it is to people in whatever they're trying to do — as opposed to "this doesn't work", or "this does weird things", or "when I click this I get this error message", or "when I try to do this, it doesn't work four times outta 10". Those are the kinds of areas where, when I talk about making the most of our opportunities, I'm talking about how we build software at speed to the appropriate level of quality.

Lina Zubyte (30:01):
Okay. And how do you do that?

Stuart Crocker (30:04):
Great question. I'm still not sure myself, if I'm a hundred percent honest, but hey, we're trying. So what I wanted to do was to learn from some of my prior mistakes at other companies, whereby I wanted change to happen as quickly as I could think it. And that didn't always work. That's not surprising to me now, but at the time it probably was, and I know it was because I often found it quite frustrating. So one of the things that I absolutely knew I needed to do this time was really manage my expectations of the rate of change that we could go at, even though we only had a relatively small team, like 10 software engineers, to work with to start off with. I also wanted to match the company values. So one of the things that really, really attracted me to the company I work for now is the culture.

(31:02):
And I know that word gets used a lot, but I've found that even though it's a kind of startup scale-up, and it's moving really fast, and it's working in a relatively new market, the people within the company just generally have a really good way about them. What I'm trying to say is, when I compare that to previous places — not that they were bad, it was just that there was the usual kind of dysfunction when it came to people: talking over each other, or dismissing people's ideas, or not even listening to people — that's not here at all. So I wanted to embrace that as much as possible, if anything just for me, so that I could learn and try to become a better person. And I wanted to set out a roadmap just to give people an idea of the general direction that we were going in over a long period of time — not just to inspire, but almost to try and remove some of the scariness around it. What I mean by that is:

(32:07):
Just so you get a picture in your head, here's the scope of the things that we're gonna be aiming for. When we're gonna get to them, who knows, but here's a rough order that we're gonna go in, and we'll jump from place to place as we need to. So I kind of tried to turn it into a bit of a tech tree from an RPG game, where you've got some foundational things that you need to get up and running, and then they branch off depending on the problems, the challenges, and the opportunities that present themselves as we go on this learning journey. So we started off there. I really wanted to take the scariness away from it, so almost everything that I've introduced or wanted us to try out, I've introduced and then left gaps — weeks, probably a bit more at times.

(32:58):
Again, just that sort of soft introduction: here's the thing, and hey, we might give this a go, but don't worry about it for now; if the opportunity comes up, we might give it a try. That way, what I'm kind of doing is just showing people possibilities, and then it's really very much a case of: can we try it? If there's enough desire to give something a go — because it fits with retro feedback, or some dysfunction, or something that we know we could do better — and the thing that we would like to try is a tool that could help us solve that problem, and people want to give that a go, then we give it a go. So that's kind of where I'm at at the moment. We're going along that roadmap.

(33:41):
We're trying different things. So that tech tree covers things like basic testing practices: taking good notes, running ensemble exploratory sessions and doing that in an effective way. There's bits in there around observability and testing in production. There's bits in there around how we design our systems to be testable, and what does that even mean, and how do we even do that? And so whilst the peripheral parts of it look like testing things, the vast majority of them are really engineering things. They are: how do we design our software differently? How do we communicate and work collaboratively with each other? How do we go from predominantly individual work to working in pairs to working in ensembles? How do we go from a discovery, technical planning, and build process which, before you even write a line of code, takes you three weeks, and condense that into three days?
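
As a toy illustration of the tech-tree idea, here is a tiny sketch where foundational practices unlock later branches. The node names are taken from the practices mentioned above, but the dependency structure itself is invented for the example.

```python
# Toy "tech tree" roadmap: each practice lists its prerequisites, and a
# practice becomes unlockable once everything it depends on has been adopted.

tech_tree = {
    "good notes": [],
    "ensemble exploratory sessions": ["good notes"],
    "observability": [],
    "testing in production": ["observability"],
    "designing for testability": ["ensemble exploratory sessions"],
}

def unlockable(done):
    """Return the practices whose prerequisites have all been adopted."""
    return [
        node for node, prereqs in tech_tree.items()
        if node not in done and all(p in done for p in prereqs)
    ]

print(unlockable(done={"good notes", "observability"}))
```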

(34:43):
So a lot of it is quality engineering in that here's a whole host of what are generally considered to be good engineering practices, and I'm just trying my best to layer them on one after the other in a logical way, so that we go on a journey together that kind of makes sense. And we've done a reasonable job. Sometimes we've gone a bit backwards, sometimes we've accelerated a bit faster than I thought we would. But the long and short of it is that my role is about taking our engineering department on a journey to get us to a place where we're effectively continuously learning. If we wanna be really, really successful and make the most of the opportunities that present themselves to us, then we've got to be in a place where we can very, very quickly learn, make mistakes, learn from them, and move on, and do that with as much perspective as possible on the landscape in which the business and the company runs.

(35:39):
So that we are always business-led, rather than what I guess a traditional engineering team tends to be, which is more technology-led. It's about how we keep sight of what the business objectives and the business outcomes are — people use things like OKRs and those kinds of things, but it's kind of the next step after that. Okay, so you've got an OKR, great — what do we do to implement that and make it successful? So that's where I sit, very engineering-focused at the moment, but really my role covers probably two thirds of the business, through design, product, through sales, our customer experience team. Eventually it's going to permeate bottom-up throughout the business.

Lina Zubyte (36:22):
Wow, Stuart, that's a lot. It does sound like the Head of Quality Engineering role is really interesting, and I really love that you're actually using gamification there as well. So game testing, in a sense, is following you, and you're still using the skills you got there and the knowledge of how to make things interesting to people. So you gamify, and this helps in driving change. It looks like our time is almost over, so I feel like this could be a great place to come full circle and ask you: what is one piece of advice you would give for building high-quality products and teams?

Stuart Crocker (37:10):
Great question. One piece of advice is: always focus on the business and on what makes, and will help, the business be successful. I know normally they say focus on the customer, but any good business will know that they need the customer to have a great experience and to be successful in order for the business to be successful. But there are always extra bits on top of that. So focus on the business, on what the outcomes of the business are, recognize where the customer fits into that, and do your best to focus on making the business successful.

Lina Zubyte (37:48):
Wonderful. Thank you so much, Stuart, for your time. I loved learning more about game testing and your role, and I feel like we may have quite a few topics for our future podcasts. Thank you.

Stuart Crocker (38:02):
Thank you for having me. It's been a good one.

Lina Zubyte (38:05):
Thank you so much for listening. Stuart's details are gonna be in the show notes. As usual, please leave your feedback, subscribe, contact me. I cannot wait to hear your thoughts and opinions about this podcast. And until the next time, keep on building and caring about those high-quality products and teams. See ya.