Quality Bits

Quality, Cynefin and bugs with Steve Upton

Lina Zubyte Season 1 Episode 1

In this initial episode of Quality Bits, Lina talks to Steve Upton about the definition of quality, Cynefin, deleting the bugs (?!!!), and so much more.

Find Steve on:
- Twitter: https://twitter.com/steve_upton?lang=en
- LinkedIn: https://de.linkedin.com/in/stevejupton

Book recommendations:

Links to books are to Amazon, and as an Amazon Associate I earn from qualifying purchases.

Further reading recommendations for some mentioned concepts:

Follow Quality Bits host Lina Zubyte on:
- Twitter: https://twitter.com/buggylina
- LinkedIn: https://de.linkedin.com/in/linazubyte
- Website: https://qualitybits.tech/

Thank you for listening! Follow Quality Bits on your favorite listening platform and Twitter: https://twitter.com/qualitybitstech to stay updated with future content.

Lina Zubyte (00:04):
Hi everyone. Welcome to Quality Bits, a podcast about building high quality products and teams. I'm your host, Lina Zubyte. In this first episode of Quality Bits, I'm having a conversation with my beloved best colleague, Steve Upton. He's one of the most inspiring people I've met in quality, full of great knowledge and wisdom. In this episode, we talk about what quality is and Cynefin, we touch on bugs and categorization, and you're also going to hear why "best practice" is a term that frustrates Steve. So let's go.

(01:04):
Welcome, Steve, to the first episode of Quality Bits. It's my pleasure to have you as my first guest. To those that have not had the privilege of working with you, could you please introduce yourself shortly?

Steve Upton (01:19):
Wow. Well, I'm Steve. Steve Upton. I'm a QA, technically. I was born in the UK. I live in Berlin currently. I work for ThoughtWorks. I get involved in all sorts of quality related topics at work. Outside of work, I like techno and mountains and travel and reading.

Lina Zubyte (01:37):
Nice. What are you reading currently?

Steve Upton (01:40):
What am I reading? I actually just finished Gary Klein's "Sources of Power", which is a really, really excellent book on decision making and how expertise develops, how people make decisions, how we can inform and kind of build expertise, which I thought was really, really fascinating. So highly recommend that one. I'm also working my way through "Dynamics in Action" by Alicia Juarrero, which is a great look at complex systems and some of the philosophy behind that, actually, which is also great. Much more academic, but I think, yeah, well worth the read.

Lina Zubyte (02:13):
Awesome. Talking about sources of power, we both come from the field of quality, so we do work with a powerful topic. When I started writing the Quality Foundations, I had this goal of introducing what quality is, or the practices that we tend to do, in simple words, because we tend to overcomplicate a lot of things in tech in general. And the first article I chose to write was: what is quality? We talk a lot about this, often too much. A lot of people have strong opinions on what it is, or that we shouldn't discuss it. Others are confused about the concept. In that article, I also quoted Jerry Weinberg, who said that quality is what matters to one person. What do you think about that? How would you describe quality?

Steve Upton (03:08):
I really like that Jerry Weinberg quote, actually. If you forced me to come up with a one sentence definition of quality, I probably would go with that. I like it because it's a principle, it's something that we can apply in different contexts. There are lots more questions we need to ask: who is our user? How do we balance the needs of potentially multiple users? What if they're conflicting? There's lots of questions we need to ask here. This is a principle that needs expertise. For example, what if our users don't know what they need? The faster horses problem. Work is needed here, but as a principle, it just tells us to put the user first, make sure we're actually meeting the needs of our users. I would go further. If you gave me kind of space and time, I would definitely talk about other things that I think are relevant to quality, but I think starting with meeting some person's needs is a pretty good first step.

Lina Zubyte (04:04):
How would you decide whose needs you have to meet?

Steve Upton (04:08):
That is a great, great question, and it's not typically something I spend a huge amount of time working with. To be very blunt here, this is largely a product question. So we have the entire world in front of us. We could try to meet everyone's needs, and we're probably gonna fail. We probably need to target a particular user segment or a particular persona. We often kind of use this shorthand of personas for different users, but that could be driven by market research. Maybe we understand that for a particular user segment there's lots more people there, or they have lots more money, or they're more willing to actually use our service or our app or whatever it is we're building. So yeah, that's a challenging question. I find it a little bit more fun to be given a persona and told, okay, this is the person whose needs you should meet, and then explore and see the opportunities for, okay, who else can we help around here?

Lina Zubyte (05:07):
Yeah. When there's a conversation about whom we are building this product for, very often subjective opinions or certain biases come in. We all come with an opinion on what is important to us. So how do you cope with that? How do you avoid dealing with biases, or minimize the effect of it?

Steve Upton (05:30):
So there's lots of biases. I know you've given talks on, let's say, biases. When we're in this sort of space, when we're trying to understand what our users need, how do we meet the needs of our users, it's really easy to let our preferences, our desires, kind of override things. I was doing some user research recently. I was talking to people and saying, okay, how would you like to access this? What sort of formats would you like to find this data in? And they gave an interesting answer, one that I wouldn't have expected. Given my background, the kind of tools and platforms that I like to use, I wouldn't have answered that. I would've given a completely different answer. And it's easy to think, oh, well, they're wrong, I know better, and try to impose my will essentially on the user. So biases like that are pretty important to keep in mind. One really simple thing that you can do to address this is to establish feedback loops, to put these results in front of the users as soon as possible. I built something, I assume it's what my user wants, I should validate that assumption as soon as possible.

Lina Zubyte (06:50):
Yeah. I also love data for this. I remember actually working for one of our automotive clients, and we really wanted to drop IE 11, that was back in the day, and we made this big argument that we would do that. But then when we looked at the data, it was actually 30% of users still using it, and we were shocked. We did not think that anyone would still use it, and they were using it. So it's not always that the customer or user is someone like we are; it could be someone who's totally different than we are.

Steve Upton (07:29):
Just simply remembering that there are other people in the world and they have different perspectives is generally a helpful thing. And actually really actively seeking out those different perspectives. As experts, it's easy to fall into what we could call inattentional blindness. We focus on the solutions that we know, the kind of good practices, the patterns that work for us, and we forget that there might be other patterns out there. I've solved this problem 10 times in the past, I know how to do this. It's easy to forget that there are other people who might have very different perspectives and there might be huge opportunities out there. Really seeking out those different perspectives is incredibly important.

Lina Zubyte (08:12):
So what is your opinion about the term best practice? Is there a best practice for things, especially in tech?

Steve Upton (08:21):
So I'm not a huge fan of the phrase best practice. I mean, think about those words. Best practice. Best practice implies that this is the best way of doing something. That if you follow these steps you will succeed. There is no better way of doing this. If you tried something else, it would be a non-optimal solution. In how many situations is that really true? In how many situations can you really say there is one correct answer and everything else is kind of the wrong answer? I talk a lot about Cynefin, and I've used this kind of framework a lot, thinking about what is the context we're working in, what sort of domain are we working in. There are places where best practice works, there are places where best practice makes sense: clear, ordered domains where we understand all of the rules, where everything is predictable and deterministic. But are we always working in those domains? If we're working on a manufacturing line, I should probably put the liquid in the bottle.

(09:24):
I should probably put something in an oven for a certain length of time. That's best practice. But does that really describe the work we do as software engineers? I think that a lot of the work that we do would fall much more into this complicated domain. There's flexibility here. If I take apart a car... now, I'm not an expert at taking apart cars, but if an expert took apart a car, I'm pretty sure they could reliably put it back together again, because that's an ordered domain. That is, that's a predictable, deterministic domain, but different engineers might put it together in different ways. They might make different choices, and they're all valid, good choices. So here is where we're thinking more about good practices and heuristics and patterns. When we start thinking about more complex, dynamic systems, then we really don't have the answers and we need to take a more experimental approach there. Yeah, I think it sort of frustrates me a lot hearing this term best practice, because it's a compelling idea. I have the best practice, follow this guidance and you will succeed: it's a compelling pitch. People want that. That sounds lovely, but is it really true? Do you really have the one true answer?

Lina Zubyte (10:34):
I think we want to have one true answer. It would be so much easier if we did have one silver bullet for every situation we're in.

Steve Upton (10:43):
We'd love that. Yeah, we'd love that. We'd love that.

Lina Zubyte (10:46):
I don't know if you would love that though. Maybe it would make life boring. It would, because you'd have one manual for everything.

Steve Upton (10:54):
But it's a compelling pitch, right? When we're talking about these complicated domains, we sometimes say, in kind of Cynefin terms, that this is the domain of experts. This is where experts do really, really well. If you're an expert, you have years and years of experience. That's something that could be sought after. Again, I don't own a car, but if I did own a car and I needed it to be repaired, I wouldn't do it myself, because I'm not an expert. I would pay an expert to do that. It's very lucrative to be an expert. It's very lucrative to sell a best practice, to tell people that we have the correct answer here. Yeah, it's the realm of experts and also the realm of snake oil. And so I think we have to be careful there.

Lina Zubyte (11:45):
So what makes it actually complicated or complex here? Is it people that introduce the complexity into the situations we have?

Steve Upton (11:55):
So, short answer, yes. I mean, that's an element of this. So, distinguishing here between complicated and complex: the example I really like, and this is from Dave Snowden, from I think his original Harvard Business Review article on Cynefin, is that complicated is taking apart a car and complex is a rainforest. Now, if I'm an expert, I can take apart a car. If I pay an expert to take apart a car, I'm sure they could reliably do it, they could put it back together and it would still work. And different experts might do slightly different things, but they could reliably do that. I could trust them to do that. Can I say the same thing about a rainforest?

(12:43):
Is there any expert in the world who could tell me, if I make this change in the rainforest, here is exactly what's going to happen? And they can say that with extreme certainty? I can say, if I take a tree out of a rainforest, something's going to happen, but I can't tell you what, and I don't think the best rainforest expert in the world could tell you exactly what's gonna happen. And if you did that same action again in a week, you might get a different outcome. You might get a different impact from that input. And the reason here is you have lots of interacting agents, you have poorly defined system boundaries. How do you define that rainforest? Do you define it as just everything under the tree cover, or do you include the climate around it? Do you include the water table below it? Do you include the species that live in it and that migrate through it? Do you include the solar cycles and the lunar cycles? It's very difficult to draw the boundaries here. These are complex, dynamic systems with interacting agents, non-linear interactions where a small input can lead to a very large output. It makes them unpredictable. It makes them non-deterministic and unrepeatable. It's not just people, but yeah, people certainly help make things more complex.

Lina Zubyte (14:02):
We all do. So, talking about this, it seems like it's a lot about the context we are in. Is that correct?

Steve Upton (14:10):
Yeah, context is key. One of the key insights from Cynefin is this idea of bounded applicability, right? When we talk about a technique or an approach, it might be useful, but only useful within a context, only useful within a particular context. There are no context-free solutions. There's a quote I like from the French philosopher Jacques Derrida, talking about scientific objectivity, saying that even these things that we think of as scientific truth, scientific objectivity, are still only true within a context. And that context can be broad. It's vast. It's something that has been fought for by generations of scientists. It's something that's been built up and established, and it's something that both Derrida and I believe in, but it's still just a context, and there are contexts where that doesn't apply. So these best practices: what we're saying when we say something is a best practice is that this is always going to work, it's always going to be the best answer, regardless of context. Whereas thinking about these complex systems, particularly in terms of Cynefin, we understand that it's not that a technique is good or bad; it might have utility just in a particular context.

Lina Zubyte (15:42):
It's really interesting, especially knowing that you're a consultant. So you join a team, and let's say it's an established system, so it's a context that's known to the people there. What would be the topics that you would be interested in to understand the context better when you join the team, from a quality perspective?

Steve Upton (16:06):
Well, usually one of the first questions I ask would probably be around the four key metrics, trying to understand how frequently does this team deploy to production, how long does that take, what does their release process look like, how many of those changes cause failures, and how quickly can they fix them? I like to ask that question because it's a proxy for a lot of other things, a lot of what I would think of as good practices, not best practices, but good practices. If you want to deploy frequently, then you pretty much need to have a strong automated deployment pipeline. If you want to have a fast lead time, you need to, again, focus on strong automated testing. You need to be building tests while you are writing the code, otherwise that's not gonna work. Otherwise you're gonna have a very high change failure rate.

(16:59):
So I like that as kind of a starting point, but I also try and understand the history of the team. What's their context? As you say, complex systems have path dependency. The actions that are possible to us, the actions that are available to us, are dependent on the path we took to get there, dependent on how we got to that context, because systems have memory, teams have memory. If I join a team, obviously I'm a huge automated testing nerd, love it: automated testing, build pipelines, great, big fan. But if I join a team and they've just had a big initiative where their management forced them to write automated tests and it caused them loads of pain and they struggled with it, well, now might not be the best time to introduce another automated testing initiative. That might be something that causes them a lot of stress. It's not that it's a bad practice, but in this context, right now, it might not be the right approach to take. We might wanna look at other things. Where are the opportunities for change? Where is the desire and the potential kind of beneficial attractors in this system?
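To make the four key metrics Steve mentions a bit more concrete, here is a minimal sketch of how a team might compute them from its own deployment history. It is an illustration only, not anything from the episode: the Deployment record, its field names, and the 30-day window are assumptions, and real numbers would come from your pipeline and incident tooling (Python 3.10+).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Deployment:
    committed_at: datetime               # when the change was committed
    deployed_at: datetime                # when it reached production
    caused_failure: bool = False         # did this change cause a failure in production?
    restored_at: datetime | None = None  # when service was restored, if it failed

def four_key_metrics(deploys: list[Deployment], window_days: int = 30) -> dict:
    """Deployment frequency, lead time, change failure rate, and time to restore."""
    lead_times = [d.deployed_at - d.committed_at for d in deploys]
    failures = [d for d in deploys if d.caused_failure]
    restore_times = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    return {
        "deploys_per_day": len(deploys) / window_days,
        "median_lead_time": median(lead_times) if lead_times else None,
        "change_failure_rate": len(failures) / len(deploys) if deploys else 0.0,
        "median_time_to_restore": median(restore_times) if restore_times else None,
    }

if __name__ == "__main__":
    now = datetime(2022, 9, 1, 12, 0)
    history = [
        Deployment(now - timedelta(hours=3), now),
        Deployment(now - timedelta(hours=30), now + timedelta(days=1),
                   caused_failure=True, restored_at=now + timedelta(days=1, hours=2)),
    ]
    print(four_key_metrics(history))
```

The value of the exercise is the proxy effect Steve describes: you cannot move these numbers without the underlying practices, such as an automated pipeline and tests written alongside the code.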

Lina Zubyte (18:04):
I like this point. I think people have to be ready for change, and we can overwhelm them when we come with this bundle of best practices, or even good practices. If we bring them in and we say, oh, everything here is wrong, you may really give up. You may say, hey, come on, you don't understand all the work we've put in here. And starting with little steps, I guess, is the best.

Steve Upton (18:33):
And not just little steps.

Lina Zubyte (18:36):
Yeah, they're big steps for them.

Steve Upton (18:38):
I mean, maybe. Sometimes small steps, fast feedback loops: you try something, it's an experiment, let's see if it works. All of those things are really, really, really important. But we also need to think about where are the opportunities for change here, and how can we make it easier for these teams to change. When we come in with a big list of best practices, it's a somewhat colonial attitude. We've entered this new place and we're gonna tell them how to live. Hello everyone, you've screwed up, just follow our guidance. And that does, I think, a couple of really negative things. It sets up dependency in, I think, a very unhealthy way. It forces people to: you've gotta follow our guidance, otherwise you're gonna screw up. But it also takes agency away from the team. You're following our best practices, not adapting them to you. And this is why I like this idea of patterns: thinking about things not just as a good practice, but as a pattern, something that is repeatable but needs to be adapted. It can change. Let people co-create this. Obviously I'm speaking as a consultant who tries his hardest not to do all of this colonial nonsense. Let the team, let the people take ownership. Let them co-create these changes, because otherwise it's just an external agent imposing changes on the system. It might work, but I think it's not a particularly sustainable way of approaching change.

Lina Zubyte (20:21):
I love this idea, because, as you said, it would take away the agency from the team, as well as even the accountability for whatever they're building. They may not feel accountable, because the ideas just came from someone else. They may not feel motivated to stay in that team and work there, because they're not respected for the authentic powers and strengths that they bring. I actually finally remembered now that there have been teams I worked in where we would come up with our own processes, with our own ways of working, which in the end we learned how to name. We came to this conclusion ourselves, just working together and trying different things and experimenting, as you said. And then we established certain patterns, which apparently were known. Sometimes they were a mix of different patterns as well.

Steve Upton (21:15):
And that's great. This is the idea of convergent evolution. Different agents, different teams, different organisms evolve the same, or not the same, but very similar looking outcomes, because they have some of the same pressures on them, but they develop them in the way that makes sense for them. The classic example here is the panda's thumb. Pandas have thumbs, they have somewhat opposable thumbs, but, and there's a huge asterisk there, I won't get into panda biology too much. They look very different to our thumbs. If you take an X-ray of a panda's thumb, it looks completely different, but it serves the same purpose, because it turns out thumbs are useful, and similar emergent properties can evolve in entirely different contexts if the same pressures are working on them. And the key thing is that the panda's thumb is the panda's, right? It evolved it; it didn't have a thumb imposed on it by humans. I realize this metaphor is being a bit forced here. I just like talking about the panda's thumb.

Lina Zubyte (22:25):
Pandas are great. I think they can always be included. So, talking about experiments and figuring out which patterns work and which don't for the team, what experiments have you run that improved quality in the teams you worked at?

Steve Upton (22:43):
Lots and lots. One that I really enjoyed: a couple of years ago, I was working in a team, this was for a ThoughtWorks client, and we were, I would say, a pretty high performing team. We were doing continuous delivery. So every change we made, every bit of code that we committed, had tests. It was fully automated. We did the shift quality left, the build quality in. We focused on automation and repeatability and all of that good stuff. But we only pushed to production at the end of the sprint. So we had, I think it was, two week long sprints, and we only pushed to production after signing it off with the PO. Even though we could push to production more frequently, we chose to only do it every two weeks. We took on the experiment of going to full continuous deployment, which is where every single commit goes to production.

(23:42):
And the hypothesis here was that by forcing that, making every change go to production, it would force us to take on practices and approaches that would positively impact our quality. It forced us to think more carefully about the tests. It forced us to think about how we hide incomplete work. Things like feature toggles, which we implemented as part of this work. This was something I think that we found really, really positive. It put constraints on the team, it forced us to do certain things, but it forced us to do things that were generally pretty positive. And I think, yeah, it made our work in some ways a little bit harder, but it ultimately improved the sense of ownership of quality in the team, because quality was never something that we could just do later. It was something that we had to think about at every step of the journey. It's one thing to say that; it's another thing to really make that true.
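Steve mentions feature toggles as the way to hide incomplete work while every commit still goes to production. The snippet below is a minimal sketch of that idea, assuming a simple environment-variable convention; the flag name and the checkout functions are purely illustrative, not from any real codebase discussed in the episode.

```python
import os

def is_enabled(flag: str) -> bool:
    # Illustrative convention: FEATURE_NEW_CHECKOUT=true turns the flag on.
    # Default is off, so unfinished work stays dark in production.
    return os.getenv(f"FEATURE_{flag.upper()}", "false").lower() == "true"

def legacy_checkout(cart: list[float]) -> float:
    # Current behaviour seen by all users today.
    return round(sum(cart), 2)

def new_checkout(cart: list[float]) -> float:
    # Work in progress: deployed with every commit, but hidden behind the toggle.
    return round(sum(cart) * 0.9, 2)  # e.g. an experimental discount

def checkout(cart: list[float]) -> float:
    # Every commit ships; the toggle, not the release date, decides what users see.
    return new_checkout(cart) if is_enabled("new_checkout") else legacy_checkout(cart)

if __name__ == "__main__":
    print(checkout([10.0, 5.50]))  # 15.5 unless FEATURE_NEW_CHECKOUT=true
```

The mechanism matters less than the constraint: once every commit goes to production, hiding unfinished work and testing it properly stops being optional.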

Lina Zubyte (24:38):
Absolutely. It's a long journey for many teams, and figuring out what experiments to run so that it wouldn't be overwhelming for the team is quite a skill.

Steve Upton (24:50):
And this idea of experiments, I think, is really important, for multiple reasons. Framing it as an experiment gives a little bit of safety. We say this is an experiment: we're embracing our own uncertainty, we're embracing the fact that we don't have all the answers. So we're gonna try something and then we're gonna see if it works. It forces us to put feedback loops in place to understand the actual impact of our actions. And it also gives us a bit of safety, because sometimes, again, with this path dependency, it's better the devil we know. We like the safety of our current way of doing things, of our status quo. And there's always an escape hatch here. If we tried this experiment and it really doesn't work, it's an experiment and we can go back. We're just probing the system and seeing what happens. And that approach, I think, is much more congruent with a complex, dynamic system, something like building a team, working in a team, than just saying, here's the best practice, follow these steps and you will succeed.

Lina Zubyte (25:56):
And talking about teams: some of them may not want to experiment, they may feel they need to deliver features. What would you do in this situation? Have you ever worked in a team that was actually hesitant to run an experiment? And how did you get them to try it?

Steve Upton (26:16):
I have worked in teams with varying levels of discomfort with making changes. I think it's important to understand why. As we said, this pattern of consultants coming in, telling you what the best practices are and forcing you to follow them, this sort of colonial approach, is something that a lot of people find very stressful, for fairly obvious reasons, and it's why I really, really try to avoid doing this. Is that their experience? Is that what's happened to them in the past? If it is, then maybe we need to think about how we give them a little bit more ownership, about how we can help them take ownership of these experiments. Or maybe it's because they tried something in the past and it failed badly. Maybe they tried to implement automated testing and it wasted loads and loads of time. They ended up in technical rabbit holes and nothing really got done.

(27:08):
It just made their lives harder. Well, if that's the case, let's think about how we make these experiments safe to fail, because some of our experiments will fail. That's sort of the point of experiments. If our experiments aren't failing, we probably aren't pushing the envelope far enough. So how do we make them safe to fail? We can scope them in time, we can limit what we are doing. We can make sure we have fast feedback systems in place so we can respond if anything goes wrong. We can come up with amplification and dampening strategies. If things are going well, how do we make sure they keep going well? If things are going badly, how do we limit the damage there? And what we're focusing on here is the context. One of the challenges, and again, Dave Snowden talks about this a lot, one of the kind of pathologies in transformational approaches, if you're doing an organizational transformation or whatever, is that people get the blame, right?

(28:06):
We're gonna have a big kickoff meeting. We're gonna tell you about how this amazing transformation process is going to change your life. And then if it fails, guess who gets the blame? It's the people. Well, they didn't follow our instructions correctly. So when we're thinking about experiments, let's think instead about how we structure the environment around the experiments, the context, to make them safe to fail, to give us that feedback. Focus on the context, not the people. It's a way of stepping away from this implicit focus on the people, and on blaming the people.

Lina Zubyte (28:41):
I think it's not only about making the experiment safe to fail, but also about the mindset of people: that they have psychological safety within the team, that they have a good working relationship, that they're not afraid of failure because they would be blamed for it or something. So there are a lot of moving parts here, a lot of context, and I love that you said to actually get to know the history and why certain people have certain opinions about it, because likely they've had certain experiences, and we have to trust them and what they've learned from it and hear them out.

Steve Upton (29:17):
As I keep saying, those perspectives are absolutely critical. Understanding where people are coming from. Path dependency is a property of complex systems. Systems and teams have memory. If we ignore that memory, we are gonna run into a wall pretty quickly.

Lina Zubyte (29:30):
Sometimes we also have a certain experience, and for us it was a good practice, and we bring it to the teams. For example, we've both heard and seen conference talks about huge frameworks for the categorization of bugs. What is your opinion on that topic?

Steve Upton (29:51):
Strongly negative. I'm really not a fan. Exactly as you said, I saw a conference talk a couple of years ago about a categorization process for bugs and how a particular company categorized bugs into eight different buckets, and how they visualized these on a dashboard, and how they used all of these in a prioritization framework, and how they flowed this work into their teams. And I had to question how many human lifetimes have been wasted just prioritizing bugs, coming up with more and more detailed prioritization. Something I've experimented with in the past is having exactly two priorities for bugs, which is: fix now, this is something we need to fix right now, or we don't. If something does not need to be fixed right now, why is that? And this is a true story from a previous company that I worked with. I once saw a bug that said that a particular button was about three pixels to the left of where it should be, and it was obviously a very low priority bug. Now I've gotta ask, who cares? And then we get Goodhart's law: as soon as a measure becomes a target, it ceases to be a good measure.

Lina Zubyte (31:06):
Or... as we spoke about, right? It's quality to a person that cares. So if it's a product that really cares about design, maybe three pixels is a deal breaker, but in most cases it's not.

Steve Upton (31:18):
Exactly. Exactly. It could be. So I don't actually recommend you just delete all of your bugs. I recommend you go through your bugs. You figure out the ones that need to be fixed right now, and you fix them. Okay? If it needs to be fixed right now, okay, you can probably continue listening to the podcast, but start fixing that bug right now. Anything else, anything that does not need to be fixed right now, rewrite it as a story, or rewrite it as a feature request. So, okay, this feature request is to move this button three pixels to the right. As a user, I want this button moved three pixels to the right because... why? I mean, what am I gonna get out of that? If there is a reason, if there is a genuine user need behind this bug, if fixing this bug is going to deliver value for the user, write it as a story.

(32:06):
It's something that's gonna deliver value for the user, so prioritize it along with the rest of the work that you deliver to the user. One pattern that I've seen when we focus on bugs and bug counts and bug categorization is that we have essentially two backlogs, right? We have two worlds. We have the world of features and we have the world of bugs, and they end up getting traded off against each other. I've seen teams, and I've seen QAs, negotiate for more time on bugs. They say, please, can we just fix two bugs per sprint, or maybe three bugs per sprint? And they're kind of negotiating with a PO for this. And I think that's a really unhealthy way of looking at things, right? Quality becomes something you trade off against, rather than something that you build in and focus on. It becomes this other metric that someone else is responsible for and is tracked in a different way, rather than focusing on delivering value for our users. So yeah, delete your... well, don't delete all your bugs right away, but ask yourself: if this was a story, would we ever play it?

Lina Zubyte (33:11):
So, Steve, I think this is a very good point where we could come full circle: we started with a question on what quality is. As this is a podcast about building high quality teams and products, what is one piece of advice you would give for building high quality products and teams?

Steve Upton (33:34):
Well, I probably would go back to that Jerry Weinberg quote again: put the user first. There's a lot of questions you need to ask following that, and you need some expertise to apply this, but I think it's a pretty good principle to follow. That might be cheating, to give an answer that we've already talked about. If I could cheat in a different way, maybe I'd give some other answers. Focus on fast feedback: build feedback loops so you can get feedback on what you're doing, whether or not you're moving in the right direction. And focus on speed and adaptability. It's one thing to deliver valuable software for your user, but also make sure you can change going forward. Make sure you can be responsive to your changing user needs. Maybe I deliver the perfect software for you today, but if it takes me six months to update it, pretty soon your needs are gonna change, your requirements are going to evolve, and I'm not gonna be able to co-evolve with you. So yeah, focus on the user, focus on fast feedback, and focus on speed and adaptability. Three answers there. I cheated.

Lina Zubyte (34:42):
That's all right. That's one answer with three points.

Steve Upton (34:46):
That counts.

Lina Zubyte (34:47):
Good points. So thank you so much, Steve. It was a pleasure talking to you, and thank you for being the first guest here.

Steve Upton (34:54):
No worries, no worries. Thank you for having me.

Lina Zubyte (34:57):
So that's a wrap. Thank you for listening to the first episode of Quality Bits. If you enjoyed this episode, let me know. I'd love your feedback, either in podcast ratings or just send me a message on Twitter at buggylina. We're going to have more conversations in the future. I'm going to add all the resources that we mentioned in the notes of the podcast, so check it out, and keep on working and showing interest in building those high quality products and teams. See you next time.

