Quality Bits

Beyond Testers: Rethinking Quality in Software Development with Elisabeth Hendrickson

January 09, 2024 Lina Zubyte Season 2 Episode 10

Does a separate QA department help or hinder a company's road to quality? What about people having "quality" in their titles: does it help drive change in the company, or the opposite? Can there be a high-quality product without dedicated testers?

In this episode of Quality Bits, Lina talks to Elisabeth Hendrickson, the author of "Explore It!". They debunk many myths about what high-quality product development and testing actually look like. Tune in to hear why testers may be on the verge of a role revolution, the power of a whole-team testing approach, and so much more.

Find Elisabeth on:

Mentions and resources:

If you liked this episode, you may also enjoy this past episode of Quality Bits:
Removing QA Column, Driving Change and Nurturing Fun in Teams with Finn Lorbeer
https://www.buzzsprout.com/2037134/11462360 

Follow Quality Bits host Lina Zubyte on:


Follow Quality Bits on your favorite listening platform and Twitter:
https://twitter.com/qualitybitstech to stay updated with future content.

If you like this podcast and would like to support its making, feel free to buy me a coffee:
https://www.buymeacoffee.com/linazubyte

Thank you for listening! ✨


Lina Zubyte (00:06):
Hi everyone. Welcome to Quality Bits, a podcast about building high-quality products and teams. I'm your host, Lina Zubyte. Elisabeth Hendrickson has been on my wannabe guest list for a while. Elisabeth is the author of "Explore It!", a book on exploratory testing. She has been in the software industry for quite a bit, so she has lots of experience and knowledge. Some of the things we talk about are how quality in your title can be more harmful than helpful, and how testers may need to think about different roles, because more testers does not exactly mean better quality. Enjoy this conversation.

(01:04):
Hello Elisabeth. I'm so happy to have you here as my guest.

Elisabeth Hendrickson (01:11):
Oh, hello. Thank you so much for having me. I really appreciate it.

Lina Zubyte (01:15):
It's so nice to talk to you. You're a very special guest to me, because I started testing in a sort of traditional way. We had Excel sheets. We would say: verify this, click on that, then see if this happens, and mark a big pass or fail. Then I read your book "Explore It!" about exploratory testing, and it was so playful, so light. It wasn't a huge framework. It was more like: you can also have structured exploring, which is another topic, because a lot of people think it's just ad hoc and you do whatever. I really liked the style of it, and I remember interviewing later on for further roles, and when someone asked me about testing books, I named your book. So I'm so glad to get to talk to you.

Elisabeth Hendrickson (02:04):
Thank you so much. That's so sweet. I'm so glad that you found it helpful.

Lina Zubyte (02:08):
I did, and I keep recommending it as well. So if anyone listening wants to learn a little bit more about exploratory testing, check out "Explore It!". I think it's a bible of testing. It should be read for sure.

Elisabeth Hendrickson (02:24):
Oh, you are so sweet.

Lina Zubyte (02:25):
What have you been up to recently?

Elisabeth Hendrickson (02:28):
Oh, so these days I have my own tiny, tiny company called Curious Duck Digital Laboratory, and what I'm actually doing that earns money and keeps my dogs in kibble is consulting. But what I'm trying to do that is honestly a little bit more of a hobby: I'm building a simulation of work flowing through a software organization. Say you have a release with a whole bunch of interdependent teams that are developing pieces of features. How do you think about the work flowing through the system, how long it takes for things to get all the way through, and the feedback cycles? We all kind of intuitively have a sense of what's true, and sometimes our intuition is wrong, which becomes really interesting. But how can we actually have a simulation that flows work through a system, with all of the feedback cycles and interdependencies, and see what happens? That's what I'm building a simulation for, and that's what I'm having a lot of fun doing. I don't know that it will ever turn into something I directly make money from, but it's certainly fun.
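
To make the idea concrete, here is a minimal toy sketch of such a simulation in Python. It is purely illustrative and not Elisabeth's actual model: the team names, capacities, and the 20% rework rate are invented assumptions.

    # A toy simulation of work flowing through interdependent teams.
    # All names and numbers below are invented for illustration.
    import random

    random.seed(1)

    TEAMS = ["backend", "frontend", "integration"]               # work flows in this order
    CAPACITY = {"backend": 3, "frontend": 2, "integration": 1}   # items finished per tick
    REWORK_RATE = 0.2   # chance a downstream team bounces an item back (a feedback cycle)

    def simulate(n_items=20):
        queues = {team: [] for team in TEAMS}
        queues["backend"] = list(range(n_items))   # all items start at tick 0
        finish_ticks, tick = [], 0
        while len(finish_ticks) < n_items:
            tick += 1
            for i, team in enumerate(TEAMS):
                batch = [queues[team].pop(0)
                         for _ in range(min(CAPACITY[team], len(queues[team])))]
                for item in batch:
                    if i > 0 and random.random() < REWORK_RATE:
                        queues[TEAMS[i - 1]].append(item)    # bounced back upstream
                    elif i + 1 < len(TEAMS):
                        queues[TEAMS[i + 1]].append(item)    # handed off downstream
                    else:
                        finish_ticks.append(tick)            # made it all the way through
        print(f"all {n_items} items done after {tick} ticks; "
              f"mean lead time {sum(finish_ticks) / n_items:.1f} ticks")

    simulate()

Even a sketch this small shows the point she makes about intuition: raising the rework rate a little stretches lead times a lot, because every bounce re-enters the upstream queues.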

Lina Zubyte (03:40):
That's lovely. So, I thought to talk to you because I saw a post of yours on LinkedIn that really resonated with me: you said that you moved away from quality in your title or as a specialty. Could you tell us a little more about why you decided to write this post, and, for those who haven't read it, what did you say there?

Elisabeth Hendrickson (04:02):
Sure. As part of the consulting work I've been doing, I've been acting as an interim VP of Engineering; that's one of the services that I offer. And I realized that one of the things that might be interesting for organizations that think they still want a separate quality organization is to step into a role of an interim Head of Quality, and then help that organization either address the forces within it that make them think they want a separate quality organization, or set them up for success with one. Because so often in my career, what I've seen is that a completely independent quality organization might look great on paper, but in practice ends up creating an enormous amount of, I'm going to call it churn within the organization, but what I really mean is inefficiencies, long feedback cycles, feedback that doesn't necessarily help us deliver a better product, arguments about bugs.

(05:09):
"That's a bug." "No, it's not." "Yes, it is." The quality organization raising really important information and then being ignored because of the schedule, and then being blamed on the other side: "Why didn't you find it?" "We did find it. We put it in the bug tracking system and you all deprioritized it, so don't come back and blame us." Right? All of these arguments don't help produce a better product, and they end up taking a huge amount of time. So my thought was: what if I could help organizations figure out how to do quality better, and maybe the answer isn't to have a separate Head of Quality?

Lina Zubyte (05:44):
I have been the Head of Quality.

Elisabeth Hendrickson (05:47):
I see that. Yes.

Lina Zubyte (05:49):
I was very happy with that title when I got it, because I was like: wow, Head of Quality, I really like that, it's fancy. But then I also applied for engineering manager jobs, just to get immediately rejected because I never had that title, so they're like: what are you applying for? It is quite challenging sometimes, having this "quality" in the title, and I feel that it can even limit us in that sense. Very often heads of quality have an understanding of processes, of the inner workings of the team, and of engineering in general. They could be good engineering managers, but sometimes having quality in your title limits you a little. What has your experience been with that? Could you share some examples where having quality in your title prevented you from driving change?

Elisabeth Hendrickson (06:48):
Oh, absolutely. I'm thinking about a dysfunctional organization I worked for a very, very long time ago, where I had the title of Director of Quality Engineering, and yes, I also thought: ooh, fancy title, I'll be able to help and drive change. Anyway, it didn't work out that way at all, and being "quality" in that organization meant that I was kind of shunted to the side. There was an architect who decided architecty things; the architect was basically the CTO. And there was a VP of Engineering who decided process and engineering practice things, including how we did builds. Notice I'm not saying continuous integration; this was in the 1990s, and continuous integration wasn't a thing at the time. In fact, one of the changes that I was trying to get through was: hey, what if, instead of the developers all checking in their code to a central repository and then a build engineer spending a month, I'm not kidding, a month, getting a working build that didn't have compiler errors, right?

(07:53):
A working build didn't mean it functioned. It just meant that there weren't compiler errors. What if, instead of doing that, whenever a developer thinks they've got something they want to check in, they get the latest version of the code and compile locally on their machine? And the VP of Engineering informed me that is not possible, it cannot be done, what you are asking for makes no sense. So I do believe that my role... that was the company where I really internalized this; it was the company I was at when I wrote "Better Testing, Worse Quality", because I realized that I was not going to be able to address any of the underlying systemic issues that were leading to the outcomes. All I was going to be able to do was lead a team to find bugs, and then we were going to argue about them, and the end result was going to be actually worse than before I arrived, when the developers didn't think there was any backstop and so were testing a whole lot more.

Lina Zubyte (08:51):
I read this article that you wrote, "Better Testing, Worse Quality". The interesting part is that it was written in 2000, right? The year 2000. It's still so relevant today. I just quoted it last week in the project I'm working on right now, because there was the same conversation. There was this mindset of testing quality in: we need more testers, we need a dedicated tester to test the things that we're doing, because we don't want to test them. And when I read the article, I was like: oh, I should send it to them. So I sent it to them, and some people actually were like: oh yeah, I actually see this person and that person. They started seeing the characters of this article. I think it's such a good read, by the way. It's wonderful. And I still see that it's very common for developers to want someone to gatekeep. I mean, it's normal, right? It's sort of psychological that we would want someone to say: hey, we did a good job, or not. We fall into these roles. Sometimes we may say: okay, I'm a gatekeeper and someone else is delivering. Why do you think it is more harmful than helpful to have independent, siloed testing in companies, rather than having the whole team doing it?

Elisabeth Hendrickson (10:20):
Yeah, there are several reasons. The first reason is, even if the dynamic is healthy, even if the testers are providing information that the rest of the organization finds valuable and that is on target for making a better product, even if everything is super healthy, there's the delay introduced by having to finish the thing the developers are doing over here, get it packaged up, whatever that looks like, and then hand it off to somebody else who is going to go test it. That handoff introduces a feedback delay. And one of the things that we know about feedback cycles is that when they are long, it overall takes much longer to get things all the way done. And so that is going to slow down the organization. Now, that's the best possible case: the only downside is we go a little bit slower. But so few organizations live in that best possible case.

(11:21):
It is much more likely that, because there's a lack of integration between that independent set of test people and the developers, they have actually been set on slightly different trajectories. The developers have been told: this is what success looks like, these are the requirements, for lack of a better word, this is what you're building. And then the testers, every test they create is a new requirement, so they may be injecting work sideways into the system with their new requirements. And I didn't really understand this until Ron Jeffries at one point said something on some mailing list about how, oh, testers are making up requirements. And I was thinking about a project that I had done in the 1990s at a different company from the totally dysfunctional one. This one was dysfunctional in slightly different ways, but I was the tester, and I'm like: I was not making up requirements.

(12:18):
I was finding legitimate issues that our users were going to find. But I remembered a meeting that got so tense the developer actually blew up, like screamed in the meeting: nobody ever said that this thing had to <...>. The thing that I had been finding bugs about. And as far as I was concerned in that meeting, well, I don't care if nobody ever said it; we make a commercial product for consumers, consumers will do all kinds of wild things, so you can't stipulate that the software shouldn't be able to handle these kinds of error conditions. But then I thought about Ron Jeffries' comment, and I realized I was in fact making up requirements and injecting them sideways into the system. And because I had that ability to push, and because this particular organization was in chaos, everybody trying to tell everybody else what to do and nobody sure who got to make what decisions, there was not a strong product organization that was able to say: yes, we care about those risks.

(13:25):
"No, we don't care about those risks, stop bringing them up." There wasn't anything like that. And so it was kind of the loudest person wins, and this was the 1990s. I was fairly young, I was kind of loud, a little obnoxious, and I got my way a lot just by sheer force of will. And so I realized that I may have damaged the company by insisting that a certain class of risks was important, when maybe they were, maybe they weren't, I don't know. We delayed the release because of the issues that I was finding. Should we have? I legitimately look back at that time and I don't know. The only hint that I got was many, many years later, when I saw somebody post on some social media somewhere about how that release of that consumer product was the last good one there was. And so I actually reached out to that person and had an email conversation with them to say: wait, I worked on that product.

(14:18):
I'm confused, could you say more? Because at the time, I was so embarrassed by what we produced that I refused to wear the team T-shirt for years. And this person told me about how the quality was actually good in that release, and how it went so downhill after. And so I think I did good, but I'll never know if that company, which is now dead, by the way, would have survived if I hadn't been so insistent that we had to delay to fix the bugs I was finding. Anyway, this is a super long-winded answer to your simple question of what can go wrong. The very best case is we go slower. The worst case is now you have everybody fighting about what is and is not in scope, and arguments that take up time and don't add value, but do add stress to everybody's job.

Lina Zubyte (15:12):
I wonder, though: is it making up requirements, or is it uncovering requirements?

Elisabeth Hendrickson (15:19):
Totally fair question and that's why I disagreed with Ron originally, but then I realized I was kind of usurping the role of a product manager.

Lina Zubyte (15:30):
They did not specify it, so that's what you are thinking about. And this whole example makes me think of having these quality conversations before we even develop, having everyone's opinions there and saying: hey, but what if? That's what I think the superpower of testers, of anyone doing testing, is: asking the right questions and thinking of failure scenarios, success scenarios, and what we're trying to achieve. And if the team does not sit in one room, then everyone works on their own assumptions, and people start thinking about it only when it comes to them, when it reaches the testing status: I will take a look at it, and then I'll hit it with all my what-ifs, even though I have many I could have raised a long time ago.

Elisabeth Hendrickson (16:13):
Well, I think what you're highlighting is why I value testers and testing skill incredibly highly. There are people who think that I don't, because I don't really value a separate, independent QA organization. But what you just said, that testers are really good at asking questions, that is a superpower, and we need that in every job, not just the tester organization. Developers asking really good questions: some of the best developers I've ever worked with are the ones who have that tester mindset. Product managers, program managers, basically every role there is benefits from that testing mindset, the skepticism and the ability to home in on the most interesting what-if scenarios. We need that, and not just in a department that has been designated as the quality people who think about that stuff for us so we don't have to.

Lina Zubyte (17:09):
Yeah, and quality is such an umbrella term. It's everything.

Elisabeth Hendrickson (17:13):
Absolutely.

Lina Zubyte (17:14):
Clear requirements, security, performance. And then we have separate roles for other non-functional requirements, or cross-functional requirements, which sometimes puzzles me a little: if I have, let's say, a quality title, for me it covers everything.

Elisabeth Hendrickson (17:31):
Yeah, absolutely. And the worst part is, quality is everything, and yet the things that we believe we need to do to achieve a quality outcome may or may not be the same things that we need to do to achieve a commercially successful outcome. So if we're in a for-profit company, there is a high potential for a disconnect between the stuff people will pay us more money for and the stuff that somebody with a quality title is thinking about. And that's another feedback loop we have a disconnect with, because if there isn't a super tight collaboration, a feedback cycle, information flowing in the organization, the quality people may persist in raising issues that turn out not to be relevant for the commercial success of the product they're working on.

Lina Zubyte (18:23):
I think that's a very good point. And to your previous example: it's one thing to uncover requirements and provide information, but it's another thing if no one in the team says, actually, you know what, we don't have data for this, it's not our use case. It could be that you hit the jackpot and it is a much-needed requirement from the user's perspective, but it also could be not useful at all. For me as a QA, I think the biggest discovery was data: using dashboards and trying to quantify the impact of the bugs I was reporting. It was so powerful, because then I could go into these business conversations with CEOs and C-level people, speak their language, and understand the idea of quality much better. Because when I ask in interviews how much testing is enough, there are a lot of people who say: never enough.

(19:19):
You have to stop. You have to understand what is important to you, what actually matters. So it's one thing to have these stubborn, angry testers who are like: oh, this looks not perfect, and it was not fixed. Maybe it doesn't matter; maybe that's why it wasn't fixed, and maybe we're all pushing for this release. Of course, it's teamwork; it's not only this one person that should have a say. There should be someone helping them and saying: hey, actually, we're not fixing this because of that. Things like that. But yeah, how much testing is enough? When someone says, "oh, we always test, I would test forever if you gave me all the time": no, that's not a conversation I want to have when it comes to product success.

Elisabeth Hendrickson (20:01):
Yes, exactly.

Lina Zubyte (20:03):
When it comes to testing, a lot of developers I meet say: oh, I don't like testing, I don't like to test. And I get it, it's hard to test things. How could we change its reputation a little and show that it's fun to do?

Elisabeth Hendrickson (20:23):
And to be fair, I understand their perspective. When I hear someone say it's not fun, I hear a combination of things. One is not really having experienced the joy of discovering emergent behavior: how much fun it is to take this thing that you built, where you think you know how it behaves, look at it from the system point of view, and see how it behaves all the way through, not just the parts that you wrote and think you understand. Frankly, I find that fun, and I get that sometimes developers don't. But the other thing that I hear is also sort of an implied "I need to get onto the next thing, this is not the best use of my time because...", and then something about pressure and needing to get onto the next thing.

(21:14):
So in terms of finding it fun, one of the things that I've had success with before is pairing, which I love. Pairing, or ensembling with a group, and taking that moment to step back, look at the system as a whole, and explore it. And I have converted more than one programmer from "ah, testing" to "oh, this is actually interesting; oh, this requires me to tap into different problem-solving skills in my brain". They see the intellectual exercise: it's not repetitive, it's not a boring check-the-box, make-sure-it-did-the-thing activity, because frankly, that's not the best use of any human's time. Make the computer do the automated tests for that kind of thing. Really, the exploration and discovery, acting like a user of the system or looking at it through a different lens: helping people have that experience can help them find the joy in it.

(22:14):
But the flip side is also making sure, from an organization standpoint, first of all, that there's enough time, that nobody feels pressured to cut that discovery short, and second of all, that it is understood to be part of the developer's job, and therefore they get rewarded for it. It shouldn't be this thing where they actually get punished if they spend time on it, where they get in trouble for not having moved on to the next thing quickly enough. Instead, the organization as a whole systemically rewards everyone for discovery of information. And I don't mean rewards like bonuses, although that's totally cool too, but simply thanking people and drawing attention to it, feeding that which you want to see grow: drawing positive attention to "hey, look what we learned", so that we can celebrate the joy of discovery together, as opposed to making people feel like it is a waste of their time and they shouldn't bother, because that makes for a very negative kind of cycle.

Lina Zubyte (23:25):
I also like bug bashes, or bug parties, you could say, where you also pair people up. When we ran them at one of the companies, we would pair up, for example, a QA with a salesperson, or a developer with the CEO, and in the way they pair, one is the driver and the other is the navigator. The scenarios they uncover are so interesting, because they learn from each other. They see that the salesperson uses the product completely differently, and the salesperson may see how the QA clicks very differently on the same product. Even in any little pairing that I do, I always get surprised, because I see people navigate the product in a completely different way than I do, while I had this assumption that my team all do the same thing, that we're all on the same page. And it can be very fun: as they pair, they do a mini team-bonding exercise, because they're getting to know a person they don't normally work with, and they uncover bugs that are fairly interesting.

(24:34):
And I always also provide, not necessarily charters, but charter-like high-level scenarios: explore this functionality using this; this is what you should keep in mind. Sometimes we would have scenarios like "accessibility master": if you want, just go ahead and do something for accessibility. It's amazing what gets uncovered, and we say: whatever feels wrong, report it. That can be a bit of fun. But at one company, what happened was that at some point they were like: oh, we're not that many people, let's not pair, let's just do it one by one. And then it gets boring, because when you're on your own, you're like: ah, whatever, I'll click a bit, report a few bugs, and go away.

Elisabeth Hendrickson (25:20):
Yeah, the learning part. What you're describing, with people in different functions teaming up and learning from each other, is an example of a feedback cycle that is so important, especially with the people who are customer-facing, whether it's support or sales or marketing, whoever is writing about the product. The people who are externally facing are telling the world about this thing, and the people who are internally facing are working on building it. Build those bridges between those two worlds, so tightly intertwined that the people building the thing have a really good innate sense of how the people who talk about it to the outside world describe it, what they advertise as its features, what they say you can and can't do. The number of times I have worked with support where support was telling customers about undocumented features, creating the expectation that those things are going to continue to work the way they have worked before, because now customers are doing things with the product that, frankly, it was accidental that the product supported.

(26:34):
I remember one time shadowing a support call and learning, on that call, that a thing I thought of as "yeah, the product's capable of that, but nobody uses it and nobody should use it because it's terribly buggy" was actually central to how support was helping people use the product. And so it completely reshaped my idea of what's important in this product. So what you're describing, saying "oh, we are too few people, we shouldn't pair up, everybody work on their own": what a wasted opportunity to learn from each other.

Lina Zubyte (27:12):
I also like the social aspect here, because I was at one company where they asked: what is the reason people are not upgrading, and what can we do so that people upgrade? And then someone said: oh, we should convince this colleague of ours. And I was like: why? Because that colleague gets to talk to people, and if they have trust in the product, they will say: hey, upgrade. So this is support, right? Support is telling you: this is really good, upgrade it. Or support is telling you: use it that way. They are so influential, and we need to bridge the gap between what they're saying and what we think they're saying.

Elisabeth Hendrickson (27:52):
Yes, absolutely.

Lina Zubyte (27:55):
Coming back to this quality in the title, which I think is a very interesting topic: you said, and I'm quoting here, "The engagement end game goes one of two ways. Either we structure things so the organization no longer needs a head of quality or set the role up for success and find the right person to fill it." Could it be both: a head of quality supporting the organization on the way to not needing a head of quality, or the head of quality becoming something like an engineering lead? What would you say?

Elisabeth Hendrickson (28:29):
Yeah, I mean, I guess so. I have a bias against organizations having a head of quality just in general, because I feel like the skills that make someone a very good head of quality would also make them a really good leader in engineering, a really good program manager or head of program management, a really good agile coach if the organization has agile coaches: somebody who actually gets to be integrated in the process of development all the way through. Now, that said, I totally get that some organizations have adopted a quality coaching model, where the head of quality's job is to help the whole organization figure out: what does quality mean for us? And product management, that's the other thing I must not forget: those same skills lend themselves to product management as well. My friend Dale Emery taught me that politics is the big game of who gets to tell whom what to do.

(29:33):
And when there is a separate head of quality and product management and program management and engineering management, we have now increased the amount of politics in the organization. So could there be both? Yes. And I would really love to see that incredibly skilled person in a role that has a very clearly defined scope of responsibility, where they can bring those skills to the table in a way that really allows them to influence from the very beginning. And if an organization has chosen a quality coaching model and that doesn't turn into a big morass of politics, then I guess it's fine. Certainly, who am I to tell an organization how to structure their work? But be careful about it turning into a mess where you've got a whole bunch of really talented people, all of whom are arguing about who gets to make what decisions.

Lina Zubyte (30:30):
The politics game is very well phrased, because there are more people involved, and head of quality touches almost everything. So you may want to influence certain areas and feel like your hands are tied, because you cannot really do anything there.

Elisabeth Hendrickson (30:45):
It can be tremendously frustrating for everyone.

Lina Zubyte (30:48):
Exactly. And I guess it's a good question, if you're interviewing for a head of quality role, to ask the organization what they're expecting. They may be expecting some kind of panacea, that you are going to solve all the problems, which you may not be able to do at all.

Elisabeth Hendrickson (31:05):
Wait, we hired somebody with "quality" in their title, aren't they going to fix all the quality problems? That's how it works, right? Name magic.

Lina Zubyte (31:12):
Yeah. On the other hand, some people see quality as only testing: just the activity of testing, the status of testing. Some may even say, "oh, that's very traditional, this tester should just be testing this," and then the head of quality, with all their skills and understanding of engineering practices, starts questioning everything else. That could be very political and very dangerous. In a perfect world, what would an organization that doesn't need a head of quality look like? Would there be QAs?

Elisabeth Hendrickson (31:46):
No, and I want to explain that answer in depth. So let me tell you about the most effective organizations that I've seen, and I've seen this at multiple organizations, not only one, but I saw it and led it at Pivotal when I was there, where we did not have separate QA. In fact, we did hire people into a role that we called Explorer, because we recognized that we needed to bring in people who had that set of analytical skills: good question askers. And what we learned is that when they had a separate role from developers, so many of the dysfunctions persisted, even though we had a very, very collaborative organization. We were way more successful at producing really high-quality results when they were viewed not as "that special person does the special thing that we don't really understand", but instead as team members who just happened to come to the table with a very different set of skills, doing the same theoretical activities as the other developers on the team.

(32:55):
Basically all of our explorers became developers. But the combination of activities is: we were building the software, and as we built it at Pivotal, we practiced strict test-driven development. The other organization that I'm thinking of, where I also saw this work well close up (I know other organizations have done this, but I got to see it close up), was Atomic Object in Michigan, and they practiced test-driven development too. As a developer, before I write a line of code, I have written a test that fails, that shows that I need that line of code. It's really a design methodology, but what it gives me as an artifact is a set of executable tests that will tell me the extent to which the entire code base continues to conform to my expectations of what should be true. So practicing test-driven development gives us really high code coverage and a very comprehensive set of usually very fast tests.
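
As an illustration of that test-first rhythm, here is a minimal, hypothetical sketch using Python's built-in unittest module. The slugify function and its behavior are invented for this example; they are not from the episode or from any company mentioned in it.

    # Test-driven development in miniature: the failing test comes first,
    # then just enough code to make it pass, then refactor while staying green.
    import unittest

    def slugify(title):
        # This line exists only because a failing test demanded it.
        return title.strip().lower().replace(" ", "-")

    class TestSlugify(unittest.TestCase):
        # Written BEFORE slugify existed; watching it fail proves the test works.
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Explore It"), "explore-it")

        def test_strips_surrounding_whitespace(self):
            self.assertEqual(slugify("  Quality Bits "), "quality-bits")

    if __name__ == "__main__":
        unittest.main()

Repeat that cycle enough times and the suite becomes the executable record of expectations she describes: if it is green, the code still does what its authors intended.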

(33:53):
The emphasis is going to be on the strictly pure unit tests. So by the time we actually have a build come out the other end of the pipeline, if it's green, we know that the code does what we intended it to do. What we don't know is to what extent it exhibits the larger characteristics of what we were aiming for. Product management does acceptance testing, because if product management asks for a feature, they are in the best position to say whether or not they got what they asked for. And the team as a whole, which includes everybody who worked on the thing, is constantly exploring to discover risks. That's the thing I mentioned before, where you take a step back and try it from a completely different perspective, not just running the unit tests. Yeah, we run those all the time; they're green.

(34:46):
Great, but now look at it through that lens of: okay, I am a customer who has to lean on the accessibility features, like you mentioned before; I am a user trying to accomplish a task with it; let me vary the sequences or vary the data, the kinds of things that our unit tests are not going to tell us anything about. In my ideal world, everyone does that: takes a step back and looks through a different lens periodically, constantly, throughout the development cycle. So the combination of having repeatable, automated tests that run in the pipeline and locally and tell us if the code does what we expect, constantly doing acceptance testing on the things being delivered, and exploring to discover the emergent risks that we couldn't have predicted: that combination yields practically bulletproof software.

(35:44):
And my favorite example of this was Carlin Fox, who was a developer at Atomic Object years ago; I'm not sure where he is now. On my first day hanging out at Atomic Object, he sat me down in front of a thing and said: here, I have to go to a meeting, but just to give you a flavor of the kinds of stuff that we do, I thought it might be fun for you to explore this and tell me all the places where it's broken. And I didn't know him very well, I knew him a little bit, and I knew the organization a little bit, and I'm like: okay, I'm going to go break some stuff. And he went away for an hour, and then he came back, and I had to hang my head and say: I could not find anything wrong. I couldn't break this thing to save my life.

(36:26):
It was an application that would run in a kiosk in a hardware store, where you bring a photo of your living room and you want to see what it would look like if you painted it pink or whatever. And I tried all kinds of things. You bring the photo in on a USB drive: I tried yanking the USB drive early, I tried putting bad data on the USB drive. I tried all of the different capabilities of the software itself with actual pictures. Couldn't find a thing. Carlin comes back, I tell him this, hanging my head in shame, and he said: what I'm surprised you didn't find... And then he proceeds to show me all of the bugs that he knew about, which included things like: see, when I do this with this picture, that pixel there should be that other color. It's stuff that is not going to stop a customer from buying a can of paint.

(37:17):
It is the kind of thing that the person who built the thing might know about, but nobody would say it is a stop-ship, must-fix kind of issue. And when I asked him, well, how did y'all build this so that it's so solid, what he described was exactly what I just described: strict test-driven development, plus constant exploration with somebody who was really good at breaking stuff, who was a member of the team, with responsibility for exploring shared across the team rather than resting on one designated explorer. So that's what ideal looks like to me. And again, I can't emphasize enough how much value I place on testing skills. It's just that I think people who are really good at that are also really good at a whole lot of other things, and I want to see those people have more options than being stuck in a role where they're constantly fighting to be heard.

Lina Zubyte (38:12):
So what you're saying is that people who have the QA or tester role should likely expand their horizons a little and see if some other role, like product, or developer, or, I don't know, a leadership role, is interesting to them. Is that correct?

Elisabeth Hendrickson (38:31):
A hundred percent. Even if a given individual wants to stay in a dedicated testing role, just considering what those other roles would look like is a great exercise in thinking about navigating your career. So it doesn't mean you have to leave testing, but it does mean at least considering how your skills would translate, and that is going to open up new horizons in a career even if you end up staying in a dedicated tester role.

Lina Zubyte (39:01):
So the challenge here becomes this: you have been a QA, a tester, with this quality title for a few years. Then you apply for a new job, and they reject you because you don't have the engineering manager title, the product manager title, the developer title. What do you do there? What would you advise QAs who are trying to go into engineering leadership but struggle to convince leadership of their ability?

Elisabeth Hendrickson (39:31):
Well, within an organization, if you want to move laterally, I think it is your manager's job, and it is reasonable for you to expect of your manager, that they will help you navigate your career journey at that company. Because if you are performing well in your role, it is in the organization's best interest to keep you within the organization: you have so much context, so much that is really hard for brand-new people to learn. So really internalize that idea: helping you navigate your career at that company is part of your manager's job. And don't be shy about having that conversation explicitly: hey, I'm in this role now, and I really have an interest in exploring a role in another function, whatever that function is; how can we get me opportunities to explore that?

(40:33):
It's totally reasonable to have that conversation. If your manager says, "oh, but I need you over here, don't think about going over there," you have a bad manager, and now you get to take the initiative to explore possibilities yourself. I remember, as a leader, having a conversation with my HR business partner, who had somebody they were working with to kind of shop around within the company. So if your manager is a bad manager... don't complain about your manager to HR, that will not get you anywhere, but if you're in a larger organization, you may be able to go talk to your HR business partner and express your career interests. There are ways to navigate your current organization that don't rely on having a good manager. But don't hesitate to set expectations with your current manager about what good looks like to you and what you are looking for from them as a manager.

(41:32):
Now let's imagine, though, that you're applying to companies outside. You don't have somebody who can help you navigate; you're just applying to jobs, and you've been in a quality role. Let's say you've been doing test automation and you want to become a programmer. First of all, any company that rejects you out of hand: I'm going to suggest you probably dodged a bullet, and it's fine. The trick is to get the interview, and you may have to send out a whole lot of applications to get that interview. And then, for whatever the role is, I would suggest tapping into your social network. If you want to go into programming, there are all of the LeetCode-style interviews, which, by the way, I hate, I'm not promoting them, but I am saying that if you want to get a job as a programmer, there's well-known "cracking the coding interview" kind of content out there.

(42:25):
So practice your programming skills and practice for the interview. And similarly, if you want to move into product management, same thing. Also look for ways to move laterally. Even if you don't get the title you were aiming for, maybe you can move into an organization where the quality organization is called something else and has more influence. So again, it's a navigation thing: navigate into the role that you want, sideways if you have to. That would be my recommendation. And just don't let them get you down. If you get rejected, the key is applying one more time than you get rejected. So if you get rejected 300 times, apply a 301st time. This is your career. You're going to be at it for a long time; you're going to spend more time at work than on any other single aspect of your life except possibly sleeping. So just keep going, because you'll get there. You will.

Lina Zubyte (43:29):
So I think it's a good message to also use your social network. Most of my jobs and opportunities came about because you meet someone, you talk to them, and you realize that some passions click, that you have a similar mindset and would love to work together. So that's a great way to not have to convince and convince people: just talk to each other, be passionate about things, be curious about things, and you will get there eventually. So, to wrap up our conversation: what is the one piece of advice you would give for building high-quality products and teams?

Elisabeth Hendrickson (44:13):
The one piece of advice, I think, would be: pay attention to those feedback cycles. It's going to sound like one piece of advice, but doing this is not trivial. What you want to do is pay attention to how long it is between the time that we do a thing and the time we see what actually happened, and there are going to be many layers to "what happened". A feedback cycle, at the end of the day, is: I do a thing, and then I get to see what happened. I do a thing, and I get to see: was the CI build green or red? Then there's: did the product manager accept or reject the story that I thought I'd delivered? Then there's: does support believe in it, like you said? And then there's, ultimately: do the customers value it, and what does sales say?

(44:59):
So there are many layers to this, but pay attention to all of those feedback cycles, and then figure out: how do we shrink the time from doing a thing to seeing what happens? How do we make that the smallest possible interval? So that's one aspect of seeing to the care and feeding of your feedback cycles. The other one is keeping the feedback cycles from getting polluted. By polluted, I mean things like QA injecting work sideways into the organization because, although they're well intentioned, they're disconnected from the actual criteria that would make the company more successful. That's an example of pollution in a feedback cycle. Flaky tests in the build are another example. Pollution in a feedback cycle is anything that makes you not trust the information that you're getting.

(45:53):
And so seek out and remove the sources of that pollution, which may be as simple as: let's all get into a room and get aligned on what success looks like. It doesn't mean removing roles or people; that is the giant, ham-fisted way. The better way is to say: let's all get aligned on what success looks like. And if it's flaky tests, let's actually take the time to clean up the test suite so that we trust it. When it's green, we trust it. And when it tells us there's a problem, we don't just say "kick the build again, it will turn green this time, for sure." In a nutshell, that's it: see to the care and feeding of your feedback cycles; make them super tight, super short, and super clean.
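
To picture the flaky-test kind of pollution she describes, here is a small, hypothetical example in Python: a test that depends on the wall clock (and so sometimes lies) next to a deterministic rewrite whose red you can actually trust. The cache_is_stale function is invented for illustration.

    # Feedback-cycle pollution: the first test can fail for reasons that have
    # nothing to do with the code; the second gives the same answer every run.
    import time
    import unittest

    def cache_is_stale(written_at, ttl_seconds, now=None):
        now = time.time() if now is None else now
        return now - written_at > ttl_seconds

    class FlakyStyleTest(unittest.TestCase):
        def test_cache_goes_stale(self):
            written = time.time()
            time.sleep(0.01)  # hope the clock moved far enough; under load it may not
            self.assertTrue(cache_is_stale(written, ttl_seconds=0.005))

    class DeterministicTest(unittest.TestCase):
        def test_cache_goes_stale(self):
            # inject the clock instead of sleeping: trustworthy on every run
            self.assertTrue(cache_is_stale(written_at=100.0, ttl_seconds=5, now=106.0))
            self.assertFalse(cache_is_stale(written_at=100.0, ttl_seconds=5, now=104.0))

    if __name__ == "__main__":
        unittest.main()

The deterministic version is the "clean it up so we trust it" move: when it goes red, it is telling you something about the code, not about the machine it ran on.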

Lina Zubyte (46:34):
Lovely. Thank you so much, Elisabeth. I really liked our conversation.

Elisabeth Hendrickson (46:39):
Ditto. Thank you so much. It was such a joy to talk to you. I really appreciate it.

Lina Zubyte (46:44):
That's it for today's episode. Thank you so much for listening. I hope you enjoyed this conversation as much as I did; it definitely had lots of great thoughts. If you liked it, please tell your friends about it, subscribe to the podcast, share it, and provide some feedback. And until next time: do not forget to continue caring about and building those high-quality products and teams. Bye.