Quality Bits

Testing Better: Breaking Down E2E tests with Bas Dijkstra

March 19, 2024 Lina Zubyte Season 2 Episode 15

Automated E2E (end-to-end) tests are frequently the first thing companies introduce when trying to reduce manual testing effort: "There's this big manual regression suite, so why don't we automate it? Right? ... Right?"

Well, Bas Dijkstra thinks otherwise. In this episode, we talk about automating the right things instead, and why E2E tests are not always the best idea. You'll hear about test models, test ownership, and where to start when you want to learn more about automation in a bit less common way than most testing courses recommend.

Find Bas on:

Mentions and resources:

Follow Quality Bits host Lina Zubyte on:


Follow Quality Bits on your favorite listening platform and Twitter:
https://twitter.com/qualitybitstech to stay updated with future content.

If you like this podcast and would like to support its making, feel free to buy me a coffee:
https://www.buymeacoffee.com/linazubyte

Thank you for listening! ✨

00:00:05 Lina Zubyte 

Hi everyone, welcome to Quality Bits, a podcast about building high quality products and teams. I'm your host, Lina Zubyte. Automated end to end tests are a topic I may have unpopular opinions about, which are that they are just a cherry on top, and that more often than not they're more harmful than helpful for us. So naturally, when I saw Bas Dijkstra talk about removing e2e tests or breaking them down, I was intrigued: I went to read his articles and I had to get him on this podcast. In this episode, we're talking about e2e tests (automated ones): what they are, and you will hear some tips and tricks for breaking them down, as well as where to start when you want to learn more about better automation efforts. Enjoy this conversation. 

00:01:19 

Hello, Bas, welcome to Quality Bits. 

00:01:22 Bas Dijkstra 

Good morning, Lina. Thank you for having me. 

00:01:25 Lina Zubyte 

Could you shortly introduce yourself? 

00:01:28 Bas Dijkstra 

Sure, my name is Bas Dijkstra. I am a tester, automation consultant, trainer. I live in the Netherlands. I've been doing this for 17 years now. Most of the time I've been an engineer, so individual contributor, as part of a development team. But for the last couple of years I've been mostly focusing on consulting, training, a bit of public speaking, all kinds of different fun and interesting things really. 

00:01:58 Lina Zubyte 

I have seen your name pop up in my LinkedIn feed from time to time. There will be some interesting posts, and one of the most recent posts... I really enjoyed it... was about E2E tests. Because I feel like E2E tests are the bread and butter of many testers or QAs. However, the more experience we gain, the more we may realise that maybe it's not the best starting point and there are other types of tests as well. In one of your posts you were also encouraging people not to have that many E2E tests. How was your career, though? Where did you start? Were you already not writing E2E tests? Because usually when QAs start learning testing or automation in general, they start from this top level, which maybe is not the best choice. But how was your journey? Did you also start from that level? 

00:02:51 Bas Dijkstra 

Of course I made all the mistakes, because that's how we learn. I'm so happy that you said we're focusing on automation here, because again, terminology is confusing sometimes. That post was specifically about automation and about how we approach automation, but I definitely made all the mistakes there myself. Literally my first projects were like: we have a bunch of regression tests that we've been doing manually for years. We don't want to do that anymore because it's boring and it takes a long time. Please sit in the corner, don't talk too much, but automate all of that for us and just replicate, through code, everything that's written down in those end to end tests. 

00:03:41 

And as you can imagine, and as I've learned over the years... it took me an embarrassingly long time to learn that, by the way... that's probably the worst way to approach your automation. But I've definitely done that, and progress was even measured by the number of scripts that we automated. So we had this big Excel sheet where all the other Excel sheets were referenced, and it was basically: how many of these are running green? Because that's what's important, right? Three nights in a row, because that means they're done and we can check them off and move on to the next. It's not the most efficient way. We're talking about 2006, 2007, so a couple of years ago. Maybe more than a couple. That's how we looked at automation back then: as a means of executing your regression test scripts more efficiently, which in itself is not a bad idea. It's just the approach that we took that was... well, suboptimal. 

00:04:49 Lina Zubyte 

It would be fun, you know, to change the assertions to the opposite and see if they're actually still green. 

00:04:55 Bas Dijkstra 

They probably were. 

00:04:57 Lina Zubyte 

You could rig the metric, right? 

00:04:59 Bas Dijkstra 

It's a system that can be... a metric that can be gamed very easily, yes. And over the years, I've slowly learned how to do better. I'm still learning new things and learning how to do better every day, so I'm pretty sure there's a lot of stuff that I still don't know. But the problem is what I still see. Again, as a consultant, I'm in the fortunate position that I get to meet a lot of people, talk to a lot of people, speak with a lot of teams and organisations. And unfortunately, there are still a lot of teams that take that same approach to automation that we took in 2006, 2007, and say: well, we have a bunch of end to end regression test scripts. We think they're boring and we want to automate them. Can you help us automate those scripts? 

00:05:51 

It's a bit of: why haven't we learned this lesson still? Maybe a bit of frustration, even, that got me to write that post. But it's not like I came up with the idea, because plenty of much more experienced people have talked about this before. It's not like this had suddenly come to me in some kind of epiphany or something. It's just writing down, summarising, a lot of the thoughts I've been having and things I've been talking about over the last couple of years. 

00:06:23 Lina Zubyte 

I think with years we learn better ways of the how. Because, in a sense, when you get the task "automate these boring manual test cases", the first thing you could think is: OK, I can do it on my own. Then I don't need to bother anyone, I don't need to change the system; I can just do it by myself. However, the more we learn, the more we realise: OK, there could be a different way, because you could automate it differently, right? So what we're not trying to say is that automation is bad. 

00:06:54 Bas Dijkstra 

No, not at all. 

00:06:55 Lina Zubyte 

We still should do it, however, maybe we could do it a little bit differently. Yet before we get into this, maybe it would be important to describe what is an end to end test. How would you describe it and how does it differ from other types of testing? 

00:07:11 Bas Dijkstra 

So it depends. That's how you can tell I've been a consultant for a while now. There are actually two, or maybe three, definitions of end to end tests that we could work with here. They all apply to some extent. One of them is what I call vertical end to end: a test that exercises all the layers of an application, all the way from the UI at the top of my application technology stack, architecture, layer, whatever, all the way down to where the data is stored and back. So that's one - the vertical end to end. The horizontal end to end is a test that captures the entire life cycle of data as it moves through different applications and systems. And then we have the sort of end to end test that combines those two, really, which makes it even bigger in scope. 

00:08:16 

Most of the tests that I see and that I've worked with in the past, and definitely those I started with all those years ago, were doing both. So they sometimes touched five, six, seven different applications, and all of them from an end user, full stack, UI-driven perspective. So pick any of those three and it's an end to end test, but most of the time it was a combination of horizontal and vertical. 

00:08:48 Lina Zubyte 

There are different test models. There's the test pyramid, there's the honeycomb, there's the Swiss cheese model. And they have all kinds of different tests in them, not only end to end tests: we may have unit tests, integration tests, contract tests (and I know you're talking about those as well and helping people learn more about that). The model with the biggest share of end to end tests is, you could say, the ice cream cone, because we just have a lot of E2E tests on top. It's still quite common, I think, in many companies. I don't know how you see it in the projects you're working with. Is it an ice cream cone? What's the most popular? 

00:09:28 Bas Dijkstra 

I'm not sure popular is the right word, because everybody hates it when they have something like that. And in all honesty, I don't really care. People like to talk about these models and these shapes, and this shape is right and that shape is wrong. The problem is that those models don't take any kind of context into account. Again, I don't really care what the distribution of types of tests in your overall test suite looks like. Maybe that looks like an upside down triangle (it's not a pyramid, it's a triangle), or a triangle pointing up, or a honeycomb, or an hourglass, or an ice cream cone, or just a square. I don't really care. 

00:10:21 

Because a lot of it depends on context. And context is a very broad concept, of course. So technology stack comes into play, but also skillsets and the tools used and the architecture and testability and automatability of your application. Still, I like to talk about that model because everybody knows it, everybody hates it. For me it's a great way to start a discussion, because what's important in the end is making your tests as small as possible, but not smaller. There's definitely such a thing as a test that's too small. But for me it's a conversation starter. What are you doing with your tests? What are your tests? What's the scope of your tests right now? How's that working for you, and what is it that you're really testing? Do you really need all those components, all those layers in your test? What can we do to make our tests a little bit smaller? Because again, the smaller your tests get, the cheaper they are to write and to run. The easier it is to.... 

00:11:26 

Debug them, to analyse them in case things fail. They run faster, which is basically why we're doing automation in the first place: the faster we get feedback about the current state of our code and our systems, the better. And whatever the ratio between the different scopes of tests... even those scopes are blurry. Especially with integration tests: you probably recognise that if I ask five people for a definition of an integration test, I get six different definitions, at least. For me, the most important thing is: how do we get those tests to be the right size? Not too big, not too small. And whatever kind of ratio we end up with in the end, I don't think that's really important. 

00:12:14 Lina Zubyte 

Talking about e2e tests, those heaviest, slowest, most expensive tests: have you worked on projects where you helped break them down, and could you share an example of how it went? Where do you even start when you see a big suite of e2e tests? 

00:12:35 Bas Dijkstra 

Yes, it's not always as easy as that hypothetical example that I gave in the post that sort of triggered this conversation. Absolutely not. Real life is often much more difficult than these situations and a nice little diagram and "oh, you should do this and then magically everything is fine". Unfortunately, real life doesn't work that way. But the first thing I typically start with, when I see people writing these UI-driven, full stack or end to end tests, is checking: am I repeating the same path through my application multiple times? So, a quick example from a number of years ago... 

00:13:17  

Where a fellow test automation engineer, to the best of their experience and knowledge, so I don't blame them because, well, I've been there as well, wrote tests for an online mortgage calculator. So that was a form with seven different steps, and at the end you get some kind of feedback: yes, you can apply for a mortgage with us, or, I'm sorry, I'm afraid we can't help you. And there were all these different variables in each of those form steps. So first they automated a single path with a single set of values through all of those scripts. Which isn't a bad start, so maybe you can improve that, but as an example of an end to end test, it's not that bad.  

00:14:15 

It's demonstrating that an end user can open a browser, go through that entire form and get feedback about the potential result on the screen. So that's where they started. Then they thought: hmm, why don't we reuse that end to end test? And they executed it, I think, with about 25 iterations, just with different sets of data. But that changes the scope of the test, because now we're no longer verifying: can I, as an end user, go through all of these forms, and do I get the result that I expect to see back on the screen? I'm now testing the business logic that's behind it. And unless your application is very, very, very poorly written and architected, your graphical user interface has no role to play there. 

00:15:12  

So the first thing we did, a very simple step, was just to see, with every form step, what happens if we click that next button. It turns out it was a very simple API call that sends all of the data for that form step to the back end, and that's when we said... hmm, maybe we should just remove the UI from the equation and start replicating or replaying those API interactions. Because what we're testing here is not the end user experience, it's the business logic that determines whether or not someone is eligible to apply for a mortgage. So that's a very, very common first step. Am I verifying that an end user can do something here, or am I testing back end logic and just using the UI as the interface to do that? 
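The shift Bas describes here can be sketched in a few lines. This is purely illustrative: the eligibility function and its rules are made-up stand-ins for the real back end, which isn't shown in the episode. The point is that the 25 data-driven UI iterations collapse into cheap, parameterised checks against the logic itself, with no browser involved.

```python
# Hypothetical stand-in for the back-end logic the form's "next"
# button posts to; the real service and its rules are not shown
# in the episode, so these names and rules are invented.
def mortgage_decision(income: int, loan_amount: int, has_debt: bool) -> str:
    if has_debt or loan_amount > income * 5:
        return "rejected"
    return "approved"

# The 25 data-driven iterations become parameterised checks
# against the logic itself: no browser, no UI, just inputs and
# expected outcomes.
CASES = [
    ({"income": 50_000, "loan_amount": 200_000, "has_debt": False}, "approved"),
    ({"income": 50_000, "loan_amount": 300_000, "has_debt": False}, "rejected"),
    ({"income": 50_000, "loan_amount": 200_000, "has_debt": True}, "rejected"),
]

def test_mortgage_decisions():
    for inputs, expected in CASES:
        assert mortgage_decision(**inputs) == expected

test_mortgage_decisions()
```

A single UI-driven journey can then stay around as the one end to end check that the form actually reaches this logic, while the data variations live at this cheaper level.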

00:16:11 Lina Zubyte 

How did the test engineer there react? Or what are the roles that can help out in this breakdown? So first of all, there were e2e tests and then you said, OK, let's have API tests. Who writes those tests? 

00:16:27 Bas Dijkstra 

The team... which is a useless answer. My next answer is going to be "it depends", which is equally useless, I know, but it really does. Often it will be someone who has something like test automation engineer or SDET or quality assurance engineer or whatever in their LinkedIn bio; someone who writes code to help support testing. In some teams, that will be just a tester. I don't mean that in any kind of derogatory way: just, the tester, that will be the tester. Sometimes it will be someone who calls themselves an automation engineer, sometimes it will be the developers, and sometimes, and I think that's actually the best way to do it, which is why I started my answer with "the team", it's people doing that together. 

00:17:30 

Because you're writing code to help you do testing more efficiently, and that's the interesting and sometimes confusing position that automation is in, right? So we're doing software development, but we do that to support our testing, which means that it's a little bit of a schizophrenic kind of task. Because to be good at it, you need to be a decent software developer, but also understand what good testing looks like. And what the role or the designation or the business card says for the person who's going to do that, I don't really care. What I care about is that it's being done, and that it's being done in a proper way. In some teams and organisations it will be developers, sometimes it will be testers, and sometimes, hopefully, it will be those two roles working together, because I think lots of interesting and beautiful things can happen if people work together. 

00:18:36 Lina Zubyte 

I agree. I think the answer I would see in my head is also: the whole team. The team actually owns this, because testing is a part of development, so they should do that. However, if there already was this dysfunction, you could say, of siloed people: OK, you go to your corner and write this e2e suite for us to run. And then someone comes in and says, "hey, break down those e2e tests", that person may be like, "oh my gosh, what's going on?" On the other hand, the team may be like, "hey, you know, there's the tester person that does this". So we tend to categorise people and things and tasks into certain boxes. 

00:19:16 

Breaking down the e2e tests I think is a wonderful exercise, because you may also find out it's not only API tests. It could be a contract test that you need, and if it's a contract test then you may need to implement it across the whole company, so you don't really just implement it for one team. On the other hand, it could be a unit test, which should be a part of the developers' work, but maybe they don't know how to do it. Maybe they don't have any unit tests, which could be an even bigger kind of shock to find out. And I wonder how to tackle these kinds of situations, because as easy as breaking down the e2e tests sounds, it could affect the whole system and the whole company. 

00:20:00 Bas Dijkstra 

It very likely will. It's not a short process, it will take time, and because it involves people and the way they do their work and the way they see their work, there will be friction. But it's about taking small steps. Don't expect to change everything overnight, because that never works. It's trying to do a little bit, just a little bit, better every day. I know there are all these clichés and memes on LinkedIn about that, but there's definitely some truth behind it. It's just: what can we do today to make it just a little bit better? Sometimes that's as easy as rewriting or refactoring a couple of tests. Sometimes that's having a conversation with the developer about... 

00:20:56 

What can we do in our application code to make it a little bit easier to make our tests smaller? And how can we improve testability, automatability of the code or our application in general? Or just, well, hmm, why don't we try out this for a week or two weeks or however long it takes and see what happens, and see if it helps us? Because I think this might be beneficial for us. Most people are willing to do it. It's easier, actually. It takes a little bit longer, but it's easy if you do things in small steps instead of coming in and saying well, from today onwards, we're going to work like this because what we've been doing for the last 10 years is crap. Because nobody's going to listen to that. It's much more about taking small steps and showing lots of little bits of value. 

00:21:54 Lina Zubyte 

So let's say we're taking those small steps, we're improving. We're breaking down the E2E tests and we're testing the right things the right way. And then is there ever a situation that you could imagine there's no E2E tests and in what cases that would be? 

00:22:12 Bas Dijkstra 

That's a very interesting question. There will always be end to end testing. This comes all the way back to the start of our conversation, where we said: are we going to talk about testing or are we going to talk about automation? There will always be end to end testing. I think it's always a good idea to have a human being try out your application for a bit, and if you're careful and diligent, that's a tester, before you put it into production. If you say, well, I just want to ship it, your users are your testers, basically. At some point people are going to interact with your application and/or your code, your system, and form an opinion about the quality of your product or of the latest build or whatever. Maybe that's a tester. If you do continuous deployment 10 times a day like the Netflixes and the Amazons or whatever, your end user is that tester. That's not going to go away. 

00:23:18 

For the automation part, in most cases, you'll probably still have a couple of them. But in a lot of situations you can do with far fewer than you typically would. So you'll still need some... "need" is a big word here... but there are a lot of situations where having some of these tests can still be very beneficial, but only after you've done all the smaller scope tests that are easier to write, faster to run, all those things. Completely removing your end to end tests is a trade off, really. Once you've done pretty much everything, and I don't know if you ever get to that stage where you can say, well, we've done everything and we're done now and we really have a minimal set of end to end tests, then it's up to you to say: well, are we still going to invest all the time... 

00:24:17 

And the effort, and therefore the money, to maintain that far smaller suite of end to end tests? What value are they still providing? Or are we going to say: "well... let's just have a tester or someone else, a product owner or whoever, go through the application very quickly with every build or every release, because that's just more efficient"? And as you know, a human would see a lot of things that automation doesn't. So again, I hate it, but it depends, really. But I think it's a very good idea in general to just have a look at your current set of end to end tests and ask: does every test that we're doing really have to be an end to end test? And there's a good chance that the answer is no. 

00:25:09 Lina Zubyte 

I see end to end tests, automated end to end tests, as a cherry on top, right? They should not be the first thing we do, but they can give some useful information. But even if we don't have them in a project, I wouldn't see it as a deal breaker, in a sense. I've worked on projects where we added them almost last, because we would do this manual check and then we would get this kind of feedback; anyway, when we added them, it was just two or three journeys that we added. 

00:25:40 Bas Dijkstra 

Yeah. No, that's what I tried to say, you just said it way more eloquently, but that's what it is. You do all the other things first that give you fast feedback: very focused, targeted tests, checks, inspections, whatever. And then at the end you decide: well, do I want to place that cherry on top? Am I going to go through the additional effort that it takes me to create that cherry and put it on top? Or am I going to say, "well, I don't think we really need that anymore, because a human being is looking at our application anyway before it's shipped"? And I think that's a very healthy conversation to have. Is it still worth it to do that? Because sure, maybe some of those things can be automated. That doesn't mean it's worth our time and effort and money, absolutely. 

00:26:45 Lina Zubyte 

However, the reality is we still have lots of e2e tests in many companies. 

00:26:49 Bas Dijkstra 

Oh yes. 

00:26:50 Lina Zubyte 

And your article on breaking down the end to end tests was really nice, I really enjoyed reading it. One thing that I noticed there, that I kept smiling to myself about, was: hey, diagrams! That is something I truly treasure, and I think that's what I do as well as a QA when I join a team: I visualise the architecture, I draw the diagram and then ask questions about where and what is being tested. And with years in testing, I realised: OK, there actually is a huge link between testing and architecture, which at first sounded scary to me, and I was like, what do I have to do with that? What are your opinions on architecture, testing, diagrams? How does that help us? 

00:27:35 Bas Dijkstra 

Especially in automation, a lot, because any kind of reasonably complex system has many different interfaces. An interface can be an API, it can be a database, it can be the code, it can be a queue. And it could also be that graphical user interface. Lots of people, again including myself, start their automation journey by replicating, by means of automation, what they're used to doing as a human being: the way they interact with an application, but with code. The problem is that the graphical user interface is the only interface in your application that's specifically written to be consumed by human beings, so it's optimised for consumption by human beings. All the other interfaces in your application are optimised to be consumed by other pieces of code, which makes them, just by nature, way easier to automate. But unless you have a good grasp of the architecture of the system and know about those interfaces, what they look like, which component is responsible for what part of the overall process, it's going to be very hard to pinpoint, to scope down your tests. 

00:29:05 

Because if all you know is the graphical user interface, you're going to do all your testing and all your automation through that graphical user interface. And just looking at repositories or pieces of code or whatever is often very difficult for people to understand. So like you, I am a big fan of making things visual. Say: well, we've got these components, this is where this part of the process happens, and that's important for us to test now, so why don't we scope our tests to those components and those interfaces, and mock out or just ignore the rest, depending on testability and context, putting it out of scope for our test? Again, visualisation and diagrams help so much. Even for people who aren't maybe that experienced or that comfortable opening the hood of our application, to say: well, what's going on under that hood? Because again, automation works so much more efficiently when you target those components under the hood individually, instead of doing everything from an end user perspective. 
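Scoping a test to one component and mocking out the rest, as described here, can look like the sketch below. The component and service names are hypothetical, invented for illustration; the technique itself, replacing a dependency at a component boundary with a mock, is the point.

```python
from unittest.mock import Mock

# Hypothetical component: price calculation that depends on an
# external tax service. The names are made up for illustration.
class PriceCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, net: float, country: str) -> float:
        # External call we want out of scope for this test.
        rate = self.tax_service.rate_for(country)
        return round(net * (1 + rate), 2)

# Mock the dependency at the component boundary instead of
# driving the whole stack through the UI.
tax_service = Mock()
tax_service.rate_for.return_value = 0.21

calc = PriceCalculator(tax_service)
assert calc.total(100.0, "NL") == 121.0
tax_service.rate_for.assert_called_once_with("NL")
```

The test now exercises only the pricing logic: the tax service, the database behind it, and the UI in front of it are all out of scope, which is exactly the "smaller test" the diagram conversation is meant to produce.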

00:30:32 Lina Zubyte 

That was such a good point about the graphical interface and that that's what we're used to, very frequently. In some of the interviews I conducted for QAs I had an exercise: for example, if there was a senior person, at the very end I would show them a diagram of an architecture. Guess what the result was? Most people would be freaking out. They had never seen one, because they were so used to just the graphical interface. That's what they knew. And when I would start asking, OK, how do you test this part or this part, they're like: "I'd do an end to end test again". 

00:31:08 Bas Dijkstra 

It's also why I struggle a bit with how we teach testers about automation, and I'm trying to do my little part in improving that. The first thing you start with in many different courses is: we start with Selenium or Cypress or Playwright. To some extent, I understand it, because that's what people can relate to and that's what they've seen; they're used to graphical user interfaces, so it makes sense. It's also why I sometimes struggle with a lot of these tools, and I don't mean to turn this into a low code automation bashing exercise, because it's not, there are absolutely fantastic tools in that space. But a lot of them focus on graphical user interfaces as well. 

00:32:06 

And what some of them (not all, but many of them) do can basically be summarised as creating very inefficient tests more efficiently. But the same is true for, again, the way we teach and talk about automation in general. So many people started their automation journey with the Seleniums or the Playwrights or the Cypresses, ignoring the fact that with Cypress and Playwright you can do all kinds of funny stuff under the hood as well, but the UI kind of automation. And when I tell people that that's, by a reasonable margin, the most complex and difficult and expensive and hard to do kind of automation there is, they say: "ohhh, what?" And then you show them that writing unit tests is actually the easiest part of automation, because they are small and cheap to write. But people have this notion about the tests that don't go through the UI, your subcutaneous, API, integration, unit kinds of tests...  

00:33:07 

That it's difficult or hard, or a developer's job, maybe. But when you look at it a little bit deeper... and this is why I love testing at the API level so much, because you get this balance: you're testing a lot of your components as a whole, so you're much closer to the end user experience and to what's important to the end user than with unit testing. But from an automation perspective, it's so much easier than going through a graphical user interface. Again, because you're working with an interface that's been designed to be consumed by computers in the first place. 

00:33:49 Lina Zubyte 

I do think this is sort of revolutionising the way we teach testing and I'm very grateful to you for actually sharing these thoughts and making people think and question things that maybe you know, the way we started is not the best way. And in your post about breaking down the automated end to end tests you wrote: “Doing this also requires learning more about automation than just knowing how to wield the UI driven automation tool, and you should. In fact, that's where your learning journey should start if you ask me.” So where do we actually start? What resources would you recommend here? What can we learn? 

00:34:29 Bas Dijkstra 

First of all, I think it helps if you know more about architecture, APIs, those kinds of interfaces. Learn programming, that helps a lot. Just good old fashioned object oriented programming principles. Even if you're starting to do things with low code tools, it still helps you, because object oriented programming is not about syntax or optimising if statements or whatever. It's about thinking about your code in a modular, reusable way. The developers are doing that, hopefully; most of them are, or at least they say they are. And the same is true for testing and automation. If you want to write good automation, it helps if you understand those concepts. 

00:35:28 

And then I think people could start with unit testing and learn more about unit testing. That doesn't mean they should take the task of unit testing out of the hands of the developers, because the developers should still do it. But it gives you the ability to have a conversation with your developers: "oh, what do you do when you write your unit tests? What are you covering? What are you not covering? Can we add a little bit of coverage here? Because for me, from a tester perspective, this is important information, and maybe we can capture that in a unit test." So that's where I would love to see more people start their journey, because yes, for many people that is a big step. I know that it takes more time than just pulling a UI automation tool from the shelf, whether that's a commercial tool or just open source libraries, it doesn't really matter. In the short term, that's easier because you work with interfaces that are familiar to you, but what you need to keep in mind is that this should be the cherry on top. Again, as you so nicely put it, this should not be our starting point. This should be what we finish with when we've built our cake. Our automation cake or whatever. I don't know. Let's stop talking about pyramids and start talking about cakes. 

00:36:58 Lina Zubyte 

Yeah, automation cakes. I approve. I like cake. 

00:37:01 Bas Dijkstra 

Yes. Who doesn't? Who doesn't like cake? 

00:37:05 Lina Zubyte 

I think there are people who don’t. 

00:37:07 Bas Dijkstra 

Well, not many. There are definitely more people who don't really like talking about automation pyramids, but that's where I'd start: learning a little bit more about that. And there's so much good content out there in that area as well. Learn about unit testing, unit testing frameworks, what unit testing looks like, how to do it. Because it is actually much easier than writing an end to end test. You can have people writing, admittedly, very basic unit tests in like 10-15 minutes, given that they're comfortable writing a bit of code. With these UI tests, there's so much more plumbing that needs to go on before, so much more scaffolding you need to do, and a big part of that is adding a test runner or unit testing framework to your solution anyway, so why not start there and then pull in other kinds of libraries and things to do the other kinds of testing? Start at the bottom instead of starting with the cherry and then trying to fit the cake underneath all your cherries. 
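To give a flavour of the “very basic unit test in 10-15 minutes” Bas mentions, here is a minimal sketch in Python. The `apply_discount` function is a hypothetical example invented for illustration; the shape (a small function plus a test with plain assertions) is the point, and note how little scaffolding it needs compared to a UI test.

```python
def apply_discount(price, percent):
    """Return the price after applying a percentage discount,
    rounded to two decimal places (hypothetical example function)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Happy path: 10% off 100.00 is 90.00.
    assert apply_discount(100.0, 10) == 90.0
    # Edge case a tester might suggest covering: a 0% discount.
    assert apply_discount(19.99, 0) == 19.99


test_apply_discount()
print("tests passed")
```

In a real project the test function would be picked up by a test runner such as pytest rather than called by hand, but the whole thing is still far less plumbing than standing up a UI automation suite.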

00:38:20 Lina Zubyte 

We will add a few of the resources, at least your article for sure, maybe a little bit of more things to the episode notes. So to wrap up this conversation and the topics we've touched on here, what is your one piece of advice for building high quality products and teams? 

00:38:39 Bas Dijkstra 

That's a big question, Lina. My one piece of advice is: have more conversations. Really. I think in general we're doing much better than we did, say, 10-15 years ago, when the developers did the developing and testers did all the testing and there was this wall in between. But still I think there's so much to gain if we work together a little bit more closely, as a developer with a tester, and that requires change and some additional skills, maybe from both sides really, but it makes things so much better in the end, because first of all, two people see more than one, and each person comes to the table with their own experience and their skills. 

00:39:31 

You get instant feedback and instant learning, so I would love to see people do more of that, especially now that so many of us are working remotely. It's harder to do it that way, but it's maybe even more important, because it is unfortunately very easy to just sit in your corner, in your kitchen, in your bed, wherever it is you're working from, and just do your thing and not really work together. And there are teams that definitely get that right: they do a lot of pairing and a lot of working together in all different kinds of ways, shapes and forms. But I would love to see more people do that, or just try that. And that can be as easy as just striking up a conversation with a fellow tester or a developer: “Hey, I see you're working on this; this relates to what I'm going to do in a moment, because I have to test what you are doing... 

00:40:34 

How about we work together on this, and we do the development and the testing in sync, together, and then see?” And a lot of the times it will help both, because the developer might learn a little bit about what that tester actually does and how they look at a piece of code or a feature, in the same way the tester learns a little bit about what the developer actually does, and maybe becomes a little bit more comfortable with the code and the way the code works, and can say “well, hmm, maybe we can retrieve that piece of information that I'll be looking for when I do the testing anyway in a unit or integration test here, and make the process a little bit more efficient in that way.” 

00:41:19 Lina Zubyte 

Wonderful. Thank you so much for this conversation. 

00:41:22 Bas Dijkstra 

Thank you for having me, Lina. It was my pleasure. 

00:41:24 Lina Zubyte 

That's it for today's episode. Thank you so much for listening. Check out the episode notes for any resources you may be interested in. Subscribe, share this episode with others, and until the next time do not forget to continue caring about and building those high quality products and teams. Bye.