Thursday, April 20, 2017

Dear Developer

Dear Developer,

I'm not sure whether I should write to thank you for how enthusiastically you welcome feedback on what you've been working on and how our system behaves, or to ask you to understand that this is what I do: provide you with actionable feedback so that we can be more awesome together.

But at the very least, I want to reach out and ask you to make my job of helping you easier. Keep me posted on what you're doing and thinking, and I can help you crystallize what threats there might be to the value you're providing, and find ways to work with you so the information is available when it is the most useful. What I do isn't magic (just as what you do isn't magic), but it's different. I'm happy to show you how I think around a software system whenever you want. Let's pair: just give me a hint and I'll make the time for you.

You've probably heard of unit tests, and you know how to get your own hands on the software you've just built. You tested it yourself, you say. So why should you care about a second pair of eyes?

You might think of testing as confirming whatever you think you already know. But there's other information too: there are things you think you know but are wrong about. And there are things you just did not know to know, and spending time with what you've implemented will reveal that information. It could be revealed to you alone, but having someone else there, a second pair of eyes, widens the perspectives available to you and can make the two of you together more productive.

We testers tend to have this skill of hearing the software speak to us, hinting at problems. We are also often equipped with an analytic mind that identifies things you can change that might make a difference, and the patience to try various angles to see whether things are as they should be. We focus our energies a little differently.
 
When the software works and provides the value it is supposed to, you will be praised. And when it doesn't work, you'll be the one working late nights and stressing over the fixes. Let us help you get to the praise and avoid the stress of long nights.

You'd rather know and prepare. That's what we're here for: to help you consider perspectives that are hard to keep track of when you're focused on getting the implementation right.

Thank you for being awesome. And being more awesome together with me.

     Maaret - a tester

Time bombs in products

My desk is covered with post-it notes of things that I'm processing, and today I seem to have taken a liking to doodling pictures of little bombs. My artistic talent did not allow me to post one here, but just talking about it lets you know what I'm thinking of: things that could be considered time bombs in our products, and ways to speak of them better.

There's one easy and obvious category of time bombs when working in a security company, and that is vulnerabilities. These typically go through a few different stages in their life. There's the time when no one knows of them (that we know of). Then there's the time when we know of them but others don't (that we know of). Then there's the time when someone other than us knows of them and we know they know. When that time arrives, it no longer matters much whether we knew before or not; fixing commences, stopping everything else. And there are times when we know, and let others know, because there is an external mitigation or monitoring that people could do to keep themselves safe.

We work hard to fix the things we know of before others know of them, because working without external schedule pressure is just so much nicer. And it is really the right thing to do. The right thing isn't always easy, and I love the intensity of analysis and discussion that vulnerability-related information causes here. It reminds me of other places, where vulnerabilities were time bombs we just closed our eyes to, and even publishing them wouldn't make assessing them a priority without a customer escalation.

Security issues, however, are not the only time bombs we have. Other relevant bugs work the same way. And with other relevant bugs, the question of timing sometimes becomes harder. For things that are just as easy to fix in production as while developing an increment, timing can become irrelevant. This is what a lot of the continuous deployment approaches rely on: fast fixing. Some of these bugs, though, have already caused significant damage by the time they are found. Half of a database is corrupted. Communication between client and server has become irrecoverable. A computer fails to start unless you know how to go in through the BIOS and hack registries so that starting up is again possible. Bugs with impacts beyond inconvenience are the ones that can bring a business down or slow it to a halt.

There are also the time bombs of bugs that are just hard to fix. At some point, someone gets annoyed enough with a slow website, and you've known for years that fixing that one means a major architectural change.

A thing that seems common to time bombs is that they are missing good conversations. Good conversations tend to lead us in the right direction on deciding which ones we really need to invest in right now. And for those we defer: when is the time for them?

And all of this after we've done all we can to avoid having any in the first place. 


Wednesday, April 19, 2017

Test Communication Grumpiness

I've been having the time of my life exploratory testing a new feature, one that I won't be writing details on. I'm having the time of my life because I feel this is what I'm meant to do as a tester. The product (and the people building it) are better because I exist.

It's not all fun and happy, though. I really don't like the fact that, yet again, the feedback I'm delivering happens later than it could. Then again, measured by the ability, interest, and knowledge available to react to it, the feedback feels very timely.

There are three main parts to the life of this feature. First it was programmed (and unit tested, and tested extensively by the developer). Then some system test automation was added to it. I'm involved in the third part of its life, exploring it to find out what it is, and what it should be, from another perspective.

As the first and second parts were done, people were quick to communicate that the feature was "done". And if the system test automation were more extensive than it is, it could actually be done. But it isn't.

The third part has revealed functionalities we seem to have but don't. Some we forgot to implement, as there was still an open question regarding them. It has revealed inconsistencies and dependencies. And in particular, it has revealed cases where the software, as we implemented it, just isn't sophisticated enough for the problem it is supposed to be helping with.

I appreciate how openly people welcome the feedback, and how actively things get changed as the feedback emerges. But all of this still leaves me a little grumpy about how hard communication can be.

There are tasks that we know of, like knowing we need to implement a feature for it to work.
There are tasks that we know will tell us of the tasks we don't know of, like testing the feature.
And there are the tasks that we don't know of yet, but they will be there.

And we won't be done until we've also addressed the work we just can't plan for.

Wednesday, March 29, 2017

Test Planning Workshop has Changed

I work on a system with five immediate teams, and at least another ten I don't care to count due to organizational structures. We needed some test planning for the five immediate teams. So the usual happened: a calendar request to get people together for a test planning workshop.

I knew we had three major areas where programmer work is split in interesting (complicated) ways across the teams. I was pretty sure we'd easily see the testing each of us would do through the lens of responding to whatever the programmers were doing. That is, if one of our programmers created a component, we would test that component. But integrating those components with their neighbors, and eventually into the overall flows of the system, was no longer obvious. This is a problem I find not all programmers in multi-team agile understand, and the testing of a component easily gets focused on whatever the public interface of the team's component is.

As the meeting started, I took a step back and looked at how the discussion emerged. First, a rough architectural picture was drawn on the whiteboard. Then arrows emerged, explaining how the *test automation system* worked before the changes we are now introducing - a little history lesson to frame the discussion. And from there, all together, we very organically talked about chains and pairs and split the *implementation work* across the teams.

No one mentioned exploratory testing. I didn't either. I could see some of it happening while creating the automation. I could see some of it not happening while creating the automation - the parts I would rather have people focus on after the automation existed. And I could see some of it, the early parts, as things I would personally do to figure out what I didn't yet even know to frame as a task or a risk.

Thinking back ten years, to a time before automation was useful and extensive, this same meeting happened in such a different way. We would agree on who would lead each feature's testing effort, and whoever led would generate ways for the rest of us to participate in that shared activity.

These days, we first build the system to test the system, explore while building it, and then explore some more. Before, we used to build a system of mainly exploration, and tracking the part that stays around was more difficult.

The test automation system isn't perfect. But the artifact that we, the five teams, can all go to and see in action, changes the way we communicate on the basics.

The world of testing has changed. And it has changed for the better.

Tuesday, March 28, 2017

World-changing incrementalism

As many exploratory testers do, I keep going back to thinking about the role of programming in the field of testing. At this point in my career, I identify both as a tester and a developer, and while I love exploratory testing, maintainable code comes close. I'm fascinated by collaboration and skills, and how we build those skills, realizing there are many paths to greatness.

I recognize that on my personal path of skills and professional growth there have been things that really made me more proficient, but also things that kept me engaged and committed. Pushing me to do things I don't opt into myself is a great way of not keeping me engaged and committed, and I realize in hindsight that code had that status for me for a long time.

Here's an idea I still believe in: it is good to specialize in the first five years, and generalize later on. And whether it is good or not, it is the reality of how people cope with learning things: taking a few at a time, practicing and getting better, having a foundation that sticks around when building more on it.

If it is true that we are in a profession that doubles in size every five years, it means that in a balanced group half of us have less than five years of experience. Instead of giving the same career advice to everyone, I like to split my advice on how to grow between these two halves: the ones coming in and getting started vs. the ones continuing to grow in contribution.
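For anyone who wants the arithmetic spelled out, here is a minimal sketch, assuming (my assumption, for illustration) the doubling happens in one discrete step per five-year period:

```python
# Minimal sketch: if a profession doubles in size every five years,
# everyone added during the last doubling has under five years in it.
size_five_years_ago = 1.0
size_today = 2 * size_five_years_ago

newcomers = size_today - size_five_years_ago  # joined within the last five years
share_with_under_five_years = newcomers / size_today

print(f"{share_with_under_five_years:.0%}")  # prints 50%
```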

I'm also old enough to remember the times when I could not get to test the code as it was created, but had to wait months for what we knew as the testing phase. And I know you don't need to be old at all to experience those projects; there are still plenty of them to go around. Thinking about it, I feel that some of my strong feelings about choosing the tester vs. developer path early clearly come from the fact that in that world of phases, it was even more impossible to survive without the specialization. Especially as a tester: with phases it was hard to timebox a bit of manual testing and a bit of automation, as every change we were testing was something big.

Incremental development has changed my world a lot. For a small change, I can explore that change and its implications from a context of years of history with the product. I can also add test automation around that change (unit, integration, or system level, whichever suits best) and add to those years of history with the product. I don't need an either-or choice; I can have both. Incremental development gives me the possibility, and it is greatly enhanced by the fact that I'm not alone. Whatever testing I help us realize we need to do, there's the whole team to do it.

I can't go back and try doing things differently. So my advice, for those who seek any, is this: you can choose whatever you feel like choosing; the right path isn't obvious. We need teams that are complete in their perspectives, not individuals who are complete. Pick a slice, get great, improve. And pick more slices. Any slices. Never stop learning.

That's what matters. Learning.

Changing Change Aversiveness

"I want to change the automatic installations to hourly over the 4-hour period it has been before". I suspected that could cause a little bit of discussion.

"But it could be disruptive to ongoing testing", came the response. "But you could always do it manually", came a proposal for alternative way of doing things.

I see this dynamic all the time. I propose a change and meet a list of *but* responses. And at worst, they end in *it depends*, as no solution is optimal for everyone.

In mob programming, we have been practicing the idea of saying yes more often. When multiple different ways of doing something are proposed, do them all. Do the least prominent one first. And observe how each of the different ways of doing things teaches us not only what worked but what we really wanted - and how, without actual experience, we will fight over abstract perceptions, sometimes to the bitter end.

This dynamic isn't just about mob programming. After first noticing the pattern of having to fight for changes that should be welcomed, I've ended up paying attention to how I myself respond in ways that make others feel unsafe suggesting changes.

Yes, and... 

To feel safe suggesting ideas, we need to feel that our ideas are accepted, even welcomed. If all proposals are met with a list of "But...", you keep hearing no when you should hear yes.

The improv rule of "Yes, and..." turns out to have a lot of practical value. Try taking whatever the others suggest and offering your improvement as a step forward, instead of as a step blocking the suggestion.

Acknowledge the other's experience

When you hear a "But...", start to listen. Ask for examples. When you hear of their experiences and worries, acknowledge those instead of trying to counteract them. We worry for reasons. The reasons may be personal experiences, very old history or something that we really justifiably all should worry about. The perception to whoever is experiencing a worry is very real.

A lot of times I find that just acknowledging that the concern is real helps move beyond the concern.

Experiment

Suggest trying things differently for a while. Promise to go back, or to try something different, if this change doesn't work. And keep the promise. Take a timebox that gives an idea a fighting chance.

People tend to be more open to trying things out than to committing to how things will be done in the long term.

Monday, March 27, 2017

The Myth of Automating without Exploring

I feel the need to call out a mythical creature: a thinking tester who does not think. This creature is born because of *automation*. Somehow, because of the magic of automation, the smart, thinking tester dumbs down, forgets all the other activities around, and just writes mindless code.

This is what I feel I see in comparisons of what automation does to testing, most recently this one: Implication of Emphasis on Test Automation in CI.

To create test automation, one must explore. One must figure out what it is that we're automating, and how we could consistently check the same things again and again. And while one seeks information for the purposes of automation, one tends to see problems in the design. Creating automation forces a focus on detail, and this focus on detail, which comes naturally with automation, sometimes needs a specific mechanism when exploring freeform. Or perhaps the mechanism is the automation-thinking mindset.

I remember reading various experience reports of people explaining how all the problems their automation ever found were found while creating the automation. I've had that experience in various situations. I've missed bugs by choosing not to automate, because the ways I chose to test drove my focus on detail to different areas and concerns. And I've found bugs that leave my automated tests in an "expected fail" state until things get fixed.
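For readers who haven't used the "expected fail" pattern, here is a minimal sketch of what it can look like in pytest. The bug ID, the search function, and the test are hypothetical stand-ins for illustration, not our actual product code:

```python
import pytest

def search(term, locale):
    # Hypothetical stand-in for the real system under test; it reproduces
    # an imagined bug where non-English locales return no results.
    return [term] if locale == "en_US" else []

# The test documents what the software *should* do. Marking it xfail keeps
# the suite green while the bug is open, and strict=True turns an unexpected
# pass into a failure, reminding us to remove the marker once the fix lands.
@pytest.mark.xfail(reason="BUG-1234: search ignores non-English locales", strict=True)
def test_search_finds_term_in_finnish_locale():
    assert "ääkköset" in search("ääkköset", locale="fi_FI")
```

Run under pytest, this shows up as XFAIL rather than a failure, so the known bug stays visible in every run instead of being silently dismissed.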

The discussion around automation feels weird. It's so black and white, so inhumane. Yet at the core of any great testing, automated or not, there is a smart person. It's the skills of that person that turn the activity into useful results.

Only the worst of the automators I've met dismiss the bugs they find while building the automation. It saves them time, surely, but it misses a relevant part of the feedback they could be providing.