Sunday, December 21, 2014

Chess and Testing, Strategy and Tactics

I spent an evening reading Rikard Edgren et al.'s book (in Swedish) about Test Strategy, and I keep thinking that we're confusing and mixing strategy and tactics. From a small note in the book referring to the strategy of chess, I wandered into Wikipedia to read a little more about chess and picked up a few realisations.
Tactics are move by move, whereas strategy is about setting goals and long-term plans for future play. 
A lot of the Edgren et al. book is about tactics - a huge collection of potential tactics. Knowing tactics is very relevant to potential success in the game of testing. If your selection of tactics is very limited, your testing is limited. But tactics are not strategy.
Strategic goals are achieved by the means of tactics while tactical opportunities are based on the previous strategy of play. 
The article about chess strategies did not describe strategies very successfully. And it was the same with examples of test strategies. Looking at them, they appear as lists of selected tactics, founded in an analysis of the product, most often not documenting the why of the selection. Strategies and tactics are intertwined. 

I particularly liked this quote from the Wikipedia article:
"Strategy requires thought; tactics require observation" -- Max Euwe
Observing what happens to your testing after a move you make in testing is not an easy skill. But building further into strategy is even more difficult. For each tactic, it's not just the what and how of the individual idea that guides testing - it is also the skills and knowledge of the person testing based on that idea that determine whether it completes successfully with the right results. Observation is needed in testing both for the information we're supposed to deliver and for the choice of the next tactic to apply.

And the Wikipedia article on chess strategy offered yet another piece of advice.
In chess, one needs to become a 'master' before strategy knowledge becomes a more determining factor in game outcomes than tactics. Many chess coaches emphasise the study of tactics as the most efficient way to improve one's results. 
This leaves me thinking that it may well be this way in testing too. Perhaps we should focus on outlining and teaching tactics instead of strategy. Perhaps we already are, at least with the Edgren et al. book.

Building skills for strategy is building awareness of tactics, so that you end up with choices you need to think about in the longer term. After every tactical move, you're also better equipped to draw strategic conclusions about overall moves, if your ability to observe is in place. Strategy lives, and in testing there are perhaps very few strategic choices so set in stone that changing direction with later learning would not be possible.

Chess has a defined end; testing does not. Thus in testing, you need to actively think about the choices - what comes first, what gets left out? And if you find something extremely relevant very late in the project, it can still get included.

If a test strategy is "the ideas that guide test design", isn't a test tactic "an idea that guides test design"?




Saturday, December 20, 2014

Temporal aspects to a test strategy as the next idea to guide testing

As I started working on identifying an experience report I would deliver for #DEWT5 in January on Test Strategy, I hit a wall of confusion that I have not quite overcome yet. What is test strategy anyway - for me, in practice? And which experience related to it would I want to share?

Test Strategy is the ideas that guide test design.

At first I looked at the two most recent products I work with, and I made a few observations:

  • On one product, I owned strategy and another tester owned testing. On the other, I owned both strategy and testing. I'm sloppier about communicating my ideas on the latter. 
  • On one product, there is never enough testing, and it's completely deadline-driven to do the best work with what is available. On the other product, the schedule is flexible to include the right testing before releasing.
  • For both products, there's nothing about releasing that says testing stops there. With a continuous flow of releases, we react to feedback outside the time allotted for the feature.
  • There is no test strategy document. There is nothing I could clearly call a test strategy. Even the ideas of how to test are more generic guidelines than a project-specific strategy. 

Looking at the two products, I came to realise that the way we work on both of them is a continuous flow of changes into an overall vision of a product, and having a strategy beyond the generic ideas of "provide relevant information" and "do the most important thing next" seems off. I would not call the checklists we create to model the product a strategy - they do help think about the scope of testing though. I would not call the Jira tasks that outline testing time boxes a strategy, but they were a way of discussing strategic choices of where to use time, what to do in a shallow and what in a deep manner. But as skills grew, we gave up even those tasks and just work on the development tasks without plans of any kind - a day at a time. 

In relation to the changes going into the build to be released, I can well outline the work we have done and what we should do. I notice that I primarily choose what we'll test and how based on the set of skills available - I have developers test what they will manage (with a stretch), I have product managers test what they will have patience for, and I test the stuff that is hard to give to people who are less skilled in the types of testing I do. 

It seems to me that my test strategy is just the next idea of an experiment to do to provide information by testing. I try to choose the one that I believe will at least be of value, with the principle that time will run out anyway. 

Looking back at a test plan I wrote at my previous place of work, though, I clearly saw strategic thinking - identifying high-level areas and the ideas to guide those areas - as very important. But there someone else owned the strategy and the testing, and I would just suffer from the results of poor strategic thinking that drove focus onto too narrow a set of things. 

So this left me wondering: if a test strategy is the next idea to guide testing and builds one idea at a time, the goodness of the next ideas relies on who is thinking them up. Introducing more versatile ideas as strategy, without implementing those ideas as testing, could then be a good approach - in particular for transforming people who have one idea and are then out of ideas about what aspects to test for. But what am I missing if I just don't build anything more strategy-like than I do now in my projects? 

Could a test strategy be the ideas that have guided test design, built one idea at a time - the next? Playing with the temporal dimension of strategy seems at least an intriguing idea. 

Friday, December 19, 2014

Brilliant Tester Looking for Her Next Challenge to Take

Today was the last day for Alexandra Casapu (@coveredincloth) from Altom to contribute at Granlund (that's where I work as test manager / test specialist). I've known for a few months that she would be leaving us, but no amount of preparation is really enough when you have to let someone as great as she is go to explore new challenges.

Alexandra started over two years ago with the Granlund Manager product, and I clearly remember thinking many times about Altom calling her a junior tester. If she, with her skills and drive for learning, is a junior, I wonder what a senior looks like. Junior or not, I've tremendously enjoyed watching Alexandra grow as a tester, reach out for new things, and become important without making herself indispensable.

There are a few things I would particularly want to emphasise.

Over the last months, Alexandra has worked hard to transfer knowledge without creating test cases. Her contribution throughout the two years has been fully exploratory. I appreciated her mentioning today that she felt encouragement for autonomy but also support from me, and she really flourished with the autonomy smart people should always have. Her notes and explanations of what she has learned - which could speed up the new people's learning and keep all the knowledge she has built from leaving with her - have been very impressive. We at Granlund failed to assign the developer to be retrained as a tester in time, so she has had to focus on structures. And luckily, while she stops as a tester today, she will coach the developer in training for half of January's working days.

The issues she finds are to the point, clearly reported and well researched. And there are many of them. In the last weeks, I've needed to address the risks that not replacing her with another exploratory tester will leave us with: 100 bugs every month of the kind we used to find and fix, which we are now unlikely to find until the developer has been retrained. And there's a long way to go with that. The product managers have learned to rely on her thoroughness and consideration in testing the features, and will have an unbearable workload without her (or the likes of her). But we chose to first try the developer's retraining for a new career, before going back to Altom for another great exploratory tester once production has issues at scales we've avoided with her, and developers are firefighting instead of focusing on the new features they've promised.

She has worked in particularly challenging settings, still providing excellent results. The team she has worked with speaks Finnish, and writes requirements, emails, and Jira descriptions in Finnish - a language Alexandra does not speak. Yet she not only understands (because she works hard on overcoming all barriers) but asks insightful questions that those who can read the documentation in their native language don't get to ask. She has embedded herself in a team of developers who don't offer information, with weekly meeting practices and skype/flowdock discussions - and a local agent in me who voices some of the silent information. This team's communication isn't excellent even locally, and yet she manages to find ways to patiently gather the information she needs.

The team's developers have said that her joining the testing of certain areas is time wasted on long periods of learning, and she has shown them how true exploratory testers do things: learn a little, provide valuable information soon, and deepen your learning to become a product / feature expert. She has surprised everyone with that.

And she has also contributed significantly to our Selenium test efforts. First with ideas. Then with tests that were not quite right for maintenance - but she learned. And eventually, with tests that run on par with any of the developers' contributions. She is persistent, takes on any learning challenge, and drives it through with admirable focus.
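To give a flavour of what a maintainable Selenium test looks like - a minimal sketch in Python, where the URL, element ids and expected title are made-up placeholders for illustration, not anything from our actual suite:

```python
# A hypothetical, minimal Selenium check in Python. The URL, element
# ids and expected title are illustrative assumptions only.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # A maintainable test asserts one clear, stable expectation.
    assert "Dashboard" in driver.title
finally:
    driver.quit()  # clean up the browser even when the assertion fails
```

Much of running "on par with the developers' contributions" lives in details like these: stable locators, cleanup on every path, and one focused assertion rather than a brittle script.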

We would not want her to leave, but we also recognise and admire the reason she is moving forward: to learn about different things that will make her an even better tester. As far as I've understood, she is now looking for a project where she could work face-to-face with the team instead of remotely. So, should you be in Romania, or should you want to hire a brilliant tester from Altom to work locally on your project outside Romania, I would strongly advise looking into giving her the challenge she needs. Ru Cindrea, the account manager we've been in touch with, would be happy to talk about those opportunities from an Altom business point of view.

Funnily enough, the title of this blog post could apply to me as well. I'd be looking into moving to the US, specifically California. Meanwhile, I'll just work partly for Finland from California, as I will be heading there in just a little over a week. 

Thursday, December 18, 2014

Focus on some, be blind to some - need faster learning

I'm trying to think why true incremental development and co-designing features seem to be so hard. The reason I think of this is that just when I thought we had managed to do something better, the empirical evidence suggests that we failed. Now the question is whether we will learn from this.

Earlier, we did several features so that someone (our project manager) created a specification in collaboration with business representatives, basically turning their needs into a paper prototype of user interface sketches. The developers would look at that, ask whatever, and draw their conclusions on what to implement. I would look at it as a tester, see different things, and when seeing the features in action, notice things that wouldn't work - either because the user need and the design were out of sync, or because the design was insufficient in relation to the product context, or because there were still minor issues the developers had not noticed in the implementation. It was awful: at worst, we would do three rounds of redesign, as the feedback I provided would trigger very good discussions with the business representatives, and we would learn that what we had specified was nowhere in the neighborhood of what we actually needed.

To make things less awful, we changed so that we all sat together to do the designs, discussing the needs and the guiding ideas for the latest feature. As we discussed, the designs changed - almost completely. That is positive: with the collaboration, we are much closer to what we need. But as discussions tend to go, the vocal people tend to get too much attention. If we noted problems we had had previously with not understanding the availability of functionalities in different states of the program, it would hog our attention. If we talked about the labels and button locations on the user interface (like the business people wanted), it would hog our attention. So with all the discussion, in retrospect, we lost focus.

There are major concepts within our program that guide all functionalities. They are like a frame that never goes away. We failed to talk about those. In retrospect, it was obvious to me. It was one of the things where we always fail: seeing features in relation to other features, especially system features. And yet, now that I'm testing the feature, it's obvious that we failed to deliver working software from a very central angle. There's a whole bunch of small fixes we don't need to do now, but adjusting things on the level of basic concepts might be quite a bit harder.

There's really one big mistake we keep running into over and over again. Not having all the information at once is not the mistake. Not being able to pass on information we might in retrospect think we had is not the mistake. Our mistake is that we build things in chunks that are too big, and accept delayed feedback.

With two days before Christmas vacation and less than a week of work effort before a major demo, it's not exactly fun to tell people that we added something that appears to work to the extent we need to demo it, but the old stuff we had is quite badly broken. And that the new stuff only works in simple settings; placed in realistic production scenarios, it fails in very relevant ways.

We have a nice demo user interface with nice demo functionality. But is that what the system is about - hardly. We need to learn new ways of working together and learning together. Perhaps 2015 with mob programming could take us closer. A New Year's wish, perhaps?

Wednesday, December 10, 2014

How attitudes towards testing have changed in 20 years

Very soon - in about 6 months - I will have 20 years in testing and software behind me. And it all feels like yesterday. I've learned a lot, and have so much more to learn. I love the work I do.

With this idea in my head, I was checking through Twitter and all the retweets of my previous post - which I explained in my tweet as "developers not studying skilled testing and telling that testers are not needed" - when a realization hit me. The summary of attitudes I'm now facing, with agile wanting to make my profession extinct, is not at all different from what I was struggling with 20 years back. And yet it's all different.

Back in 1995, very few people would even recognise testing as a profession. There were no professional networks in Finland, and most testing was still done by developers. Where testers existed, they would be deemed less valuable, less appreciated. In particular, developers would insist on not needing the testing testers were doing, claiming that end users' feedback was "realistic" whereas testers' was not. And I remember many, many developers telling how testers would not be needed if only developers did a proper job at developing - and the ones who kept telling this usually thought they were the ones who did.

The attitudes on this level were very similar, but there were two differences that I find notable.

Back then, there was less of a culture and community of testers. That community has proved invaluable in building the skills that have stopped most of the developers I work with from talking shit about my contributions. Immersed in the culture of testing that testers co-create, a lot of tacit learning is passed on, and with practice, that learning builds my skills in delivering just the right information. It also is a great support network: if I ever feel upset or down, there's always someone listening, helping, offering constructive advice on how to cope and turn things better. Constructive and testers? Yes. Testers help other testers, just as they are there to help developers and business people. Testers have a community of support.

The other difference is that developers have found a testing of their own that they did not recognise 20 years ago. It is not the same testing testers talk about, but they tend to use the same term, as they still do not - just as they did not then - study skilled testing. There's a whole smart culture of unit test automation, which James Bach and Michael Bolton justifiably choose to call checking. When there are a lot of details to monitor and keep together with short cycles and a fast pace, knowing the things we know of and keeping them intact has built a developer testing culture that makes the idea of developers developing to professional quality much more likely.
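To make the checking distinction concrete, here is a minimal sketch of what a check looks like - in Python, with a made-up function purely as my illustration, not anything from the projects above:

```python
# A hypothetical, minimal example of a check: an automated assertion
# about a detail we already know of, re-run on every change to keep it intact.
def parse_price(text: str) -> float:
    """Parse a price string with a decimal comma, e.g. '12,50'."""
    return float(text.replace(",", "."))

def test_parse_price_handles_decimal_comma():
    # The check confirms known, expected behaviour; exploring for new,
    # unknown problems is the part that remains testing.
    assert parse_price("12,50") == 12.50
```

A check like this answers a question we already knew to ask; it cannot notice the problems no one thought to encode.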

After 20 years, developers have found checking as a means to enable quick changes, and rely a little less on end users contributing by reporting the simplest errors over and over again. Testers are stronger, together. But we still have not managed to get to a point where we appreciate the non-programming people's contributions to creating software enough to look in more detail at what happens there. 

Monday, December 8, 2014

Would you please stop putting me in a box where I don't belong?

I'm a tester. I love being a tester. But I hate people telling me what I do just because I'm a tester.

I know I do heavy job crafting (an article on that for Rosie Sherry's online publication is pending), meaning that I interpret my role in ways I find suitable for me and my organisation, so that some of the things I end up doing bear no resemblance to my job description. But the core of it stays: I evaluate the product by learning about it through exploration and experimentation (Bach & Bolton's recent definition of testing).

There's a whole bunch of critical thinking and hands-on empirical evidence gathering skills I apply. There's a wide pool of business-oriented knowledge I've acquired over the years that helps me see things many developers around me are blind to. And several product owners over the years have personally thanked me for valuable contributions to the success of their products with the mix of knowledge and skills I provide.

As a tester, I've earned one employer of mine half a million euros by finding ways to release sooner rather than later - exploring risks in layers rather than following a plan of what we would need to test.

As a tester, I've enabled the creation of a server product that was supposed to take man-years in less than a man-month, by putting the ideas together in a completely different way that was good enough for that time and purpose.

As a tester, I've saved numerous hours for my current product manager, who personally thanked me for helping him see that the software could actually work and fulfill more of his business needs than the team of developers had told him. With the hours he saved, he worked on other things that drove our business forward in relevant ways - opportunity cost matters.

As a tester, I've suffered through reimplementing the same feature three times before it hit the mark - and I've helped my team hit the mark with one main iteration of a feature.

I feel very upset seeing tweets like this:

I wonder where these unicorn developers and product owners are, who don't need / appreciate help from someone who identifies herself as a tester, since everyone I've worked with does appreciate it. There are programmers who do their jobs so that testers find nothing *they deem relevant*, since it's all just new requirements - stuff that the product owners *should* identify (but don't, without a helping hand).

I've spent a lot of time learning to be a great tester. I'm really, really good at what I do. However, the work I do bears much more resemblance to William Gill's list of the things Complete Product Owners would need to be aware of than to the simplistic ideas where all testing is placed in the development sandbox and automated. I'm more likely to be a product owner than a developer, but as a tester, I'm a very useful mix of those two worlds.

Like I just wrote for the talk I suggested for Scan Agile:
Testers don't break your code, they break your *illusions* about the code (James Bach). Illusions may be about the code doing what it's supposed to; about the product doing what it would need to; about your process being able to deliver with change in mind; and about the business growing with uninformed risks to the product and the business model around it.

I should add the illusion of the perfect developer and the complete product owner who don't need help to that list of illusions. I know great developers and great product owners who appreciate having an in-depth empirical analysis done by a tester, so that together we create more value faster. They are hardly perfect - I'm not perfect. But together we're pretty damn good!

I also need to comment on this:
I test different things in the same project differently - context matters. I don't create "abuser stories", except on rare occasions. There's plenty of work for testing without focusing on just the negative. I help create value more efficiently. So, please, developers, stop putting me and the likes of me in a box just because you haven't met or observed in detail real skilled testers in your projects. There's so much the agile community could learn from the context-driven testing community. Trying to build bridges is tiring when you all the time have to hear from people that, regardless of your continuous contributions, there's a theory (based on bad research, I might add) that says you will not be needed.

Skilled testers breed from a culture of testing. Agile is doing a pretty good job of trying to kill that culture out of ignorance.


Saturday, December 6, 2014

Skills and habits

Within the context-driven testing community, we talk a lot about skilled testing. Skilled testing is a great replacement for manual testing - a phrase that should really be banned, as testing is done with brains and has very little resemblance to manual work.

We talk about a great number of skills. The Exploratory Testing Dynamics cheat sheet by James Bach sums up some of them nicely. Critical thinking is core to it all. We need to be able to model and think of the system and its context of use in many dimensions, observe, work with test ideas and design experiments, and report and describe the work - all to evaluate the product by learning about it through exploration and experimentation.

I love the message James and Michael deliver on the list of skills they've identified: each of them is teachable. Each of them can be learned, and learned more deeply.

Skilled testing - and the skills of testing - have been my focus for a long time. We have a Finnish non-profit, founded this year on this very theme: testing worth appreciating, as it requires skills that don't exist everywhere. Skill allows us to do deep testing (as opposed to shallow testing) and surface threats to value and awareness of risks.

A week ago, a consultant friend took an afternoon to spend with my team at work. Something he said after that session stuck with me as relevant to all the thinking about skills that we work on. He mentioned that during our session of group programming, there were examples of skills that we have but habits that we were missing, and that we need to work with those skills more to build habits. The particular skill example Llewellyn used was checking code into version control without going through all the details of what we had just changed - something every one of us can clearly do, but which we clearly were not doing enough of to make it a habit that would neither scare us nor require focus.

My colleague in test, Alexandra Casapu, did a talk about testing skills, and she pointed out that skills atrophy. When unexercised, things you used to be able to do go away. This is very much related to choices about the habits we choose to have.

I find this a good thing to remember. It's not just skills we're acquiring; we're also turning those skills into habits so that we can use them effectively. Without regular practice, the habits won't get built. Some skills deserve to be left aside and allowed to atrophy. Some habits we've built should perhaps be allowed to wither away sooner rather than later - unlearning is also needed.

Never a dull day when learning more. The choices of where to focus one's time just seem hard - all the time.

Monday, December 1, 2014

Getting great speakers for a conference

Over the years, I've done my fair share of organising conferences. Some of the conferences we call seminars in Finland, but that is just a name for a one-track conference. I've been the chairman, participated in quite many program committees and content advisory boards, and organised smaller sessions to learn from all the great people I feel I want to learn from. I'm not done, far from it. But at this point, I feel I have something to say from the experience of organising conferences on how to get great speakers.

Let me first summarise my lessons:

  1. Many of the best talks (to me) are case studies / experience reports that consultants cannot deliver, and practitioners have less incentive to propose their talks to conferences. Consultants and wannabe consultants are more likely to reply to CFPs. There are fewer women among traveling consultants, so if you want more women, you need to get outside the consultant sector too. 
  2. Invited talks can be targeted at inviting anyone, not just the same old faces that did well last time. If you walk around and talk to people, you will find great stories that deserve to be told. Some people think great conferences are about great names though, and great names usually need to be invited, as they don't need to reply to CFPs to fill their calendars.  
  3. People in general appear to like being invited (recognised), being given feedback to build their talk in collaboration, and not wasting their effort on suggesting talks that end up discarded. 
  4. A mix of CFP as in Call for PROPOSALS (starts a discussion, isn't yet a fully formed presentation) and invitations would be my recipe of choice for building a great conference. 
  5. Paying for speakers' travel is a good practice. Paying for speaking would be a good practice too. Organising work on the side in the surrounding community is a good practice as well. The worst practice is not telling whether you pay for any of it at the point of sending out a CFP. 

CFPs, inviting people to participate in a CFP, and the uncertainty

A lot of conferences send out a call for proposals / papers / presentations - a CFP. A CFP is an open invitation for anyone to suggest what they would talk about. In organising a conference, we often seek to share the CFP with various groups, even encouraging individuals to respond to the CFP without a commitment to accepting their talk. Sometimes we just post it in the traditional channels and see what comes out.

Most replies to CFPs are from people who are selling something. They are usually selling their tool, or their time as a consultant / trainer. And then there are some people who just really want to share the good stuff they've done that others could learn from.

Many CFPs are calls for presentations, meaning that your talk is supposed to be ready when you submit it. There's a significant amount of work in getting a talk to that point, and many (good) people get rejected for not having done enough prior to submitting. Some CFPs are calls for proposals - ideas of what a particular person could talk about - and with those, the process of becoming accepted tends to include a lot more discussion and collaboration. You would usually be asked for a phone number, and expect people to call you to talk about your idea(s). The distance from saying you might want to talk to having a print-ready description can be significant. This form of asking for proposals is more on the side of asking who has which stories, hoping people will volunteer the information about themselves or their friends. It's also a lot more work for whoever is organising the conference talks.

There are very few CFPs that I've responded to, while there are a lot of conferences, both in Finland and abroad, that I've done a talk at. The longer I'm around, the less likely I seem to be to respond to a CFP. It's not that I would not want to talk at the conference. I just don't want to prepare a unique talk with a lot of effort put into it (and I need to do this before submitting) without the possibility of discussing the organisers' expectations - either lowering my effort because my topic isn't interesting in relation to other suggestions, or increasing the likelihood of the effort being used for value: delivering the talk.

Recently I've responded to a few CFPs, as I'm turning into a consultant again. One because of a bet with a colleague - which I lost, happily. Two others because I wanted to get in touch with people in a particular geographic area, thinking about future work opportunities. One because a friend wanted to co-talk. And there's one CFP that I responded to without realising it was a CFP and not an invitation - a conference I will not contribute to again. Being clear on the uncertainty of a speaking slot is a good practice.

You can also invite people to participate in a CFP, and that alone works a lot better than just sending out a CFP hoping people will catch it. You might have to ask many times, and at least I feel a lot of personal pressure: no matter how much I emphasise that I can't guarantee the selection, as a different group is making it, I feel bad when people I've personally appealed to submit do not end up accepted.

There's a limited number of speaking slots anyway. We just look to fill them with the best possible contents. "Best possible" may have many criteria defining it. Good value for listeners may not require a public open call for presentations at all. Most commercial conferences don't have one; they just rely on groups of people giving advice on who to invite.

Inviting people is caring - do your homework

I've been invited to the advisory board of a Finnish conference every year since 2001 - that is quite many conferences. That particular conference is commercial, very popular, and has had great contents built in collaboration: the commercial organiser asks candidate participants what they feel they would like to hear about, asks professionals like myself what people should hear about, and puts those two together in a balanced mix. I take special pride in going to these meetings with a list of people who have never spoken before, with a variety of topics and knowledge of who could deliver what in an up-to-date manner.

To be able to do that, I sit in bars after Agile Finland / Software Testing Finland meetups and talk to people about the stuff they are into. I make notes of who the people are and what I would want to hear more of. I'm always scouting. I use scouting for great presentations as an icebreaker topic, asking what the thing you should talk about would be, and helping people discover what it is. At first I did that to hear the stories told to me; nowadays I do it also just for the fun of it. It's a form of call for presentations, with a personal touch. And it works brilliantly.

I feel some of the comments on Twitter about getting speakers assume that inviting means you invite people who have talked before. I invite people I've never seen do a public talk, based on how they speak in a bar. If the content is good, I can help them fine-tune it and deliver it better. I've helped many people, and would volunteer for that again and again. That's how we get the best stories.

The people I scout for are usually people without the consultant's incentive to talk. They like being recognised for their great stories and experiences - they deserve to be recognised. And when invited, they work hard on doing well with their presentations.

Compensation issues

The last bit I wanted to share is that a lot of conferences still fail to make their principles on compensation clear. I'm sure you can get great local speakers, even some consultants, without paying their travel and accommodation. Local speakers might be just what you need for your conference to be great - local pride in accomplishments. But if you are seeking people who might travel to come to your conference, it would be a good practice to pay for the travel + stay, and to state that in advance without a separate question being needed.

I also believe we should start paying speakers for their time in delivering presentations. Some communities pay for time directly as a speaking fee; some pay by organising a commercial training on the side of the conference. The latter is very hard for many people to do within the same timeframe. Some conferences are built to have paid workshops on the side, and allowing a workshop on the side significantly sweetens the pot for the presenter.

Time at a conference is time away from other paid work. There needs to be something in it. Marketing yourself could be it. Traveling to new places at someone else's expense could be it. Meeting people in new communities could be it. Or it could just be another form of work you do, if you choose to set up conferences like the commercial organisations you compete with anyway - sometimes unfairly, lowering prices by avoiding the speaker payments.

For example, why would I want to pay to speak at, say, EuroSTAR? I have little interest in doing that. A usual track session there does not cover my travel, and most definitely does not cover the missed income from time away from work. Being big and important means there are many suggestions in sheer numbers, but the quality might not be good, with way too many consultants / tool salesmen with a sales agenda. There are real gems in the masses - really great consultant / tools people talks too. But the ones that will not be on the list could - I claim would - be even better. I base my opinions on being on the program committee in 2013.

Summary

Not all conferences are the same. It helps if you think through the slots of the conference you're organising and create visibility into your expectations before the CFP or the invites. There are a lot of people who will volunteer to speak, either by responding to a CFP or by responding to an invitation. You'll never fit them all. You need to choose somehow. Choose by knowing the people, personally talking to them about the depth of their experience. Choose the ones that excite you. Listen to a video of them talking. Ask around about experiences of hearing them talk. Take risks on some of the slots if it fits your conference profile.

If you want gender diversity, budget the speaking slots for gender diversity and be prepared to create a balance of CFP responses and direct invites. If you seek cognitive variety, you again need a mix of CFP responses and well-researched invites. Only people who feel they belong to your conference's community will respond to a CFP. If you want cognitive variety, you have to reach outside the usual-suspects circle, and only invites will do that.

There's good in CFPs. They are a great way of finding people with topics they want to present so much that they are willing to do the work without knowing whether they'll get the value of delivering. The value for them may be learning how to frame a talk so that it gets accepted. Or they may be fishing with the same talk in various conferences. Maybe they want to be at your conference and free entry is what they're after. Personally, I would not want to do the same talk for two major conferences, but that is probably just me. At least you know that the people who submit when asked want to be there.

Sunday, November 30, 2014

Exploratory testing teaches a growth mindset

I spent a significant part of this evening re-listening to a great keynote I remember being moved by at Turku Agile Days a few years ago: Linda Rising talking about The Power of an Agile Mindset. I remembered the basic message from the first hearing of the talk, so I spent this round on more of a personal reflection.

The talk is about the fixed vs. growth mindset - the latter of which Linda calls the Agile mindset. The basic idea with mindsets is that what you believe you're capable of determines what you are actually capable of. If you believe in effort and the ability to learn, instead of believing there's a fixed amount of smartness you were given, your results tend to be better. Actually, believing your capabilities are fixed makes you worse over time. There's a point in the talk that I like in particular: Linda says that regardless of what your mindset is today, it can change. And that changing a mindset happens through emphasising learning and making failing ok.

In my career as a tester, I found a concept that became central to the way I think about testing: exploratory testing. To me, it is not a technique; it never was. It's an approach. But listening to the talk, I realised that as an approach, it has also been a significant teacher of a growth mindset.

When I collaborate with testers who believe exploratory testing is pretty much all testing, we talk about learning. We talk about tests as experiments we set up to learn something of value about the software system we test. Experiments provide us information, sometimes failing to provide the information we expected, and we learn and adjust both our expectations of what is valuable and the experiments we would run next. We talk about testing being an endless task, and we encourage each other to start somewhere, fail and learn, try again - to put effort into testing in a manner that allows the best possible results with the time we have at hand. It encourages me to try and fail, so I can always try and fail without fear. It's normal, it's expected, it's what is supposed to happen. Trying is worthwhile and necessary. Learning that something works is just as valuable as learning the ways it doesn't. If the system seems to work, try harder, work more, think more. Only through really trying to show things as broken will you get close to showing they aren't. Close, but never there. We test with an attitude of loving the challenge, always knowing that regardless of the results, testing is a chance to learn a lot about the product and the context around it - and about ourselves.

As Linda in the video moves on to talk about bright little girls, praised for good results rather than effort, I realise how lucky I've been with the support of the exploratory testing community and ideals that praise the effort. I work hard to learn, yet make my choices about what to learn next as the most valuable thing that makes me better at what I want to spend my days on. At school I was a bright little girl praised for results, but I soon got into communities that praised the effort, and I learned to love the ability to learn over results. Exploratory testers - or context-driven testers - have been one of those communities. It's not what you are; it's what you are on your way to becoming. It is great to be around people who encourage me to have a growth mindset.

When I do talks on exploratory testing (well, testing in general, as all testing is exploratory), I seem to often remember to tell people that the best part of my work is that I get to learn so much. I get to learn every day, so that I'm just a little better every day, and I believe this will be the case indefinitely. Learning takes effort, and the effort is worth it. But as there's a limited number of hours in the day, even when you combine work and hobbies like I have, you need to choose which bits you learn. Various combinations of choices can help you be useful in software development.

Bright little girls in schools in particular would benefit from the message exploratory testers have to give on learning. I feel even more strongly that in addition to teaching programming, we should teach testing, and we should teach mindsets. We need to tell our bright little girls and boys that effort is important. It's just work to learn new things. And learning through testing is a great way of learning about how wonderfully interesting software is.

Saturday, November 29, 2014

Safe co-learning and learning about oneself

A few weeks ago I was asked to set up an opportunity to teach the "Teaching Kids Programming" course at a local elementary school in Finland while Llewellyn was visiting. I contacted a local school, Ylä-Malmin peruskoulu, as my own son is there in first grade, and we set up a few sessions with 8th and 9th graders.

I volunteered to co-teach, but also felt somewhat of a fake. I love computers and software, but if you've ever read anything I post on my blog, you know I'm not a programmer. I've passed various courses on programming, even implemented the occasional minor production feature, but I love testing. Not checking (the automatable part of testing) but testing - thinking and learning through exploring. Teaching programming, when I have high respect for people who enjoy it while I don't, is a bit of an odd combination. 

In the first class, I was walking around the room helping the students until the point where I was supposed to lead a part of the class somewhere in the middle of it. The first class was the largest, loudest, and had the most difficulties concentrating. I failed to get their attention and needed to give up - intimidated by the 8th graders. I could easily remember discussions with a friend of mine from high school who would become a teacher, and me swearing I would never ever want to teach grades 7-9; the behaviour our teachers could expect from us back then was just this - not paying attention unless you deserved it. 

Having failed once, I set out not to fail a second time, at least not in the same way. I would ask to start the class, instead of jumping in in the middle, and then opt to jump out later if I felt I couldn't remember the material - someone else's material is a very different experience than one's own. 

In retrospect, little did I know what "co-teaching" would mean. It meant that by Friday (the 3rd class), I would be co-teaching with a local teacher from the school, and that by the last of the four classes, Llewellyn would leave the class, leaving us teaching. It also meant that the local teacher will teach more modules from the Teaching Kids Programming materials, as he told us he would run the course through the spring semester with his computer science students. Co-teaching ended up being a great way of supporting us and enabling things to continue. 

All in all, this was a fun experience. But again, the stuff happening inside my head puzzles me the most. I felt safe co-learning the materials with the students. I realised again that I set the bar for the "programmer" label quite high: almost regardless of what kinds of programs I could write, as long as there were things I could not write, I would feel fake. I realised the bar is higher for me, and only set by me - perhaps a little by a brother of mine. 

I'm starting to feel that my year of exploring code & programming will actually end up taking me deeper. I've found groups I'm comfortable with. I've found application areas I'm enthusiastic enough about (not checking!) to want to program. One thing has not changed though: I've always known it would be just a matter of choice to learn more about programming. Choice is a time commitment though. It's always an opportunity cost - leaving out something else. 

I want to learn more together with my kids. Programming, exploring, the different fun stuff we can do with technology, and the wonderful application areas that I feel particularly passionate about. Passionate enough to harness the tool of programming beyond the "asking programmers to do the programs" idea that I've been working with so far, very successfully. 

Thursday, November 13, 2014

Facing my fears and feeling ridiculous

A fairly long time ago as a student, I wanted to apply for the board of the student union. I remember the day vividly, with a strong memory of the panic I felt as I needed to introduce myself in front of an audience of a relevant size. I was afraid of public speaking. Afraid is an understatement really - I was panicking. The mere idea of having to introduce myself with a few sentences caused physical symptoms, and while I did it, I was shaking so hard that people were genuinely worried I would pass out - for good reason.

I did not get to do what I wanted with the student union, but there were other ways of contributing that were probably more appropriate. That one day and that one experience led me to realise that I wanted to work on paralysing fears and change things, instead of accepting them. I took a course in oral communications, and went through the awful experience of speaking to the class on video and then watching that video with the teacher, so that I could not avoid it. I learned to talk about my own ideas and experiences as opposed to others' ideas, and through that I realised I could control the experience. I could control the fear. I continued stretching myself way beyond my comfort zone, taking a job that included lecturing and starting to do public talks.

People who see me present nowadays have a hard time believing the background story. I've worked hard to change my experience, and I still work hard on my presentations and contents. Professionally, it's been one of the smartest moves I've ever made. I'm no longer the least bit afraid of public speaking. When I felt discomfort doing webinars, I did more of them. I go and face my fears, and I grow.

Today, I participated in my second code retreat, facilitated by Adi Bolboaca. Or actually, my first one, as the other one I attended I monitored from the side. The idea of pair programming made me panic. I have no problem pairing up for exploratory testing or clarifying requirements, but the programming part brings out irrational fears. I had not really realised how relevant the fears were; I had been coming up with all sorts of excuses for why I wouldn't join these events. I needed someone I love to tell me that, knowing the background story of my fear of presenting, pair programming was something I needed to do. To get to the code retreat, I needed people to check on me to actually go and make room for it in my calendar, not giving me the option to back out.

The code retreat was a therapeutic experience for me as a tester, kind of like the video of me talking back in the day. I got to pair with five wonderful developers - one of them twice - who were friendly and helpful, and did not end up hating me like I feared. I did not feel useless. I felt I was learning; I felt I was even contributing every now and then. I felt grateful to the people who encouraged me with the idea that it would be ok to pair with developers even if I could not write code at all. With this experience behind me, I think every non-programming tester should take part in a code retreat and trust that people who are that enthusiastic about building their development skills are also happy to learn about collaborating with people very different from them.

During the code retreat day I got to work in Ruby, Python and Java - not C# at all, which would have been the working language from my office, as I found that too much of a stretch to begin with. All the sessions ended up being test-driven, with developers already experienced in TDD, and they turned me into a real fan of it - I want to learn more programming this way. While I avoided writing the code and focused more on commenting, one of the developers in particular was really nice and helpful, guiding me into taking my turn at writing too, without making me feel like the idiot I was setting myself up to be. Well, the fact that it was Python, and that I had done a little bit of Python during the summer, did not hurt in persuading me to try even without a proper IDE configured.
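To show what those first test-driven steps feel like, here is a minimal sketch in Python using Conway's Game of Life - a common code retreat kata. The names are my own illustration, not what we actually wrote that day:

```python
# A minimal TDD-flavoured sketch: the tests are written first, then just
# enough code to make them pass. Runnable with pytest; names are illustrative.

def next_cell_state(alive: bool, live_neighbours: int) -> bool:
    """One Game of Life rule applied to a single cell."""
    if alive:
        return live_neighbours in (2, 3)  # a live cell survives with 2 or 3 neighbours
    return live_neighbours == 3           # a dead cell comes alive with exactly 3

def test_live_cell_with_one_neighbour_dies():
    assert next_cell_state(True, 1) is False

def test_live_cell_with_two_neighbours_survives():
    assert next_cell_state(True, 2) is True

def test_dead_cell_with_three_neighbours_comes_alive():
    assert next_cell_state(False, 3) is True
```

Each tiny red-green step like this gave me, as the non-programmer in the pair, something concrete to observe and comment on.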

With this behind me, I recognise I'm not done. I did not conquer my fears, but I crossed the first road. And every day at the office is another road I can now cross. Some of the experiences I want include pairing with really good people - professionals at pairing. Some of the experiences will be pairing with people who don't want to pair on code with anyone, let alone me.

The main thing on my mind right now is: why did I think I would rather learn programming alone than pair up with people who volunteer to help me? Learning a little is still learning, even if I would like to be perfect. Having faced the first paralysing fear leaves me feeling ridiculous. How could I not see that wonderful people are wonderful, even when I'm afraid? 

Friday, November 7, 2014

Lessons Learned on Standardisation at a Finnish Standards Meeting

A few weeks back, I was invited to a Fisma (Finnish Standards and Measurement Association) meeting to discuss #stop29119. Fisma is the Finnish counterpart for international standardisation, participating as one member country in all the different committees someone ends up being interested in. The meeting was a futile attempt to do anything about an impossible theme, but I came out with some lessons I considered interesting.

With what I learned there, my summary is this: the whole standardisation thing is based on fooling customers of standards into believing they might be buying something useful and respectable. Fooling, because the process of creating standards really sucks.

1. Power of individuals who volunteer in creating standards

Listening in on the whole meeting - deciding on standards and introducing the ongoing work in the area of that particular working group - I came to realise how weak the standards creation process really is. It seems a standard emerges from someone having an interest in spending time on writing one, finding an appropriate (politically acceptable) title for it, and then working on the contents through a great number of stage-gates.

There is no requirement that the standard's authors actually be experts in the topic. It seems quite common that some professor / researcher from a random university volunteers time and effort on the creation of a standard, and if the member countries are not strongly against it, the process flows forward through various votes.

Countries decide which standards they participate in based on the interests of the people they have available. If there's an agile standard on the way (and there is - scary!), Finland will participate if someone volunteers for the work. Finland (or any other country for that matter) opting out of the work does not mean the work gets rejected. Rejecting requires active work, and more often the end result of disagreement is to create just one more standard with a slightly different title. Organisations pay to be in positions to volunteer, and organisations pay for the end results, financing a relatively complex system.

There is very little quality built into the process of creating the standards; quality is left for the paying users to assess. The requirements of expertise for entry are not exactly high.

2. Early detection of problems only

The standardisation process is a process with many stage-gates. It was interesting to listen to a discussion where a comment would be "how did we vote last time - we can't change our mind at this point". If you plan on changing the contents or getting the standard rejected, you have to vote against it consistently from the beginning and build a case on it not improving. You will also need allies from other countries. It was interesting to hear that "Japan and the USA voted against, but this still goes forward - they always vote against". And still the standard gets created, unless more countries are against it with severe observations marked for the review process. With 28 countries voting, and the process requiring severe observations for a country to be allowed to vote against, getting a standard through does not seem that complicated. Voting to disapprove gets you into a hearing - which had very negative connotations in the discussion. And being passive is approval. The process is awful.

Things such as early bad quality hiding more severe problems were not visible at all. If I have to read a document full of gaping holes, it is highly likely that the large holes steal my attention away from the subtler problems. Regardless, changing one's mind is not encouraged.

3. No withdrawal mechanism and how a standard dies

#stop29119 calls for the withdrawal of the standard, and I learned that the standardisation process really includes no such mechanism. Standards exist until they become obsolete, if they have been accepted. They become obsolete if, within 5 years, no new working group is formed to update the standard - even slightly.

The only way for a standard to become obsolete is that no one volunteers to work on it. Since volunteers are again representatives of paying organisations, those with interests in the standard's existence will drive the updating. A standard seems to die only if no one is willing to contribute to it financially. #stop29119 people buying the standard for review purposes actually contribute financially to the standard's continued existence. Not participating and not buying are the ways to kill a standard - and even then, someone isolated can quite easily keep it alive as long as there's a business to be made on its existence.

4. Standards are just guidelines; you don't have to use them if you don't declare compliance

The whole discussion about what the word "standard" means seems like a bad joke after the meeting. Standards are all optional, so the standardisation process itself does not include the idea of compliance actually being required. Compliance comes from a user of a standard requiring compliance downstream. Standards are just prepared paper piles that create work for those who keep them alive. We waste our efforts in thinking that a standard would be of any value, and that it would be based on anything other than someone being fooled into using it with vague marketing talk.

5. Thinking standards show an area is worthy

It was interesting to hear remarks that testing is now a worthy area, as it finally has a standard - and that areas of relevance gather more standards. If a standard does not apply, an optional other standard can be created by just tweaking the title for a more specific case.

The way standards are constructed is that you don't really mention the context / application area, but leave that for the users of the standard to think about. Which again underlines the idea that you must be a fool to pay for a standard, and that those who consider using the standard are the real audience for talking about what is bad in a particular one.

CONCLUSION

So, lesson learned: don't buy the standard. Don't finance it. Don't comment on it to help improve it. It might wither away and die in 2016 if no one starts a new working group to update it. As the Finnish national board annually participates in only 10-20% of the ongoing standards work, we mostly quietly vote yes.

The change must start with the users. Other than that, we waste our time and effort on something that is just rotten.


Thursday, November 6, 2014

Power of stories and how a one hour presentation changed the world around me

This blog post is about appreciation that I almost forgot to express, and my public thank you to Henri Karhatsu for doing something small that changed a lot.

Last spring, I was trying to find ways of injecting my team's developers with new ideas. I brought in two people from the community to talk to my teams for an hour each. One talk was a technical talk about single sign-on; the other was an experience report, a story on #noestimates by Henri Karhatsu.

Henri's one hour at my organization had a huge impact - it has literally changed the world around me.

For my first team, it was the trigger for moving to continuous deployment through giving up estimates, and building on that experience, it has changed the tools we use and the atmosphere we work in. It served as empowerment for people to start acting outside their perceived roles and to start trying things. Everyone on this team was in the presentation. The day after it, estimates were removed, and since then, a lot of other waste too.

For my second team, the message stuck with the developers, but the empowerment to act on it was only found later. The team continued estimating for several months, and I feel part of the delay came from the fact that the story was not shared with the whole team, as the project manager had not been in the presentation. This team too is now much lighter on estimates and, with the support of an external coach, is learning more about how to change things and make the best of what we've got.

Henri did his talk as a favor for me, and looking back, I can only wonder why I did not think to ask him for a visit sooner. This blog post is my public appreciation for a small thing in the community that made a huge difference in our organization. Ask people to share their stories with your organization. A small, short talk based on an actual experience can change the world. You learn who has great stories by going around in the community, and people sincerely want to help. It is wonderful.

This particular story was delivered with a personalized end message: a picture from our company web page, stating "Creativity is courage / daring". Henri closed a personal story by bringing it back to a message that is key to what our organization is about. And with that, we found the courage that was missing. I should have remembered to tell him sooner how much I appreciate what he did for us.

Discomfort of specifying behaviors

There's a significant difference in attitudes towards specifications that I see among developer colleagues close to me. The groups seem equally strong in numbers, so this is not the isolated behaviour of one individual.

The first group seems to take specifications as something given from the outside. They seem to believe in this other person thinking through the things that need to be implemented, and take their own sense of responsibility only within the context of that spec. When testing reveals that what was implemented makes little sense, these people easily turn defensive. They say that not being able to do things related to the purpose of the feature is a matter of opinion, and that they cannot work on fixes before someone specifies the wanted behaviours. Occasionally, they also become very strong voices in the discussion of what is a bug vs. a change, insisting that most bugs are things they never considered in the scope they accepted, and thus changes. Getting hung up on the definition of defect vs. change does not help: it is all just work that may need to get done, and fighting over definitions is time away from implementing the necessary changes.

The second group seems to treat specifications given to them as useful fiction that starts conversations. These developers seem to be at ease with discussions that result in dropping features out of scope, finding the core of what needs to be done and implementing that first. They have no problem making suggestions: it could be like this or like this, here are the pros and cons I see in these solutions. And their approach often ends up with a different solution than they suggested, building on compromises and a deeper understanding of what is being built.

Talking to people I place in the first group reveals a great deal of discomfort with how I would like to define the role of a developer. While I feel I thrive on the idea of deciding how things end up being (until changed as we learn), these people seem to be drawn far out of their comfort zones when making a decision and thinking through its consequences.

It seems I will end up having an equal number of both in my teams, and I'm thinking about how to make the first group willing and able to join the ranks of the second. People in the first group are heavily dependent on someone testing a great deal for them and someone specifying for them, whereas the second group is more independent and self-contained. This has a direct impact on the proportion of other specialists we need to add to the teams. Both go by the "developer" title but produce significantly different results in how finished things end up being after them.

Pairing people up is one approach, but it seems not to be enough. The rejection of responsibility for specifying behaviours, and the externalising of it, seems to come from an early lesson learned - one that seems very hard to unlearn.

I'm curious whether people have found ways of growing themselves out of patterns like this, and what the key realisation would be for the change to happen. Some suggest that old dogs don't learn new tricks, and that we might just have to compensate for the lack of tricks in one person with another person on the team who will do the tricks missing from the palette.

I wonder when we will learn to build teams of growing individuals instead of thinking of filling role positions. Given the chance to choose whom to work with, I always take people from the second group. But I wonder if I would then be missing out on something for lack of this diversity.




Wednesday, October 29, 2014

Participating in Hour of Code - a one-person view

I've been thinking and preparing for quite some time, but some weeks back I put together the final details to take action. I went to http://code.org to be reminded that Hour of Code week is December 8-14, which also serves as a nice start for the Smart Creatives Club I've volunteered to run with my son's school as soon as we get all the practicalities sorted.

Smart Creatives was a term I picked up from a presentation by Eric Schmidt at a time when I was trying hard to explain why I dislike the concept of a "code school" that misses out on the core of what makes me love working in technology. The following picture is from the presentation.

Instead of growing programmers of the kind I mostly meet, I'd like to see my kids grow up knowing technology and programming, but putting that together with an understanding of purpose (business) and creativity, and choosing the corner or combination they will love the most. I really love testing, and find that out of these three, my focus is more on creativity and business expertise, but the technical knowledge is there as well.

I've also grown really fond of a list that went around on Twitter a while back. I don't want to teach kids to write code; I want them to learn ways of collaborating to create solutions they would find useful. I want to emphasize collaboration.


Emphasizing collaboration makes Agile Finland ry a perfect home for the things I'm setting up. I'm really happy to be able to announce that my favorite non-profit is there to participate in the code.org code week by hosting an open, free Hour of Code for 30 kids on December 8th, 18.00-19.30, at Leppävaara Library. Agile Finland is equally supportive of the private Hour of Code sessions I will run with two groups at kindergarten Viskuri (the 5-year-olds' and pre-schoolers' groups) and with a group of 1st graders at Malmin peruskoulu. The one at Malmin peruskoulu will then continue as a monthly club - not about programming per se, but about creating together, in collaboration. So far I've decided we're going to work on a multimedia book created fully by the kids during spring 2015, meeting once a month.

Last week I had the pleasure of meeting a 15-year-old girl who, as far as I understood, did not consider computers something she would be particularly interested in. After spending a week testing with us - and being genuinely useful, with the courage to speak out - she is, I hear, saying that testing might be fun. I've agreed to hire her for some of the work I need done for my side business, and will invite her to join me (paid) to co-teach the Hour of Code. She might not end up with the love of technology and testing I have, but I think this might be infectious. Pushing code first isn't what would keep me engaged; perhaps the same is true for others, like her. She's super smart, and it would be a loss if she missed out on the best job there is in the world - creating with computers.

I've just published a call to action for others to join the Hour of Code in Finland, and I would be happy to help out locally. There's a huge need for volunteers to show the ropes to the young ones, and I'd love to see us agree that code is just one tool, yet a very powerful one.

Empowering developers to write great code

Good conversations are powerful. I made a new friend while organising Tampere Goes Agile last weekend. As he seemed curious about me presenting - something I don't get to do when I organise - he found a video of my talk in Latvia two years ago and watched it. Having watched, he pointed out that I had said something horrible. At one point in the conversation I'm having with the audience, I refer to how things were at my current, back then new, place of work. I talk of large amounts of bugs and the number of people testing, and the idea that with a lot of bugs there's a lot of fixing, so adding testers might not be a smart or necessary move. And I say that developers don't care for fixing bugs.

I found that observation particularly interesting, since the answers I tend to give come out of the context I work in. Back then, I was struggling with a new team that released once a month but was lost on all kinds of testing. Releasing once a month was "agile" to them. The work was very heavily led, content-wise, by a product manager who would tell the developers which bugs they could fix. The product manager would easily decide on refactoring as well; the team was very much powerless. The fact that the bugs got listed as feedback was new, and there was no actionable mechanism for fixing that many issues all at once. It wasn't regression that broke the product; it just was that way, and the end users (or the product managers) wouldn't come back with that feedback unless they really had to.

Talking about that point with my friend made me realise that the world empowered agilists live in is quite different from this one. You get a lot of say in how whatever you're implementing gets implemented, and you work on making the code beautiful rather than being instructed by the product owner not to refactor. You stand your ground on including what is important when doing the work, like (unit) tests. Refactoring is continuous, and there's no need to ask for a "six-month project" to take time to clear the mess you've gradually created.

In reacting by explaining more of the context I was in then, I realised two things. First, we've come a long way in the two years I've spent with the team, even though we still have a long way to go. Second, I was using the context at hand as an excuse to focus on things in my direct power. There are plenty of ways I could have focused more on empowering the developers. I could have focused on helping them find the bravery to refactor - on development skill - instead of what I was into: the study of my own testing skills in whatever context I was handed.

What may be horrible for some can turn into normal for others, and my developer colleagues had been in that world of normal for so long that they had, to an extent, given in to the perceived lack of power over how they do things. They were lacking feedback through testing. When they first started getting the feedback they had been asking for, as I joined, it really depressed them. The amount of issues and the lack of time to handle any of them is not exactly motivating. Having what was to be done dictated from outside the team wasn't motivating either. And behind the lost motivation, the driving force was the perceived lack of power.

The core of my answer from two years ago still holds. I find no sense in hiring more testers when the team's needed focus is clearly on fixing. But the idea of developers not caring to fix the bugs is an unfair statement. When you have long been pushed not to think, and to give in on the level of quality you would want to deliver, you may appear not to care. Often that is just a way of coping.

I know my developer colleagues a lot better now. I know they care. I know they can do things. I've had people over training them on unit testing (and refactoring while at it) and coding in general - things they did not get to do enough. And they've found the inner strength to feel empowered to do things better, through teamwork and new technologies that boost their motivation now that certain (performance-related) things are within the realm of possibility.

The same team I commented on two years ago as developers who don't care now works in smaller chunks, reacting to recent feedback instead of having to fix a pile of problems left behind by the lack of feedback they had always requested.

I just love the power of good conversations that leave you thinking to the point of writing a blog post in the middle of the night. This conversation in particular was a good one; the core reminder for me is that thinking positively about what my developer colleagues do turns into positive change over time. I should be careful not to imply that people who have not found the power to use their smarts are that way intentionally. Empowerment and encouragement are game-changers for the better. Timely feedback supports that. And there's more I can do to help them, faster.



Tuesday, October 21, 2014

Two things that bother me in #stop29119 discussions

Let me start with my stance on #stop29119: I have signed, and I think everyone in any way impacted by testing in software should sign it - customer and product organizations trying to succeed in the software business in particular.

But still, in all the discussions there are two points that bother me, and a lot of the critique revolves around them:

  • 'Standard' the word and its definition
  • Available options for the contents of the standard

'Standard' the word and its definition

A few days back, I saw Lisa Crispin tweet a reply to Michael Bolton that I'm strongly paraphrasing from memory. The reply related to a discussion on agile testing as per Lisa's new book, and pointed out that not everyone appreciates time spent on word games when they have a full-time testing job to attend to. It caught my attention, as it pointed at a problem I feel I'm having with context-driven testing arguments - a problem at the core of the discussions around #stop29119.

Many people strongly suggest there should be no standard. The word 'standard' implies a connection with regulation, and through that, compliance and lawsuits over non-compliance. I get that. I agree that 'standard' is a very risky word to use if we mean 'guideline'. But I think I hear the other side on this as well.

I live in Finland, and Scandinavia is quite different from the USA. Really. The whole discussion about standards isn't that big a monster in the world I live in. It becomes a monster occasionally with EU regulations, and it has always been a monster with things regulated from the USA. But in the little fluffy cloud I get to live in, it just doesn't matter that much. I'm sure it's not equally bad for everyone in the USA either. But in this particular fight for #stop29119, the definition of that word becomes a key issue.

The word 'standard' has many contexts. After all, we're talking of context-driven testing, in a field where the terminology has no set meanings. Words are communication between people, and for testing terminology we don't believe in fixed terms but in trying to understand and hear out what the other party is saying. How come the word 'standard' is so different from all the other words we use? How come we can't accept that a standard in one context is regulation and a standard in another context is a guideline?

The fact that this particular guideline, ISO 29119, has totally worthless contents that make things worse is a different story. But the basic idea of attacking the word 'standard' seems like word play I don't even care to win. The risk of compliance requirements is relevant in certain contexts.

Available options for the contents of the standard

The other point that bothers me is how many of the opponents handle the critique of not being constructive, as in not offering real alternatives. From the friendly yet disagreeing local discussions I've had, I've caught a strong feeling that for the organisations driving these standards there is a real need for a box labelled ISO 29119, but that the contents of that box could be something else as well. I may be optimistic, but I don't like the answer we tend to give on what to put in the box: nothing. We don't do standards. We don't believe in them. I don't believe in them. But I can respect that what drives my local colleagues towards a standard is a true belief in helping the ignorant get good testing. That box needs contents.

I have a personal suggestion as to what to put in that box: I want to put Rapid Software Testing in it. But to do that, we'd have to get past the idea of the word 'standard'.

So I have an option for the contents of the standard, and the ball is back in the standards organisation's court: if context-driven (or rapid) testing is no different from the current contents, how about changing them radically?

Testing starting from that base of values and ideas would be much more relevant than the paper-piling approach the current contents drive towards. It would help take things forward for the agile world while still remaining relevant for the more traditional mechanisms. We can't say the same about the current contents.

Win-Win?

To get anywhere from here, we need to start making compromises: compromises on how strongly we feel about the word 'standard', and compromises on how serious we are about the position that no standard-like documentation about testing can exist. The standardisation organisations need to win too; it's not exactly fair to say we want them to stop doing the business they are in. Or is this really something that can by no means be resolved?


Disclaimer: I'm tired of being attacked for my choices of words or my opinions. This post is very much my opinion, and I'm not going to stress over my choice of words beyond my good intent. I'm happy to engage in discussion that aims at mutual exploration of what I think and why - the kind that changes my mind. But please, picking on the semantics of words without the intent to communicate - let someone else do that. And reminding me of the fact that I'm not a native speaker is somewhat insulting. I just don't care for the exact words; I care for discussion and understanding of agreement or disagreement - communication.