Thursday, December 27, 2012

Testing fast and slow

We're a few days into yet another monthly release, this one with the team's developers feeling less motivated, as the work was analyzed for them and they were asked not to think too much. The end result is a nervous tester in the team, when bits and pieces are developed in isolation without much consideration of what purpose they're supposed to serve. There is no spec, and there's an atmosphere of denial for any issues I may point out. Not a normal case, fortunately. The pressure is on for others as well, I think.

Feeling the schedule pressure makes me test in a different way. I try to do a lot in a short interval of time, and I report bugs quickly without isolating them in detail. I realize I do this for the chance of buying time: having enough relevant issues we're at least remotely aware of takes away the nice fuzzy feeling of "all is well since we did not see any problems", but it turns us toward another nasty, unusual case of having lots of issues we're unable to fix because we can't yet reproduce them. And not having the repro steps is a way of dismissing the information, too.

I just realized, looking at the differences in how the business owner reacts, that we really need to talk about the difference between testing fast and testing slow. Changing the pattern here is justifiable, but quite confusing when done without a warning.

I'm not sure which one is worse:
  • releasing with hints of problems we cannot reproduce (yet)
  • releasing with a few more fixes (due to nicely isolated bug reports) but not knowing what the users will experience
It seems that with the time left, I can't have both.

Tuesday, December 4, 2012

Introducing Test-Fix-Finalize week monthly

About three months ago, being the only tester in my team of 8 developers who all finish their work at the last minute before release, I was thinking about possible solutions I'd suggest. I started discussions with a project manager and my personal manager with the idea of introducing agile planning and tracking remaining work so that testing would be included. For various reasons, right when we started talking about this I realized it was not about to go through: the project manager was not ready. So I changed my suggestion and went with "we need time in the schedule for testing to happen" and "one of me can't do all the checking & exploring in the last week alone, so the team needs to help".

The idea was formulated a bit more, and soon I found myself in front of a steering group allocating the whole team to "testing" for the last week of each increment, a week we decided to call Test-Fix-Finalize. It's very much a waterfallish approach to leave testing (some testing, not all of it) to the end, and it's really not what I see us aiming for. But with a team with no automated unit tests and a culture of "let the others do the testing", this seemed like a smart next move.

For the Test-Fix-Finalize week we identified that our team would be working on brain-engaged testing (sometimes pairing up), fixing whatever was found and needed to be fixed, and, with any time left over, developing the unit tests further.

We have now two of these Test-Fix-Finalize weeks behind us, and I feel puzzled.

On the first one of these, we tested and found nothing that needed to be fixed. Yet there have since been a few issues found in production that we missed. This doesn't surprise me, though - there's quite a lot of width and depth to what might need to be tested, and our best bets for choices did not pay off in the optimal way. Most of the team did not work on the release testing, though, and looking at the issues, the added investment wouldn't have been worth it either. Some of the developers worked on unit tests and the refactoring to make them possible, and others chose to do manual testing of a major refactoring that looked an awful lot like fixing and developing. One opted to help me with the release testing, and while I found something (yet nothing that was newly introduced), he found nothing at all - even though the area he works in cannot be touched without finding something.

I talked about the experience to realize three things:
  • The developers can get better at checking, but their style of manual testing may for a long time still be just confirming their designs - lots of paired work may be ahead, though some is already done. Having them do work they don't know how to do (and sometimes don't want to do) will not provide the results we seek.
  • It might be a better strategy to use all the weeks, including the final week, on test automation, since that approach wouldn't leak much more than the current one but might get better over time. We could take the risks while development is still as careful and as fast to fix reported issues as it is now.
  • The themes in this particular increment were such that there was not as much to test as there had been when I introduced this week. The cycles in how things come in had made the team focus on fixes and enhancements rather than major features.
Along came the second Test-Fix-Finalize week. In this cycle I had all the developers focus on unit tests, even the ones who eagerly wanted to do the kind of manual testing that looks like fixing. We talked, and I suggested they use the week as an opportunity to learn and try things out - quality over quantity.

This month we had significant new features coming in, to the extent that at the start of the month we were already discussing how we'd cope with the last week at all - only to learn that we could now, finally, set up the schedules and the order of work so that testing time was considered outside the last week. So, to my surprise, two of the three major change areas for the month were tested before the last week. This time, with fewer testing hands in the last week than before, we found two issues that needed addressing before release (out of all the ones that need addressing at some point). One of those issues was such that it alone was worth the effort. Then again, being a realist, I know it would have been caught by the next layer of our internal product management trying things out too - it was way too visible to miss.

I'm still processing the end results. There were significant individual differences in whether people could grasp unit testing, and a fair amount of disbelief that it will help us find the kinds of things that have ended up breaking during the time I've had the pleasure of being there. We're also noticing that introducing new developers (which we've needed to do) tips the scale in a different direction than the seasoned product-expert developers when it comes to the amount of unintended side effects.

One good thing about these experiments, though: I have results to show, and the project manager may finally be ready to bend towards planning the release together in a more agile style.

Tuesday, November 27, 2012

All It Takes is One Wrong Bug

I feel the urge to make a note on the other side of what I posted yesterday about the value of having bugs in the product - building a customer relationship through quick fixes while avoiding the time spent finding them.

First of all, the bugs I mention as samples in my "since April" timeframe are a small portion of all bugs logged, and an even smaller portion of all bugs fixed. These are a special category of things that, when found - by real end users, and important enough for them to find the hard route through their organization to complain to us about - will be addressed differently. We talk of them as branch fixes, basically just indicating we'll make single changes in two different versions separately to fix them.

I'm not happy with the amount of branch fixes. Our product managers are not happy with the amount of branch fixes. And most definitely, our developers are not happy with the amount of branch fixes. All I'm saying is that on the product management side, they did not actually see the value of these in building the customer relationship until they were asked about it. And there are other ways - though my theory is still that they may not be as effective - to get customers to answer when salespeople call, and to build a customer relationship on mutual successes.

Not all of our bugs are quick fixes. We've just been lucky that the ones customers have complained about (perhaps the ones they have understood well enough to complain about) have been that.

Just yesterday we were fixing something relevant to users (internal users complained about it) that wasn't so easy - a browser-specific issue where a piece of functionality didn't work when clicked one way vs. the other. There was a workaround, and not everyone would hit it, but it definitely wasn't an easy fix. I recognize from my team that if this were one of those branch fixes, they'd probably first hide it and take more time for the actual resolution.

In addition to intentionally mentioning only one category of bugs we deal with, I did not mention that the timeframe I'm looking at is incremental feature addition to an existing architecture. The sister project I also work with is in the early stages of building the frame, and many of the issues there take longer to resolve. The frame for this product has, with all its limitations, been in production for almost two years, and I imagine (and have read from the bug database) that the earlier phases of the product lifecycle were different.

Eventually, all it takes for a customer to not recover from finding a bug - even with it resolved quickly - is one bug that is relevant enough. Relevant enough to consider moving to competing products (assuming there was an option...), relevant enough to make a move where they seek compensation for their losses.

One bug could still bring down important customer relationships. So, for the bugs we'd want customers to find, I still argue that we'd like to be informed - through testing - that none of them is the business-killer type.

Monday, November 26, 2012

When Bugs Have Positive Business Value

There's a theory I'm working on analyzing in my current project: for the total customer value experience, it may be good not to find bugs ourselves but to let customers find them for us.

As anyone reading my postings should know, I'm a tester. I believe in finding and fixing relevant problems before release. I believe we shouldn't steal users' time with annoyances and broken features; when they buy the software to do something with it, they should actually get to do what they expect.

My current project is making me question my beliefs.

First of all, if we don't put so much effort into testing, we are able to produce more in quantity. The quality we produce isn't great; it doesn't live up to my expectations, and perhaps not to other people's expectations either. But it isn't that bad. You can usually do things with the software. It doesn't stop you right at the doorstep without letting you see in. There are scenarios that fail, but often there are also alternative scenarios for the same thing that don't.

Second, when customers find bugs and report them to us, we are quick to fix them. This is the core of my theory: there's actually a lot of value for the customer relationship in these little shared successes. A customer finds an issue, annoying enough that they call us. We take it seriously, and fix it quickly. The customer, comparing this experience with others where the issue is logged and may arrive some time in a hotfix delivered in 6 months, gets really happy with the fix delivery. And as an end result, the customer relationship is stronger, and it may even be that the call back to tell the customer the fix is available also includes expanding the use of features into another project / feature area - sales happening.

So far I've realized this approach is vulnerable, and it's really still only a play in my mind:
  • If we get too many fixes in a short timeframe, we wouldn't be able to keep up with the quick deliveries of fixes - but our quality, as limited as it may be, has not sunk to that level yet.
  • If the customer profile changes so that they'd end up contacting us on different issues on the same days, this would also ruin our ability to react quickly.
  • If the software delivery mechanism changes so that the servers are no longer quick and easy to update, that again would destroy it.
  • If the development team members change, it will eat away the quickness of fixing, as there's more analysis and learning to be done to do a successful fix.
I'm thinking right now that the work I do as the team's tester might actually, at the moment, decrease the value for the customer. While features may work better, they may work better in ways the users did not find relevant. At the very least, the testing I (and the team) do means that we deliver fewer features with the same amount of effort.

The bigger value of quality is in the work the team must do. It's not much fun to fix issues that come in later, having forgotten the details of the implementation by then. It's not fun that you can't make (and stick to) a plan for more than half a week, because you always need to be on alert to implement the quick fixes. The time spent on bugs is time away from implementing new features.

Quite an equation we have here. After this quick note, I need to actually spend time breaking it down into something I can investigate for this particular context.
 

Friday, November 23, 2012

Mention an issue as action

Some time ago, I learned the words mimeomorphic and polymorphic actions from Michael Bolton. The concept of actions that machines can mimic versus actions where you expect something different feels relevant to testing, so I went looking for more information and just got my copy of The Shape of Actions by Collins et al.

Just reading the first chapters on the concepts, I notice I'm already thinking about my own actions as the team's tester. I noticed a pattern in what I intentionally do that I hadn't paid much attention to before: the action of mentioning issues.

This team's developers are on site only on Mondays and Thursdays, so those are naturally the more communication- and problem-solving-oriented days. On this product, I have the habit of logging into Jira every morning as my first thing, to see what new has come in - from customers through product managers, or from the product managers themselves. I also read the comments people make on others' issues, with the intent of learning about the product.

Yesterday, two issues were particularly interesting in relation to this action of mentioning them.

The first one was a comment on an issue that could not be fixed since it did not reproduce and, as per the error message, was "external". Reading the comment I could not help but laugh out loud, as this was the first time I had seen something like this in this team's comments. The team is usually so strong on system ownership that this was out of the ordinary. As my amusement had already disrupted the peace, I could go right into mentioning what I had just thought of. In reaction, the developer next to me went to the code mentioned in the issue and came up with a solution we could do. By the end of the day, there was a solution to the "external" problem that had been easy for someone else - a newcomer to the team - to dismiss. What especially stuck with me was the nearby developer who suggested the actual fix saying, "we here have learned that the external issues will come back to us, so it's just better to handle them right away".

The second one was an issue noticed in production that had probably been there for a while. No owner was assigned to the issue automatically, as it was in an area with somewhat unclear ownership. So instead of waiting for those responsible for allocating the work to find someone appropriate, I just asked about the issue. I asked if we had made any changes that could trigger this problem. Two developers found it an interesting problem and, within the next hour, did some paired investigation of it, resulting in a fix.

I realized that whenever I chose something to mention, things like these would happen - fixes would be created. Mentioning too many would ruin it for all of us, but mentioning just the right ones helps make the product better and lets the team feel small successes. Yesterday's two mentions reached our product management too, and I was delighted to see an email this morning from the head of product management saying she "buys this fix", with an emphasis that something good had happened.

Thursday, November 15, 2012

Reminded: some things are easy with code

As part of my usual routines, I reviewed problems found recently in the versions that have been released to production. I noticed one in particular where it was evident that the problem was a recent introduction into something that used to work.

In particular, in the latest version we introduced a feature for our combo boxes to allow selecting all vs. none with one click. The change wasn't very large in development size, and as it was not considered very risky (it still isn't), I tested it by sampling some of the combo boxes, focusing on the ones the developer was less likely to check because they were not in his own area of responsibility.

The issue was found in the production version by one of our product managers - not by the customers, who don't seem to complain even about things more relevant than this. The issue was that one of the combo boxes in his area showed more items than the count indicated - the count was always one lower.

From seeing the bug report, it was quite easy to make the connection: we had just changed the combo boxes, and now one of them doesn't work as before.

I talked about that with two developers, one who changed the combo box and one whose area had the combo box that was causing the trouble. We looked at that piece of code, to notice that the call for the combo box was not implemented as instructed. A simple one-liner to fix.

My first reaction was to think about how we could have tested better, only to realize that this particular combo box was different. Right after that thought, I realized that nothing had told us this one was the only different one. So, already sitting next to the code, I asked whether the same problem existed elsewhere. With a few clicks, I was told there were 686 mentions of the method in the code, and got a comment to just give it five minutes and I could have my answer.

Five minutes later, I had learned that this was actually the only one left with that particular problem. There had been another one identified during the month, and fixed. But no one had asked whether the problem also existed elsewhere, so we failed to learn all the things we could and should have from a sample.
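I don't know exactly what the developer ran in his IDE to answer the question in those five minutes, but the idea generalizes. A minimal sketch of the same "ask the code, not the UI" move - with an invented helper name, an invented "correct" call form, and an assumed source-file suffix, since I'm not showing our real code - could look like this:

```python
# Hypothetical sketch: walk the source tree, find every call site of a shared
# combo-box helper, and print the ones that don't match the "as instructed"
# call form. The helper name, argument pattern, and file suffix are made up.
import re
from pathlib import Path

CALL = re.compile(r"FillComboBox\s*\((?P<args>[^)]*)\)")    # assumed helper name
EXPECTED = re.compile(r"includeSelectAll\s*:\s*true")        # assumed correct form

for source in Path("src").rglob("*.cs"):                     # assumed source layout
    for lineno, line in enumerate(source.read_text(errors="ignore").splitlines(), 1):
        match = CALL.search(line)
        if match and not EXPECTED.search(match.group("args")):
            print(f"{source}:{lineno}: {line.strip()}")
```

A listing like this doesn't replace testing; it just shrinks 686 call sites down to the few a person actually needs to look at.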

This reminded me of two lessons I seem to keep forgetting:
  • Some things you shouldn't test for through the software; there are just easier routes to the information you're looking for.
  • Encouraging people to address a family of problems instead of the mentioned problem is something that needs emphasis.
When I shared this story with a more distant colleague, the first reaction was that test automation could have caught this. I think not - we wouldn't have automated a check of all the 200+ combo box instances for this particular problem. Instead, we could easily ask whether there was something other than testing through the software that would help us understand whether it could fail. And a combo box that has always been a problem, because it's sensitive to exactly the right way of calling it, was a piece of information we would have been able to use, if we worked more as a team and less as individuals each handling their separate responsibility.

Friday, November 9, 2012

Finding problems before and during development

Recently, I've been thinking about specification by example. For the last two days, I've been thinking that with the data points I'm seeing at my work, it may be an investment not worth the effort we'd put in.

In the last two days, I've logged plenty of issues on a particular feature area not working as it should. It actually works as specified, or within the limits of how it's been specified, but just doesn't serve its purpose.

I realize I could have - if I had even looked at the thing earlier - told about some of the issues we'd be facing later on. Now that I tell of them, they're concrete behaviors (or the lack thereof) in the implemented system.

But what surprised me is that there seems to be very little rework - more extra work that just hasn't been done yet. And for now, it seems the risk of regression isn't that big a deal either; with this type of system, unit tests seem to do a decent job on their own.

I reserve my right to rethink this. But I just find it hard to digest that using more effort before development, rather than less during development, would be the way to go.

Thinking through what sticks with me from Rapid Testing Intensive

At the end of October I participated - as a usual student of testing - in a training I was really not sure what to expect of: Rapid Testing Intensive by James Bach in Helsinki. The course outline said we'd be testing something for two days and learning from this mini-project. Someone tweeting about it gave me the impression this would be RST with just one piece of software, seeing how it works in real projects, so I wasn't really sure what that meant. It could mean we focus on the bunch of heuristics and try those out in practice, or we focus on doing the testing and understanding the flow of it. I feel the latter was closer to what happened.

To sum up what I remember we were taught:
  • A recipe of testing actions to cook from: intake, survey, analysis, setup, deep coverage, closure - where any of these types of actions could be the focus of a session, there could be a mix of two, and the actions could take place in other orders rather than assuming there is a fixed order.
    Over the two days we tried out different types of activities: asking about the project, finding out what the product could be about by testing it, using help files in testing, making a test strategy to focus our testing with risks and specific testing activities we could do, working with generated data, and reporting our status and results.
  • Talking about testing: the difference between a professional (focused for the audience) and a comprehensive test report, and in particular the three levels of reporting testing - the status of the product (it has these bugs), what you did and didn't test, and why that testing was chosen and how we know whether it was good.
By the end of the course, I was more inside my head, thinking of how to structure the mess better, than engaged with the activities people might imagine they'd observe from the outside. I stopped to think about how I think and feel, and how my choices and somebody else's choices would differ. I realized that I personally dislike the concept of "emergency testing" of something I have no previous experience with. When the time is way too short for doing anything professional, I have a tendency to focus on playing for time - just finding something that would delay the release. And when I feel there's nothing, in this particular context, that would buy time, I notice I realize what I should have done only when it's too late - when we're already out of time.

We tested XMind, a released version. While the course is half make-believe of an emergency testing situation, I couldn't help thinking that this is what they have already released. Would any of the bugs we currently have in production actually have made a difference to timing - perhaps not. And if not, what's the rush? Remembering which parts of the context were imaginary for course purposes and which parts would actually be happening with that particular product and its release decision got me a little confused.

Since I did not want to miss out on my notes of what was said and what we were told, I spent a lot more of our testing sessions wandering elsewhere in my thoughts than actually testing the product as if it were a real project. That was my choice, taking the time for learning and digesting. I probably went to find out slightly different things than the others; my main point of curiosity was not how I would test better, but how others teach this.

A great course, all in all. So much like my own exploratory testing work course, except that personal preferences make us focus on quite different aspects. It was like comparing coaching styles with a true expert - without saying that out loud. The only thing I regret is not making a point of being there with my team's developers - they would have learned so much.

Thursday, November 8, 2012

Funny how my mind works

As I split my testing time between two projects / systems at work, it's fun to notice the differences between the projects. One of them is a new product, originally with very few features but, as usual, growing with time. It's not yet in production, but will be in about a month. Being there on time has been a great experience.

Since I started with the project early in its development, we talked about what information would be most relevant, and that guided what I reported. I could always use more time on this project, but the other project, with its already-in-production setting, also needed attention. So I'd just try to do the best I can with the time available. One of the things deemed not so relevant was a certain type of typo - in database contents. I learned to skip them without getting too annoyed.

Then, with the new features being added, in came a reporting feature that would take the contents and show them in a deliverable clearly intended for our customers. Previously the interfaces I had were for internal use, but this one changed the game.

I talked with the project manager about the typos, and after the first sentence he said they're not relevant. I smiled and told him that I find it funny how my mind works: it was natural to pass over them while they were visible only on internal interfaces, but now that my mind is set on the idea that we give THIS to our customers, I have the feeling they may think that typos, especially a large number of them, are sort of unprofessional. I could see from the project manager's face that something clicked, and he continued, not with a "but" but with an "and" - and it would be so much easier to fix those before going live, as there will be dependencies on stuff created once we get this out.

In just a few sentences we went from "not relevant" to "good idea". And really, my mind works in such a way that when I recognize the type of user, I notice different types of issues.

Wednesday, September 5, 2012

Leaving bugs is better for contractor business?

There's an ongoing discussion in Finland, in Finnish, about customer-contractor relations and how that relates to testing. Michael Bolton was curious about a blog post I wrote in Finnish, but instead of just translating it, I'll try to tell you the story that comprises that blog post, an interview in a Finnish IT magazine, and another Finnish blog post that started the discussion.

Within the testing field in Finland, the customer-contractor relationship has been somewhat of a hard topic for quite some time. It has a lot to do with the testing community, since we're a community that shares experiences between companies, and we clearly have two groups of overall needs in education: the needs of those who create software products and the needs of those who buy all development from outside / sell development work to the buyers. It has seemed, for quite some time, that the customer-contractor testers are unhappy and powerless in how they describe their challenges, and the product testers quite empowered.

The discussion started with an anonymized post in a Finnish blog, where a tester told the story of a project happening in the public sector. The main points were:
  • Customer organization decided to use a testing company to help with acceptance of some SAP-based system
  • Testing company found a lot of bugs, and project steering group (both contractor and customer) kicked out the testing company after 6 months.
  • While the testing company was present in the project, the other contractors (the ones developing) put effort into arguing that the bugs were not bugs as per what had been agreed with the customer - saying the testers could not test, tested against the process, tested the wrong things. There was shouting on the software contractors' part, and refusal to fix issues at the fixed price
  • The assumption from the project was that the development contractors had gotten used to billing bug fixes separately, making their profit there and in continued development, billed separately.
As a continuation of the story, I - being one of the local testing experts - got a phone call from an IT magazine asking about this story and for my comments on it. The article was called "Buggy system is a money-making machine for the contractor". The main points in the article were:
  • Testing people interviewed find that stories like this are typical. Getting bugs fixed on a fixed-price project takes a lot of effort.
  • It's better business for the contractors to get a lock in on the customer with the project and fix bugs in upcoming maintenance projects billed separately.
  • Not having enough time to test at the end means the problems are found late, at a time when they are billed separately. The typical guarantee period is 6 months from the end of the project, not from the production date.
  • The customer may need to test for days to find a critical issue that takes 0.5 hours to fix. Leaving testing to somebody else's wallet is tempting. Only fixing bugs is cheaper than testing the bugs out and fixing them.
  • There's not one guilty party, but these come from conflicting business models on customer-contractor side, which makes communication and agreeing hard.
  • As solutions, the suggestions were twofold: incremental delivery + testing, and a relational contract that would place the money for bonuses, changes and defects in the same pile, helping mitigate the adversarial relationship between the customer and the contractor.
So, against this background, I wrote my blog post.

Leaving bugs is better for contractor business?

Product development is different

My current work is product development. That means I test for an organization that has the customer and contractor parts on the same payroll. One of the systems I test has external end users; the other has even the end users on the same payroll. For the system with external users, it's clear from logs and reports that, given the freedom, users will not follow any of the paths we assume, but given a crowd, they create various scenarios of use we may not have originally intended. For the system with internal users, feedback is even more direct, seeing the colleagues face to face in the hallways.

This particular work is new to me; I started in April as the first and only tester in our software development. The system with external users has been in production use for some years now, and I would insult my team members by saying the system has not been tested. However, the results I'm providing indicate there is testing that has not been included, and points of view that have been left out without the extra investment (time & skill) in testing specifically.

My day-to-day project work is finding problems - bugs, as we say. A bug is anything that a user finds irritating, blocking or slowing. Bugs can be missing essential functionality we never specified. Bugs can be coding mistakes. They can be mistakes in the way we put our code and somebody else's code together to solve a problem that is ours to solve. They can be errors in how the user thinks she should use the software, in what order or at what pace. We're delightfully unanimous among my team's developers and project managers that it would be a complete waste of time to stop and argue whether the bug is in the requirements, the specifications or the code; we need to address all of them, within our limited resources, towards the benefits for the users and our business. We don't stop to define whether a bug is a change request or a defect, or whether it was created in this increment or outside some guarantee period. In my team, I'm a valuable member helping the team understand the types of problems we have, and finding problems earlier rather than later with respect to the segment of customers who find a particular issue a showstopper for anything they are willing to do with us.

Product development is not easy, since there's usually too little time invested in relation to the hopes and wishes that could come true. We're trying to understand the masses, and we have contacts with individuals. But all the challenges I face here are fun, as things go somewhere. So, all this as an introduction.

Customer - Contractor projects

Before joining this company, I spent a four-year period actively trying to understand customer-contractor projects and testing within them. I worked as an individual subcontractor for a contracting organization first, until I moved to take a day-to-day job within the same segment, pension insurance, in a customer organization. In the customer organization I was assigned to support acceptance testing and to define the testing we expected of the contracting organizations. These projects have some weird characteristics of context that make life harder for testing:
  • A lot of time and energy is wasted when the customer and the contractor compete in proving whether a problem that must be fixed is a defect or a change request. Defects, as per the contracts, are something the contractor fixes within the amount already due for the work done, while change requests are paid for separately by the customer. I find it amazing that you can contract for a pension calculation system that doesn't calculate pensions, but works "as specified". In this model, as the customer, you pay separately to get the system you intended, and the contractor is happy to leave all specification work and responsibility with the customer. Then again, specifying includes a risk, and some may argue that this risk has not been paid for - but the sum due would and should be higher if it were.
  • The customer's testing phase, "acceptance testing", is often the first effective testing providing the results needed to know whether the system will work for the intended purpose. Due to other delays, this phase often happens in an even tighter timeframe than planned. And the planned timeframe was for acceptance, not testing-fixing-testing cycles. In order to actually test for acceptance in acceptance testing, the full scope of testing should have been in place before this - full scope meaning both the same contents and the same quality, through the right skills. If the previous testing phases are paced by the point I made first, specification worship, we find at the end, just before the production date, that the system doesn't do what it needs to do, but does what it's been specified to do. Many things in software development become tangible only through use - and testing.
  • Defects may be included in the base price set in the contract, but contracts rarely take into account that testing done in a way where problems are clearly reported to enable fixes may not be. Asking for the testing you need requires skills. It's a myth, in my experience, that not tested means not working. I've experienced systems with no separate testers whose quality was better than many of those tested by "professional testers". Usually these cases have a background of software craftsmanship and entrepreneur spirit. Testing - by whoever does it with time and skills - teaches about surprising limitations, all of which will never be covered within development alone. So it is not enough to ask the contractor to test. You have to be able to explain and agree on what and how, and in what scope and quality. Low-quality testing is testing too; some people call it testing when you press the same buttons in the same order over and over again without any rationale that could explain why this is a good use of the limited budget. With the same use of time you could at least vary things, just a little, and make a difference in the potential for results. Also, many customers knowingly and deliberately buy their projects without testing, cutting the price by 20-30 %, agreeing that the customer will deal with testing - and all of a sudden acceptance testing starts with a non-working system that no one has looked at integrated.
  • Competition for contracts between contractors brings in interesting side phenomena. I find it really hard to make the offers comparable in the scope they contain, and contractors ride with their own models to bid for projects clearly under-priced, where the actual cash cow is the fixes and changes after the first delivery, billed separately. The first delivery is actively minimized during the project, knowing that changing contractors is not an easy task and continuation with the same one is likely. The contractor may not be able to raise the hourly rate, as many customers lock that in already in the first bid, but nothing limits how many hours it takes to do a task in the contractor organization. And hours go up easily when your model says you should have separate specialists for talking with the customer (requirements), solving the needs (specifications), designing how this can technically be done (architecture), how the change should be done in detail (technical design), how it is implemented (coding), how it's tested technically (integration testing), how it's tested as part of the system (system testing), and who talks with the customer about schedules (project management). And if the project is any bigger, we have teams of specialists, who need someone to direct the teams. Sometimes it just feels we're over-specializing, but it makes sure the number of hours continues to surprise the customers.
  • The atmosphere of fear and distrust costs the customer, who eventually pays, a lot. The attitude is that it is by no means good for the contracting organization to monitor its own work and, for example, be responsible for the test automation that would support future development (and when you bypass that in the first project, it creates a nice bunch of billable regression-testing hours for the future). Any relevant testing activities are expected to be done somewhere else, preferably by some "independent" party that is another contractor in it for the billable hours. When I worked in the past for a testing contractor as a test consultant, the sales organization sold several fellows to my more recent organization to create test automation to support acceptance testing. It never did more than give a warm fuzzy feeling; maintaining it cost a lot, as it broke all the time. It did not live very long. And having worked in this customer organization, I know the biggest reason for the failure was distance. With the same investment given to the organization already on the project, there would have been a much better chance of success. They might have actually used the automation, and it could have been part of the criteria of one delivery. I don't get the point of creating and paying for the same testing twice by two groups, with distrust as the rationale. There are other, cheaper ways to build trust, and lack of trust makes the overall project fail. You can build trust by agreeing on mechanisms in the contracts, but eventually it comes down to people and collaboration. Organizational distance between two organizations tends to require a contractual safety net for the tight spots where the business expectations may not match.
  • External testing organizations often make things worse. If two organizations have different business models and goals, a third one, focusing on testing, may not make it easier. If the tester shows her value with testing results while the development contractor loses money fixing problems without separate pay, and both try to optimize a significant role for themselves, I tend to see chaos and fighting.
After this long rant about problems, I wanted to mention there have also been some good experiences. In projects for my past employer, I had one project (I worked on 5 in 2.5 years up to production) where we did a professional 30-day acceptance test. Our testing team did all we could to find defects and change requests. We failed. The system went to production ahead of schedule and under the agreed budget. The key to this success was, I would claim, the excellent collaboration between different roles in the customer and contractor organizations. In the weekly testing meetings we used 30 minutes to openly discuss the risks and reveal specific fears of something being wrong. With that collaboration, the contractor, supported by the customer, did a complex change in an existing system; the contractor's testers tested it as part of the delivery and got things fixed by communicating with the developers on the technical system details. In the background, what made a difference was the setting of the project: the project was sold to us at a price that paid for the hours and met the margin goals, and allowed the contractor to focus on the essential. A sister project, which had double the scope, was sold at 2/3 of the price. The under-priced project required significantly more steering (fighting) effort, and the work atmosphere never reached the productive, good level the other one had. There was also a significant difference in the testing and negotiation skills of the named representatives on the contractor side.

A buggy system can be a money-making machine and be financially better for the contractor's business, as long as this is the way of the trade for all large contractors. I find that the methods the large contractors use optimize for this. But the customers are to blame too. What if customers settled for time-and-materials contracts, accepted that you can't allocate the risk of your software to some other organization without a significant added cost, and placed more checkpoints - actual incremental deliveries - more often in the calendar? This would still allow the customer and the contractor to agree on a target price, and on reward/risk money that would be used for the flexibility to get the right system. Sounds to me a lot like a suggestion towards agile methods - especially when put together with the pain of too many people and too much specialization used to avoid the actual responsibilities.

Wednesday, August 8, 2012

Not testing? Not a tester?

I met up with a few colleagues last night, and after we had the official business mostly taken care of, we chatted briefly about what's going on at work. We don't work in the same companies, so the stuff we do tends to vary a lot.

I told them I'm working - in addition to learning the product and finding problems on the side (= testing) - on specifications, namely specification-by-example-style specifications. It just happens that for a product with a lifecycle, it's not a particularly good idea to NOT have a specification, and I refuse to write separate test cases since I've kind of bought into the idea of living specifications. As this is work done for features that already exist, I write the spec based on what I learn when testing, and hold workshops with the product managers to check whether they're balanced examples rather than tests that cover everything.
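To make the "balanced examples rather than tests that cover everything" idea concrete, here's a minimal, domain-free sketch of what a living specification can look like when a handful of examples drives an automated check. The discount rule and the names below are invented purely to show the shape; the real examples come out of the workshops with the product managers:

```python
# Hypothetical sketch of specification by example: a few balanced, readable
# examples kept as data and checked by one automated test, so the examples
# stay a living specification rather than a separate test-case document.
import pytest

def discounted_price(price: float, customer_type: str) -> float:
    """Invented rule under specification - stands in for the real feature."""
    rates = {"new": 0.00, "regular": 0.05, "gold": 0.10}
    return round(price * (1 - rates[customer_type]), 2)

EXAMPLES = [  # each row is one example the product managers can read and challenge
    (100.00, "new", 100.00),
    (100.00, "regular", 95.00),
    (100.00, "gold", 90.00),
]

@pytest.mark.parametrize("price,customer_type,expected", EXAMPLES)
def test_examples_act_as_living_specification(price, customer_type, expected):
    assert discounted_price(price, customer_type) == expected
```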

One of the colleagues pointed out that what I'm doing isn't testing - I'm going into someone else's territory. In the next sentence he pointed out that what I'm actually doing is like creating test cases for someone else to execute, since I create most of the under-the-hood automation with the coding work done by my team's developers.

About a year ago, another one of my colleagues whom I also met yesterday told me I'm no longer a 'normal tester'. Apparently that means I've earned my place and get invited to board meetings to help people understand the information testing provides. And they actually listen.

I still think I'm a tester - at heart. I've put loads of hours into understanding what I could do, what I could have others do, and how to get just a little better at explaining what I'm doing and why. It's funny how upset I get when I hear that what I do is not 'normal' or that I'm not a tester. It just shows how important the craft is to me personally.

Tuesday, July 31, 2012

Not that difficult - or is it?

I tested a little new feature today. It did not take many hours to implement, but it ended up taking more time to test. The feature itself seemed to work nicely - in isolation, where it will never be used. Investigating the feature in use within the product revealed that while it worked, other, more relevant things didn't after you had used this one. All in all, a galore of bugs in just about every aspect of the product context I could vary. Not only for the developer of this feature, but for the others - as this little addition brought out weaknesses in the existing implementation that had seemed fine without it.

There wasn't anything difficult in what I did today. I'm still half-headed from a long vacation and wanted to start off easy. Which leaves me wondering what we're missing, to miss out on all the relevant bugs that just take a bit of patience to look for.

The testing I did today did not require much testing skill, but it required patience. It required me to try things that are somewhat similar but still different in some dimensions. I need to work on teaching patience to my fellow developers. I find motivating that somewhat hard, as it seems our business has survived long enough without it - so far, you could always fix the problem within a day when someone complained about it. The hard part is finding out that the problems are there, and leaving the hard parts for others - product managers, customers or maybe a tester - feels so tempting.

Saturday, May 19, 2012

An evening eye-opener

For Saturday fun, I'm reading Gojko Adzic's book Specification by Example. Not for the first time, but I thought this time I'd try reading it in the order intended, instead of the usual jumping to the most relevant parts first.

At the very beginning of the book, describing the key benefits of Specification by Example, there's a quote. Paraphrasing: in other organizations, at the end, testers find something wrong with the product and developers need to keep working on it; in ours they don't. The issues the team has are about making an (automated) test go green, not about later feedback surprises.

I got stuck on this particular out-of-context quote, namely because it, unintentionally, resonates with something I'm experiencing. Not specification by example and hitting the right target, but being in an organization where the issues used to be "tests not becoming green" and now there's definitely more churn - going back and forth.

To illustrate my story, I'm sharing a picture from our Jira that I've been looking at recently, trying to figure out what to do next.

Something happened a while back. Something that makes our software development look a lot less effective. They got their first tester who focuses on testing. They had, at some point before, a tester who focused on test automation - and, with that, the wrong kind of test automation - so much of what I'm showing comes as a surprise.

It's not that you couldn't use the product. Apparently you can, since there are customers using it. And apparently they can't be very unhappy, or they would complain in ways our people would understand. I'm not quite sure what's happening there.

But I know what is happening testing-wise. I started testing and logging my issues a bit before our latest release. And the stuff I'm finding that needs addressing is more than the fixing / development work of our whole team. The problems are not about missing requirements per se, but about surprising side effects of two supposedly separate things. They are problems of shortsightedness about things that can go wrong, and of believing you'd be luckier than the others. And not knowing hides it all.

Here I had an organization that could have had no exploratory testing and no churn - but no real information either. Having e.g. me work on anything like specification by example & test automation now would seem like the wrong choice, as they first need quick, eyes-open, effectively reported testing. But I still kind of hope to get to that too, to turn the trend around for the better.

Monday, April 16, 2012

Why would we improve the way we work?

I had a short but inspiring discussion at work today with some of my team's developers. Since it's apparent that someone, for some reason, wants things to improve, they were asking for set goals and a better understanding of where (and why) we're heading. Apparent because they just recruited me, but still not so clear since, while they may now invest a little in a "tester", they still haven't invested visibly in fixing.

I gave a short answer as I see it: there's a tradeoff in the way they develop. When we deliver "more short-term business value", we take on "quality/technical debt". We're approaching a point where the debt and the interest on the debt are threatening our ability to deliver features, and we are unwilling to scale the team size but look for other solutions. Our challenge is not about us wanting to do things in ways that make sense, but about the effort and timeframe we need to invest to get where we want to be.

Another developer, who wasn't part of the discussion as it started, suggested from his cubicle that this could actually be what we clarify and discuss with the product management team. I haven't tested that yet - I will soon. This is a very simple and basic approach, but today reminded me how many times it has been effective. And most likely, when talking about this, I will learn more layers of what we actually find relevant for our product, and become a better tester for it.
 

Friday, April 13, 2012

If it looks too easy to be missed by developers...

In my first weeks at the new job, I've had the pleasure of reporting bugs again. I find this particular result of testing gives me a feeling of achievement. The more relevant the problem, the better.

There was one bug I reported on Monday that just looked too easy to have been missed by the developers in my team. As I originally reported it, the problem was that when logging in with one of our three main browsers, there's a highly visible error message. And this seemed to happen only with the recent builds, not in the production version.

At the end of the week, I quickly asked the developer whose component was failing, in passing, whether a fix would be available in the next weekly build. He seemed puzzled: what problem, what fix? I checked our Jira, and the issue had not been addressed - which is quite normal. He took a quick look at it and came back with "I haven't changed this in ages", along with some details.

I started testing the issue further with the information from him. With fresh eyes, I realized I had entered the program from a bookmarked link - something I hadn't mentioned in my original report. I also realized that I had different addresses bookmarked in the other browsers. So I had missed a relevant bit of info, which I now provided.

Bottom line: if it looks too easy to have been missed by the developers, it may be that they didn't test - but in this case, I had missed relevant factors needed to make the bug visible. Talking to the very busy developers sooner rather than later is still a good idea.

Wednesday, April 11, 2012

New product, new team, new practices

For a bit over a week now, I've been wondering where I ended up in my quest for hands-on testing work. With hard choices along the way, I'm now working for Granlund, a civil engineering company, on a product that handles building-related data. The domain is something I had little idea about before, and I'm looking forward to learning a lot about it, in addition to tuning and changing whatever I can with my testing skills. We have a small team of fewer than 10 people, and I'm the first and only tester. Most of my colleagues in development seem to work remotely, but within a week I've had the chance to learn they're just as much fun to work with as I expected.

I'm starting off with a redesigned version of a product that has been around for quite a while. The redesigned version is also out in production, with new versions going out once a month. With customers actually paying for the product, they must be doing something right, even if they've never had testers around.

After reading up on what the product is about with a shallow scan of its documentation, I've worked on:
  • Setting up session-based test management with Rapid Reporter and a CSV-note scanning tool to show the metrics I will create - as I won't be creating test case counts (see the sketch after this list)
  • Learning the product's capabilities (and quality) by doing exploratory testing on its main feature areas
  • Reviewing the existing test suites and redesigning the test documentation
  • Redesigning a consultant-suggested testing methodology that I just can't believe would provide added value (unless faking testing counts as value to someone I have not yet met there)
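For the first bullet, here is a minimal sketch of the kind of CSV-note scanning I mean - assuming each Rapid Reporter session is saved as a CSV of roughly "timestamp, note type, note text" rows; the folder name and column layout are assumptions to adjust to whatever the export actually contains:

```python
# Hypothetical sketch: tally note types (BUG, ISSUE, QUESTION, ...) per session
# and overall, to report session-based metrics instead of test case counts.
import csv
from collections import Counter
from pathlib import Path

totals = Counter()
for session in sorted(Path("sessions").glob("*.csv")):   # assumed notes folder
    per_session = Counter()
    with session.open(newline="", encoding="utf-8") as notes:
        for row in csv.reader(notes):
            if len(row) >= 2:                             # timestamp, type, text...
                per_session[row[1].strip().upper()] += 1
    print(session.name, dict(per_session))
    totals.update(per_session)

print("All sessions:", dict(totals))
```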
There are two strong first impressions:
  1. I've got a likely case of "asking for testing, when you should ask for fixing" ahead of me
    I find it somewhat funny that so many people equate testing (quality-related information service) with fixing (making the quality right) and don't think of the dynamics that the added information will bring in. Then again, understanding the business impact and meaning of the existing technically oriented issues is a service I think I can help with. 
  2. As there aren't enough rational testing examples around, it's easy to take a basic book on what a test case is and try replicating it without thinking enough
    I've enjoyed reading the attempts to design tests in per-feature-area test suites of varying sizes, all with step-by-step test cases repeating most of the steps again and again. I took one of these documents, 39 pages with 46 documented test cases, and read it through in detail to make a mindmap of the mentioned features (I do need a feature list to support my testing). While reading and using the product to learn it in practice (a couple of 1.5-hour sessions), I came up with a one-page mindmap mentioning 88 things to test and four dimensions that cause repetition across a significant amount of the testing that should happen - different browsers, user rights, and such; a small sketch of how these dimensions multiply follows below. Out of the 39 pages, only 3 things came up that I could not directly deduce from the user interface with little knowledge of the actual product. While doing this, I marked down some (quite many) issues I would write bug reports on - if it weren't the area we're about to rework in a significant manner right about now.
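The sketch of that multiplication - the feature items and the dimension values below are invented, only the arithmetic matters:

```python
# Illustrative only: a one-page list of test ideas crossed with a few
# cross-cutting dimensions (browsers, user rights) multiplies into far more
# combinations than any step-by-step document could enumerate - which is why
# the mindmap lists the ideas once and treats the dimensions as things to
# sample across rather than script exhaustively.
from itertools import product

features = [f"feature {i}" for i in range(1, 89)]    # the ~88 mindmap items
browsers = ["IE", "Firefox", "Chrome"]               # assumed browser set
user_rights = ["admin", "editor", "read-only"]       # assumed rights levels

combinations = list(product(features, browsers, user_rights))
print(len(combinations))   # 88 * 3 * 3 = 792 combinations to sample from
```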
Looking forward to all this - and to the chances it provides for writing and for providing examples of what is doable for "just a tester".

Sunday, February 19, 2012

Looking for a new job

I feel excited and puzzled - I'm actively looking for a new job. I've spent the last 2.5 years at Ilmarinen, and greatly enjoyed the lessons learned there. With my projects and teams, we've succeeded in getting projects where the contractor does their share of the quality-related work and the acceptance testing phase is actually just that: testing, not fixing. We have great business specialists in acceptance testing; they've learned something relevant about testing with me, and work with contractors is less cumbersome than before.

So why am I changing: because I watch our contractors get to do the stuff I'd want to do with the limited time I have for work. They actually create the quality; their testers enhance the productivity of the teams they work in. They do testing to find problems so that customers like us don't have to. And I want to be doing that: working close to development, actively making a difference in quality and productivity for the company I work for and for the end users of the products our teams are creating.

So I'm looking for a senior testing specialist position that would enable me to do hands-on testing work. I'd love to take a spin at test automation from the point of view of extending exploratory (not manual, but brain-engaged!) testing - doing more, and using ideas that are not possible without tools. With test automation, however, I know I'd be stronger with a team where I could provide practical ideas of what we'd test, instead of trying to do it all by myself. The reason is that I have developed a practical way of developing and grouping ideas, and focusing on automation alone may prove difficult, as the list of stuff that would need to be covered may take more time than first assumed.

My ideal position would be:
  • working with a system or product that has relevance: either successful business-wise or serving a relevant purpose, like at Ilmarinen: no pension gets paid without the success of the chain of systems that makes it happen
  • working with a development organization: either a product company or a company providing software development projects on contract. Essentially, working with the people who create the software.
  • enabling me to do hands-on exploratory testing and to develop exploratory testing related practices on a smaller and larger scale.
I'm considering both work in Finland and relocating somewhere else in the world. But, since life is short and I really want to do more hands-on testing work than I can in my current position, I would prefer to move quickly. I can work either as a self-employed contractor or as an employee. If you happen to know anyone I should talk to, give me a hint.

A tester at heart, always learning - that's what testing is about. So I think I've gotten pretty good at learning to be useful quickly.