Thursday, June 16, 2016

Insensible test automation comparisons

At an open space conference, one session was about playing with combinatorial testing. I missed the session, but heard about it in the hallways. The message was that they found a problem (a failure mode) somewhere in the thousands of tests, and kept driving the numbers up to create an impressive count of different tests, all in this hour-long session where random people just got together on a problem.

Finding the failure mode was cool in my book. The number of tests run, as impressive as it may be, wasn't.
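
For scale, here is a hypothetical sketch of why a raw test count is so easy to inflate (none of these parameters come from the actual session; they are just made-up stand-ins):

    from itertools import product

    # Hypothetical inputs: a handful of parameters, a few values each.
    browsers = ["chrome", "firefox", "safari", "edge"]
    locales = ["en", "fi", "de", "sv", "fr"]
    roles = ["admin", "editor", "viewer"]
    payments = ["card", "invoice", "paypal", "wallet"]
    quantities = [0, 1, 2, 99, 100, -1]

    # Counts multiply: 4 * 5 * 3 * 4 * 6 = 1440 "different tests",
    # generated in milliseconds, saying nothing about their value.
    combinations = list(product(browsers, locales, roles, payments, quantities))
    print(len(combinations))  # 1440

Add one more parameter and the count jumps again. The number measures the size of the cross product, not the testing.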

I started thinking back to this when I saw a tweet from #BTDConf quoting a slide: "It takes at least three times the effort to automate a manual test."

Just as the thousands of tests were irrelevant information, this isn't much more helpful. I've seen again and again that when there is a nice seam (an API designed with testability in mind), it can be faster to automate a test than to run a similar idea manually. Then again, what I run manually is never exactly the same twice. Creating the seam slows things down, but adding more similar tests often evens out the investment quite radically.
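To make the seam point concrete, here's a minimal sketch, assuming a made-up discount rule exposed as a plain function (the names and values are mine, not from any real project). The first test pays for the seam; every similar test after it costs about one line:

    import pytest

    # The seam: logic reachable as a plain function instead of through a UI.
    def apply_discount(price: float, percent: float) -> float:
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # Adding another similar test = adding one tuple to this list.
    @pytest.mark.parametrize("price, percent, expected", [
        (100.0, 0, 100.0),
        (100.0, 10, 90.0),
        (200.0, 25, 150.0),
        (100.0, 100, 0.0),
    ])
    def test_apply_discount(price, percent, expected):
        assert apply_discount(price, percent) == expected

This is exactly where a "three times the effort" rule breaks down: the cost is a one-time investment plus a near-zero marginal cost per test, not a fixed multiplier.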

It just makes little sense to me to compare stuff on a very general level.

Could people just share some very specific examples, instead of attempting to generalize (and scaring us with how hard it is)?

And another thing from my experience: a task that is expensive when I do it alone turns out to be very cheap when I pair with a great developer who has the specific experience.

Yes, learning is expensive. So let's get cracking on it. All I need is for everyone to be just a little better every day. On something. Choose something you like, something that challenges you. And keep trying when it's hard. You are not alone.