Monday, 23 October 2017

Which Test Cases to Automate

I've decided to blog a bit about my day job, well, my career and passion really: software testing, but the kind that makes sense, automated regression. But first and foremost, a warning: when a computer validates some software, that's not called testing at all; it's called checking. See here for a bit more on why.

With that out of the way, let's move on. Automating anything is expensive, and automating a test case is not a silver bullet: a computer cannot validate things like a UI layout making sense, or even whether an API function call rejects a parameter value sensibly, for that matter. Sure, we have frameworks that test for UI changes in web pages, in WPF, and in other forms-based applications. But controls move, and we all know a UI changes cosmetically very often; every such change incurs maintenance cost. Tools exist to reduce that cost, but they only do so in very specific ways.

Likewise for automation or test tooling against public APIs. "Apps" on the internet are the main case here, but this also applies to locally consumed APIs, not just the web, and to all the new kinds of web services available today; it's a crazy farm, almost. Just about everyone has a web service these days, or at least an API, and since that surface is the really high-value customer interface, it's both the best place to test and the best place to automate. A test script using an API cannot just validate the assertions in your test spec; it takes some skill to write that script so it is maintainable and scales well. Without going into design stasis, arguments about specs, or test frameworks, let's be really Agile: do the parts of the job we can get the most early value from, first and foremost.
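To make the maintainability point concrete, here is a minimal Python sketch of what a small, readable API check can look like. The service call is faked with a local function so the example is self-contained; `fake_get` and the `/users/42` endpoint are made up for illustration, and in a real suite you would swap `fake_get` for an actual HTTP client call.

```python
# A minimal sketch of maintainable API checks. The service is faked with a
# local function so the example runs on its own; swap fake_get for a real
# HTTP client when pointing at an actual service.

def fake_get(path, params=None):
    """Stand-in for a real HTTP GET against the service under test."""
    if path == "/users/42":
        return {"status": 200, "body": {"id": 42, "name": "Ada"}}
    return {"status": 404, "body": {"error": "not found"}}

def check_user_lookup():
    """One named check: small, reusable, and readable in a failure report."""
    resp = fake_get("/users/42")
    assert resp["status"] == 200, "expected a successful lookup"
    assert resp["body"]["id"] == 42, "API returned the wrong user"
    return resp

def check_missing_user():
    """A missing user must be rejected, not silently succeed."""
    resp = fake_get("/users/9999")
    assert resp["status"] == 404, "missing users should be rejected"
    return resp

if __name__ == "__main__":
    check_user_lookup()
    check_missing_user()
    print("all checks passed")
```

The point is not the framework; it is that each check is a small named function with its own assertion message, so the script stays readable as the suite grows.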

But which test cases do you convert from that huge batch of analysis you did when you looked at this from a manual test perspective last year? The boss wants you to automate all of the things. Now.
Based on my inputs and the way we started doing this, one of the grads came up with a way of using Excel. I'll link the sheet below, but first I'll explain what all of the columns mean.
Test ID: A unique test identifier (optional). A pretty basic thing, but very much dependent on your test management tool choices.
Name: A short description of the test; shorter is better.
Easy to script: How easy it is to write a test script for this case. Be pragmatic in your estimate: give it a 5 if it's trivial, and a 0 (zero) if it's impossible.
Manual time cost: The amount of interactive human time it takes to run this test manually. If the test suite needs a lot of setup, count that time separately; only count the time for this test. If your tests tend to take a few seconds rather than hours, adjust the scale accordingly so that quick tests get a 1 and painfully long tests get a 5. Also, do not count waiting time; if a file copy takes 30 minutes, that time is not counted.
Data/Table driven: Is this a test that really benefits from boundary testing, with multiple simple input/output classes that take time to cover fully when done manually? If yes, give it a 5; if not really, give it a 1.
Cost of Failure: Will it hurt the product and create risk if this test fails? If yes, give it a 5; if not really, give it a 1.
Likely to regress: Is this case likely to catch useful regressions caused by code or integration changes? If yes, give it a 5; if not really, give it a 1.
ReUse: Will automating this test now help you write other tests by providing a library of helper functions that grows your coverage over time? If yes, give it a 5; if not really, give it a 1.
Priority: Does this verify anything a manual tester would have to do anyway before most other testing commences? Do testers exercise this functionality very often? In other words, is this a test that has to run early in the cycle? If yes, give it a 5; if not really, give it a 1.
Some things to notice here: I use a very low-resolution scale of 1-5. Higher-granularity scales actually take much longer on average for a user to settle on a value, and let's face it, this is a game of educated guesses or estimates anyway. It's best if two people do this process together. Some of the columns only benefit from entering a 1 or a 5 and nothing in between; some of the columns even allow a 0. As a tester, I'd expect you to try entering a 0 in cases where it makes sense; for those, the sheet will still give you a good answer. You can then just add these all up, or give each question a weighting. In another posting, I'll share the spreadsheet template and the weightings as well.
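The "add these all up, or give each question a weighting" step is just a weighted sum. Here is a small Python sketch of that calculation; the column keys mirror the columns above, but the weight values and the sample test case are illustrative guesses, not the real spreadsheet's weightings.

```python
# Sketch of the scoring scheme: each column gets a 0-5 value, each column
# has a weight, and the weighted sum ranks the test case for automation.
# The weights below are illustrative only, not the real sheet's values.

WEIGHTS = {
    "easy_to_script": 2,
    "manual_time_cost": 2,
    "data_driven": 1,
    "cost_of_failure": 3,
    "likely_to_regress": 3,
    "reuse": 1,
    "priority": 2,
}

def automation_score(scores):
    """Weighted sum of the 0-5 column scores for one test case."""
    return sum(WEIGHTS[col] * value for col, value in scores.items())

# A made-up candidate: a login regression test that is easy to script,
# hurts badly when it fails, and must run early in every cycle.
login_test = {
    "easy_to_script": 5,
    "manual_time_cost": 4,
    "data_driven": 1,
    "cost_of_failure": 5,
    "likely_to_regress": 5,
    "reuse": 5,
    "priority": 5,
}

print(automation_score(login_test))  # prints 64
```

Sorting the whole sheet by this score gives you the order in which to automate: highest totals first.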

This obviously only works well after you have captured half a dozen cases and can look at them in relation to each other. Remember, don't spend too much time on each row; this is a planning activity only.
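The "Data/Table driven" column above rewards exactly the kind of case where one scripted loop covers a table of boundary values that would be tedious to step through manually. A quick illustrative sketch; `validate_age` is a made-up function standing in for whatever the product actually exposes.

```python
# Illustrative table-driven check: one loop covers the boundary values a
# manual tester would have to enter one by one. validate_age is a made-up
# stand-in for the real function under test.

def validate_age(age):
    """Accept ages 0-130 inclusive; reject everything else."""
    return 0 <= age <= 130

CASES = [
    (-1, False),   # just below the lower boundary
    (0, True),     # lower boundary
    (130, True),   # upper boundary
    (131, False),  # just above the upper boundary
]

def run_table():
    """Run every row of the table; return how many cases were checked."""
    for age, expected in CASES:
        assert validate_age(age) == expected, f"failed for age={age}"
    return len(CASES)

if __name__ == "__main__":
    print(f"{run_table()} cases passed")
```

Adding a new boundary case is one new row in the table, which is why these cases score so well on the sheet.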
