Cambridge Lean Coffee | Towers Watson

After not seeing the crowd of happy testers over the hectic Xmas break, a trip to Sawston was a welcome way to kick-start 2016 with a drive-by to the south of Cambridge.
The "Testing" started when I got picked up on my visitor registration badge right after arriving, because I had dated it 2012. Which was a good thing: if I had dated it 2015, I would have been investigating an "off-by-one" defect.

The "checking vs Testing" did not stop there, but let's crack on.
We covered the topics below, which I paraphrased badly in my hurry to fit them onto the well-scoped but limited surface area of a post-it. We formed two groups, so these notes are from Chris Georges' table only.

Why pay to have a tester?

Or rather, at which point do we need a tester? Some companies test in the traditional way - they have automated unit tests and things are just fine for them right now. Some teams, if small enough, will get by just fine for a while. But without the specialist skills a specialist tester brings, all you have is someone who knows how to check stuff and how to write stuff. A professional tester is an integral part of the team, will be involved with requirements and design review, and will be able to get the right level of detail into a test plan. You do have a test plan, right?
A professional will have the bandwidth to execute all the testing in the background while the developer is busy trying to fix a long list of bugs two days before the release deadline. This might also be called shielding your developers - something your support team might already be doing right now.
Did I mention that testing does not actually stop after the ship party? Having a person on your team who knows that testing is not 100% about running test cases, but is also about helping you judge risk, is invaluable. A dedicated tester lets you get the right level of detail in your QA, because it brings a different perspective.
A good tester is an important part of a team - like a cog in a clock, it's important to make sure it is unique and just the right size for the job.

How do I automate legacy code testing?

It's really hard to do, and I can offer some tips on how to do this using clever instrumentation in ways that do not require code changes all over the place. But the question elicited these responses:
1. Prioritize your testing: P1 = urgent, P2 = less urgent, and so on. This lends structure to what you are doing as well.
2. Be methodical - look at the test script (you have a script, right?) and analyze it for high-probability blockers, and try to run the things that can block as early as possible. This pushes blockers to the developers early and buys DEV more time to resolve a blocker while you test down a different path.
3. Do session-based testing. This will let you work through a weak test plan, and by logging your sessions you will improve future test iteration estimates, and thus be able to time the testing to fit a release closedown. It will also show you which sessions, and thus which features, most need testing, based on how many bugs you recorded in a session. Excel is a great tool for recording - see the sketch after this list.
4. Traceability - this will come out of the steps above.
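
On the recording point in item 3: here is a minimal sketch of a session log kept as a flat CSV file instead of a spreadsheet. The column layout, the session_log.csv filename, and the charters in the example are my own assumptions for illustration, not anything prescribed at the table.

    import csv
    from collections import defaultdict

    LOG_FILE = "session_log.csv"  # assumed filename - any flat file or Excel sheet does the same job

    def record_session(charter, minutes, bugs_found, notes=""):
        """Append one test session: what was explored, for how long, and how many bugs it turned up."""
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow([charter, minutes, bugs_found, notes])

    def bugs_per_charter():
        """Total bugs per charter - the areas with the most recorded bugs most need further testing."""
        totals = defaultdict(int)
        with open(LOG_FILE, newline="") as f:
            for charter, _minutes, bugs, _notes in csv.reader(f):
                totals[charter] += int(bugs)
        return dict(totals)

    # Example usage with made-up charters
    record_session("Report export - legacy PDF path", 60, 3, "two layout bugs, one crash")
    record_session("User import from CSV", 45, 0)
    print(bugs_per_charter())

The logged durations also give you real numbers to base the next iteration's estimates on.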

Ultimately, a deep understanding of which features depend on which components of the product will help you estimate which areas do not need more re-testing, simply because they carry lower risk. Risk is driven almost entirely by code churn, so components with little change tend to break less - and when they do break, it tends to be because of interface or environment effects.
My tip on how to avoid re-testing legacy code is to catalog how the environment impacts the features in the product. If environment plays a big part, study the impact and adjust your plans accordingly.

Specific to automation, instrumentation is an avenue worth exploring as a way to automate testing of legacy code without touching the code paths. Maybe I'll write something longer on this in future.
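
We didn't go into the detail at the table, but to give a flavour of what I mean by instrumentation: you can wrap a legacy entry point at the module boundary and log every call, without editing the legacy source at all. The module and function names below (legacy_pricing.calculate_premium) are hypothetical stand-ins, not anything from a real product.

    import functools
    import logging

    import legacy_pricing  # hypothetical module standing in for the legacy code you cannot change

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("instrumentation")

    def instrument(func):
        """Wrap a legacy function so every call and result is logged - no edits to the legacy source."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            log.info("%s(%r, %r) -> %r", func.__name__, args, kwargs, result)
            return result
        return wrapper

    # Patch the wrapper in at the boundary; the captured calls can later be replayed as characterization checks.
    legacy_pricing.calculate_premium = instrument(legacy_pricing.calculate_premium)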

Which GUI tests should I automate?

Since this is a very common automation question, and the dangers are understood, I'll talk a bit more about ROI.
1. Pare it back - take a good look at what not to automate by identifying the high-priority coverage areas.
2. What things are hard to test manually? Automate those first. What tests deliver the most value if automated? Things like product install/deploy or launch can be easy to automate and will unblock your product development (CI system) quickly - see the sketch after this list.
3. Talk about testing earlier - by forcing devs to think about testing (manual and automated), you involve dev early and get them to think more like testers. Making the end application easier to test also makes end users' lives easier in many cases.
4. Don't automate unit tests - basically, system test: test the behavior, not the code!
5. Don't burn out your testers! Getting testers to run manual tests all day will drive them a bit nuts; identify the tests that drive them nuts and try to automate those.
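
To make point 2 concrete, here is a rough sketch of an install-and-launch smoke check a CI system could run first. The installer path, flags, and timeouts are invented placeholders - substitute whatever your product actually uses.

    import subprocess
    import sys

    # Placeholder commands - swap in your real installer and executable.
    INSTALLER = ["./product-installer.sh", "--silent"]
    APP = ["./bin/product", "--version"]

    def run(cmd, timeout=300):
        """Run a command and fail the build loudly if it exits non-zero."""
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        if result.returncode != 0:
            print(f"SMOKE FAIL: {' '.join(cmd)}\n{result.stderr}", file=sys.stderr)
            sys.exit(1)
        return result.stdout

    if __name__ == "__main__":
        run(INSTALLER)               # does the product even install?
        print(run(APP, timeout=60))  # does it launch and report a version?
        print("Smoke test passed")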
I used a score-sheet (Excel to the rescue again) to decide when to automate. It looks like this:

[Screenshot: the Excel score-sheet, with three dummy test cases scored against the automation criteria]
We have three dummy cases here. Each test case, or "TCD", will have a script (paper or electronic).


Score each question (criterion) from 1 to 5. Anything that scores less than 5 overall is just never automatable, anything scoring over 50 might be, and so on - you get the idea. This screenshot omits the "weighting" applied and a few other "gating" criteria which link in with the PDLC used where I work at the moment (Citrix Ltd.), but you get enough of the picture.
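
As a rough illustration of how a sheet like that boils down to a formula - the criteria names, weights, and example test case below are invented for the sketch and are not the real sheet's values:

    # Invented criteria and weights - the real sheet's weighting and gating rules are not reproduced here.
    CRITERIA_WEIGHTS = {
        "run_frequency": 4,      # how often the test is executed per release
        "manual_effort": 3,      # how slow or painful it is to run by hand
        "feature_stability": 3,  # how little the feature under test changes
        "tool_support": 2,       # how easily existing tooling can drive it
    }

    def automation_score(scores):
        """Weighted sum of the 1-5 scores for one test case; higher means a better automation candidate."""
        return sum(CRITERIA_WEIGHTS[name] * value for name, value in scores.items())

    def verdict(total, automate_above=50, never_below=5):
        # Thresholds echo the "over 50 / under 5" rule of thumb above, but are illustrative only.
        if total < never_below:
            return "never automate"
        if total > automate_above:
            return "good automation candidate"
        return "maybe - a judgement call"

    # One dummy test case, scored 1-5 against each criterion.
    tc1 = {"run_frequency": 5, "manual_effort": 5, "feature_stability": 4, "tool_support": 3}
    total = automation_score(tc1)
    print(total, verdict(total))  # 53 -> good automation candidate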





