Thursday, 28 January 2016

Why testing fails (the short of it)

I was asked to try to take the speaking place of a colleague and talk on this for CEWT #2 (Cambridge Exploratory Workshops in Testing).
I was initially just hoping to get onto the reserve list, but then someone dropped out after I wrote this. So here we go.
It's not the kind of thing you want to admit first-hand experience of when you work for a company that falls into the top 100 of almost every desirable list. Still, I'll share my 2 reasons on the topic "Why testing fails".
It's not possible any more to book a place at CEWT #2, basically because the workshop is limited in size. But if you want to find out more, do get along to a lean coffee morning; just google for the (real) Cambridge meetup "software testing club lean coffee".
It's on 28th February at DisplayLink Cambridge. Contact James Thomas, @qahiccupps.

Rushed Implementations 

“Look before you leap” comes to mind.
  • Features without the right hygiene lose out in the quality department
  • A feature that does not solve a customer problem becomes harder to test
  • Test not involved early enough

Test Planning 

“Fail to plan, and plan to fail”: going around in vicious circles comes to mind.
  • Close-down cycle with no resources planned or budgeted for it
  • Planning impacted by rushed implementation
  • Planning is easier than you think (with good data to support it)

Saturday, 23 January 2016

Humble Bundle Green Screen Challenge

What's the Humble Green Screen Challenge?
Inspired by FMV games, this event allows you to take a crack at making your own full motion video.

How should you make the videos?
We're providing some sample footage that you can use. All we ask is that you somehow involve that. There aren't any prizes to this challenge, so the rules are pretty darn loose.


YT demo clip:
My demo clip:
A bit like those DVD games where the DVD plays a clip and then asks you a question: if you press "left", it plays another clip; if you press "right", it goes a different way. A bit like those choose-your-own-story "skip to page X" novels.

Stuff I learned along the way:
How to do Dolby in VideoStudio :
How to get 6 tracks (Dolby 5.1) from a stereo track in Audacity :
The audio results are not great, mostly due to not having any Dolby or surround equipment.
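If you'd rather script the upmix than click through Audacity, a naive stereo-to-5.1 channel mapping can be done with nothing but Python's standard-library wave module. This is my own illustrative sketch, not a proper Dolby encode: it just produces a 6-channel WAV in the common L, R, C, LFE, Ls, Rs order, with centre/LFE as a half-gain downmix and the surrounds duplicating the fronts (the same thing you'd do by duplicating tracks in Audacity). File names are made up.

```python
import math
import struct
import wave

def upmix_stereo_to_51(src_path, dst_path):
    """Upmix a 16-bit stereo WAV to 6 channels (L, R, C, LFE, Ls, Rs).

    Centre and LFE are a simple (L+R)/2 downmix; surrounds duplicate
    the fronts. Purely illustrative -- not a real Dolby 5.1 encode.
    """
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())

    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    out = []
    for i in range(0, len(samples), 2):
        left, right = samples[i], samples[i + 1]
        mix = (left + right) // 2
        out.extend([left, right, mix, mix, left, right])

    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(6)
        dst.setsampwidth(2)
        dst.setframerate(rate)
        dst.writeframes(struct.pack("<%dh" % len(out), *out))

# Demo: build a short 440 Hz stereo test tone, then upmix it.
with wave.open("stereo.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(44100)
    tone = [int(10000 * math.sin(2 * math.pi * 440 * t / 44100))
            for t in range(4410)]
    # Interleave the same sample into both channels.
    w.writeframes(struct.pack("<%dh" % (2 * len(tone)),
                              *[s for s in tone for _ in (0, 1)]))

upmix_stereo_to_51("stereo.wav", "surround.wav")
with wave.open("surround.wav", "rb") as w:
    print(w.getnchannels())  # 6
```

To get actual Dolby Digital out the other end you'd still need an AC-3 encoder; this only gives you the six discrete tracks to feed it.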

Chroma and background sources:
  • Alex Free Stock Video Footage - Full HD - Fast Night Street
  • Alex Free Stock Video Footage - Full HD - Animation - Disco Light
  • Alex Free Stock Video Footage - Full HD - Highway - Italy - Monte Carlo - GOPR0255
  • Ufo Alien Spaceship Fly By - free green screen
  • fond vert ovni HD - Greenscreen UFO 1080HD
  • Free Stock Footage_ Fish Swimming in Ocean Kelp Bed
  • Galactic Journey in Space - Royalty Free Footage

Scoring time!

How do you rate my clip against some of the other submissions?

Wednesday, 20 January 2016

Cambridge Lean coffee | Towers Watson

After not seeing the crowd of happy testers over the hectic Xmas break, a trip to Sawston was a welcome way to kick start 2016 with a drive-by to the south of Cambridge.
The "testing" started when I got picked up on my visitor registration badge right after arriving, because I had dated it 2012. Which was a good thing, because if I had dated it 2015, I would have been investigating the occurrence of an "off-by-one" defect.

The "checking vs testing" did not stop there, but let's crack on.
We covered the topics below, which I paraphrased badly in order to fit them, in a hurry, onto the well-scoped but limited surface area of a post-it. We formed 2 groups, so these notes are from Chris Georges' table only.

Why pay to have a tester?

Or rather, at which point do we need a tester? Some companies test in the traditional way: they have automated unit tests and things are just fine for them right now. Some teams, if small enough, will get by just fine for a while. But without the specialist skills a specialist tester brings, all you have is someone who knows how to check stuff and how to write stuff. A professional tester is an integral part of the team, will be involved with requirements and design review, and will be able to get the correct level of detail into a test plan. You do have a test plan, right?
A professional will have the bandwidth to execute all the testing in the background when the developer is busy trying to fix a large list of bugs 2 days before the release deadline. This might also be called shielding your developers - something your support team might be doing right now already.
Did I mention testing does not actually stop after the ship-party? Have a person on your team who knows that testing is not 100% about running test cases, but is also about helping you judge risk. A dedicated tester lets you get the right level of detail in your QA, because it enables a different perspective.
A good tester is an important part of a team, like a cog in a clock: it's important to make sure it is unique and just the right size for the job.

How do I automate legacy code testing?

It's really hard to do, and I can offer some tips on how to do it using clever instrumentation, in ways that do not require code changes all over the place. But the question elicited these responses.
1. Prioritize your testing: P1 = urgent, P2 = less urgent, and so on. This lends structure to what you are doing as well.
2. Be methodical - look at the test script (you have a script, right?) and analyze it for high-probability blockers. Try to ensure that you run things that can block as early as possible. This gets blockers in front of the developers early and buys DEV more time to resolve a blocker while you test down a different path.
3. Do session-based testing. This lets you work through a weak test plan, and by logging your sessions you will improve future test-iteration estimates, and thus be able to time the testing to fit a release closedown. It will also let you see which sessions, and thus which features, most need testing, based on how many bugs you recorded in a session. Excel is a great tool for recording.
4. Traceability - this is going to come out of the above steps.
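Steps 1 and 2 above can be sketched in a few lines of Python: sort the test list so that, within each priority band, the tests most likely to block run first. The test names, priorities and blocker probabilities here are all invented for illustration.

```python
# Toy run-order planner: priority band first (P1 before P2), then
# likely blockers first within each band, so any blocker reaches
# the developers as early as possible. All data below is made up.
tests = [
    {"name": "export report",   "priority": 2, "blocker_risk": 0.1},
    {"name": "install product", "priority": 1, "blocker_risk": 0.9},
    {"name": "login",           "priority": 1, "blocker_risk": 0.6},
    {"name": "print preview",   "priority": 2, "blocker_risk": 0.4},
]

# Ascending priority, descending blocker risk within each priority.
run_order = sorted(tests, key=lambda t: (t["priority"], -t["blocker_risk"]))
for t in run_order:
    print(t["name"])
```

Running this prints "install product" first: it is P1 and the most likely blocker, so a failure there surfaces on day one rather than two days before the deadline.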

Ultimately a deep understanding of which features depend on which components of the product will let you estimate which areas do not need more re-testing, simply because they carry lower risk. Risk is driven almost entirely by code churn, so components with minor changes tend to break less - and when they do break, it's mostly because of interface or environment effects.
My tip on how to avoid re-testing legacy code is to catalog how the environment impacts the features in the product. If environment plays a big part, study the impact and adjust your plans accordingly.
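As a toy illustration of churn-driven risk, here is a sketch that ranks components by lines changed this release and splits them into a full re-test bucket and a spot-check bucket. The component names, churn numbers and the threshold are all hypothetical.

```python
# Hypothetical churn figures (lines changed this release) per component.
# Re-testing effort follows the churn: heavily-changed components get a
# full re-test, barely-touched legacy code gets interface/environment
# spot checks only. The 100-line threshold is an invented example.
churn = {"ui": 1200, "installer": 40, "report engine": 310, "legacy core": 5}

# Highest-churn (highest-risk) components first.
retest_order = sorted(churn, key=churn.get, reverse=True)

full_retest = [c for c in retest_order if churn[c] >= 100]
spot_check = [c for c in retest_order if churn[c] < 100]

print("full re-test:", full_retest)
print("spot check:  ", spot_check)
```

In a real project you'd pull the churn numbers from version control (e.g. a diff stat per component) rather than typing them in, but the prioritization step is the same.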

Specific to automation, instrumentation is an avenue worth exploring as a way to automate testing of legacy code, without touching the code-paths. Maybe I'll write something on this in future.

Which GUI tests should I automate?

Since this is a very common automation question, and the dangers are well understood, I'll talk a bit more about ROI.
1. Pare it back - take a good look at what not to automate by identifying the high-priority coverage areas.
2. Automate first the things that are hard to test manually, and ask which tests deliver the most value if automated - things like product install/deploy or launch can be easy to automate and will unblock your product development (CI system) quickly.
3. Talk about testing earlier - by getting devs to think about testing (manual and automated), you involve dev early and get them to think more like testers. Making the application easier to test also makes end-users' lives easier in many cases.
4. Don't automate unit tests - basically, system test: test the behavior, not the code!
5. Don't burn out your testers! Getting testers to run manual tests all day will drive them a bit nuts; identify the tests that drive them nuts and try to automate those.
 I used a score-sheet (Excel to the rescue again) to decide when to automate. It looks like this.

We have 3 dummy cases here.
Each test case or "TCD" will have a script (paper or electronic).

Score each question (criterion) from 1 to 5. Anything that scores less than 5 overall is just never automatable; anything getting over 50 might be, and so on. You get the idea. This screenshot omits the "weighting" applied and a few other "gating" criteria which link in with the PDLC used where I work at the moment (Citrix Ltd.). But you get enough of the picture.
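For illustration only, the score-sheet idea can be reduced to a few lines of Python. The criteria, weights and thresholds below are invented stand-ins, not the real sheet or its gating criteria; the point is just the mechanism of weighted 1-5 scores deciding what's worth automating.

```python
# Minimal stand-in for the Excel score-sheet: each criterion is scored
# 1-5, multiplied by a weight, and the weighted total decides whether
# a test case is worth automating. Criteria names, weights and the
# cut-off of 30 are all invented for illustration.
WEIGHTS = {
    "run_frequency": 4,  # how often the test gets executed
    "manual_pain": 3,    # how tedious it is to run by hand
    "stability": 2,      # how stable the feature under test is
    "setup_cost": 1,     # how easy the automated setup would be
}

def automation_score(scores):
    """Weighted total of the per-criterion 1-5 scores."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

tcd_install = {"run_frequency": 5, "manual_pain": 4, "stability": 4, "setup_cost": 3}
tcd_one_off = {"run_frequency": 1, "manual_pain": 1, "stability": 1, "setup_cost": 1}

for name, scores in [("install", tcd_install), ("one-off", tcd_one_off)]:
    total = automation_score(scores)
    verdict = "automate" if total >= 30 else "leave manual"
    print(name, total, verdict)
```

The frequently-run, painful install test scores 43 and earns automation; the one-off test scores 10 and stays manual. A spreadsheet does the same sums, but putting the weights in code makes them easy to review and version alongside the tests.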