Friday, 26 February 2016

Powershell and open-auth / braindump

another /dump.

This will take some shape as I go along. The basic problem I want to solve is using PowerShell to pass authentication through when calling a REST service, but in reality the "passthru" part of it all is a bit of a mystery to me. It feels like it might well be impossible to do in a really secure fashion.
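The explicit-credentials half of the problem is easy enough to sketch; it's silently passing through the logged-on user's identity (Kerberos/NTLM) that needs platform support beyond the standard toolkits. Here is a minimal, self-contained Python sketch of the explicit case, with made-up credentials and a throwaway local server standing in for the real REST service. (In PowerShell itself, `Invoke-RestMethod` with `-UseDefaultCredentials` is the usual starting point for the passthrough case.)

```python
# Sketch only: calling a REST endpoint with explicit basic-auth credentials.
# The user, password and endpoint are invented; a tiny local HTTP server
# stands in for the real service so the example runs on its own.
import base64
import http.server
import threading
import urllib.request

USER, PASSWORD = "svc_account", "s3cret"  # hypothetical credentials

class AuthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        expected = "Basic " + base64.b64encode(
            f"{USER}:{PASSWORD}".encode()).decode()
        if self.headers.get("Authorization") == expected:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'{"status": "ok"}')
        else:
            # challenge the client, so urllib retries with credentials
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="demo"')
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), AuthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/api/ping"

# Attach the credentials to an opener; urllib answers the 401 challenge itself.
mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, url, USER, PASSWORD)
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
body = opener.open(url).read()
print(body.decode())  # → {"status": "ok"}
server.shutdown()
```

The secure-passthrough worry in the post still stands: this sketch has the secret sitting in the script, which is exactly the thing passthrough authentication is meant to avoid.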

Wednesday, 24 February 2016

I'm Hangry, so I decided to give up strangling people for Lent

Everyone is a tester

I stopped being a software developer about 5 years ago now. Well, that's not entirely true: I have always been a tester. In fact everyone tests, but not everyone puts "tester" in their job title; so if this is you, stick with me a sec.

So Lent runs from 10 February to 24 March, and generally people will give up something over that time. It's normally a Christian kind of thing, but in general it's a mindfulness exercise that's good for any religious conviction, or even if that's not your thing. My thing is not killing people who #$%*@ me off on a daily basis. Most people will be giving up chocolate; they may have tried a Dryathlon or a "Stoptober", but generally cutting any distraction out of your life helps you. Getting hacked off when someone disappoints you is not healthy, and that holds for software testing as a role too.

The Lone Tester

Last night I attended a Dojo Masterclass called "The Lone Tester" by Jess Ingrassellino, the lead test engineer at Bitly, who has worked solo for much of her testing career. Jess talks about skills and fresh learnings for anyone who works solo or as a contractor, for anyone keen to start a career in test, and for anyone wanting to shift from manual into automated testing. That sounds like a lot, but Jess has done it all in just 4 years, so it's all very well related; her delivery centres on being the only tester in your organisation or division and having to make your own way. If you are the Lone Tester, Jess gives some tips on how to see that you are not really alone, since everyone is in fact thinking about quality; they're just not necessarily experts, which is what you are really there for with your tester hat on.
In my opinion it's not a talk with any great revelations in content or process wisdom for a seasoned tester, but she does drop in some pointers for managing your time and workload better, and these might inspire the old guard too. To see the talk recording, sign up in the Dojo at Ministry of Testing.

Back to the Lean coffee.

I snagged the following topics (actual Post-it notes visible in the photo above with the donkey). As always, my summaries are my words and how I understood them. Everyone in the room hears the same words but takes them in slightly differently. It's called language, and in my case, hard-arse.
Performance Testing: Should I test little and often or full-on and infrequently?
  • A few ideas came out here about why this is the wrong question. Long-running stress tests are more like regression tests in many ways, and thus carry the same high costs. They also find completely different classes of defect. Knowing this in advance will clue you up on where to go in your strategy
  • You do need both, but understand the why and the when first
  • Quick testing can never be replaced by deep testing, mainly because quick runs deliver test verdicts fast and support your CI process more directly
  • Deep testing delivers more accurate metrics than a quick performance test. But if you apply the same metrics gathering and performance-history analysis to your quick runs, you will get more value, more often, and sooner
  • Long-running tests are best run infrequently in your cycle, and only when you change something that could cause the problems associated with stress failures: namely, changing the version of any third-party component, a major algorithm/architecture change, or anything infrastructural that the architects identify as risky
For anyone with time to Google around, maybe look for some tips from a recent #TestBash talk by Scott Barber.
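To make the "quick, CI-friendly" idea in the bullets above concrete, here is a minimal sketch in Python. The operation, the 250 ms budget and the history file name are all invented for illustration; the point is that a quick check can fail the build fast while still feeding the same metrics history a deep run would.

```python
# Sketch of a quick performance check for CI: time the operation a few times,
# keep the best run to damp machine noise, append the number to a history file
# for later trend analysis, and fail if a (hypothetical) budget is blown.
import time

BUDGET_SECONDS = 0.25  # invented per-call budget

def operation_under_test():
    # stand-in for whatever code path the quick check guards
    return sum(i * i for i in range(10_000))

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

# best-of-N damps noise without turning the check into a "deep" run
samples = [timed(operation_under_test) for _ in range(5)]
best = min(samples)

# keep the same metrics history a deep run would, so trends can be mined later
with open("perf_history.csv", "a") as fh:
    fh.write(f"{int(time.time())},{best:.6f}\n")

assert best <= BUDGET_SECONDS, f"budget blown: {best:.4f}s > {BUDGET_SECONDS}s"
print(f"quick perf check passed: best of {len(samples)} = {best * 1000:.2f} ms")
```

A deep stress run would replace the single timed call with hours of sustained load; the shared piece is the metrics history, which is where the "more value, more often" in the bullets comes from.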

I've been asked to test a spreadsheet WTF???
This one came from Alan, I think it was; like all simple questions, it raised good responses:
  • Sometimes you get asked to test something that you don't really want to test. This severely impacts your reaction, and emotions can very quickly prevent you from being effective
  • Analyse the business risks and work out the value to the business. Then move
  • Gather some stats on how often the "spreadsheet" causes losses, and present your findings in an easy-to-consume form, like a graph, so that people can understand the risks. Exposing the actual size of any risk is your speciality as a tester

How to cope with Context switching and Time management

This was my topic, though more a question, since it's something I'm rubbish at. I was inspired by the tips that Jess Ingrassellino shared in her "Lone Tester" masterclass.
  • Time your activities to fit the natural rhythm of the day: lunch for people-time, mornings for firefighting, and afternoons for actual core work
  • Plan actual "session-based testing" for set times
  • Sprint, and capture stories
  • Use various to-do list tools

How do I assess my value as a tester?

A topic which Jess co-incidentally also touched on, funny how this kept coming up. Honest guv.
  • Toot your own horn
  • Drive process; management expects you to make process changes that impact quality
  • Protect revenue. It's not your job to sell the product, not your job to fix bugs, nor even to find them. All you have to do is ensure customers don't find them... well, at least not the ones that make them select a different vendor's product
As a tester, you must always be asking questions. First, foremost and often. It's called left-shifting.

In closing: one of the topics not covered (there were a good few) was using "Selenium from absolute scratch". I think a few people are interested in getting a n00b's guide.

Tuesday, 16 February 2016

Test automation sticky-note (sic)

A quick note to make sure I do not lose a little idea I got while browsing recent presentations from STARWEST. The specific presentation I have in mind is here:

Matt Griscom links you to his website and a download of the .NET Framework-based tool he created. I believe it warrants a try, because although it glosses over some problem-domain-specific areas for me, it seems to account for a lot of the automation framework gotchas I currently face.
His blog is and the download is hosted here.

Basically I face a problem where my current automation system is not flexible and powerful enough, so it requires fragile customization. Stable over the short term, your test code then breaks every time the framework revision changes; and change has to happen as you integrate common or shared test code down into a pattern in the toolstack or into a shared library. All this work needs to be designed and planned to reduce the maintenance load. Matt seems to recognize many of the related problems I face there; even if he doesn't solve them all, the act of identifying them helps a lot. Basically he takes the approach "measure everything": make it easy to mine all the data, and suddenly you can do comparative and performance testing as well as predictive triage.
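The "measure everything" idea can be sketched very cheaply: have every test run write a structured record (build, test name, verdict, timing) to one queryable store. The schema, build ids and numbers below are invented for illustration, and an in-memory SQLite table stands in for a real results store; the point is that once results live in one place, comparative and trend questions become simple queries.

```python
# Sketch: structured test-result records in one store, then mined with SQL.
# All names and figures are made up; SQLite stands in for a real results DB.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE results (
    build TEXT, test TEXT, verdict TEXT, seconds REAL)""")

# pretend output from two consecutive CI builds
rows = [
    ("build-41", "test_login",  "pass",  1.20),
    ("build-41", "test_export", "pass",  8.90),
    ("build-42", "test_login",  "pass",  1.25),
    ("build-42", "test_export", "fail", 14.70),
]
db.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", rows)

# comparative question: what slowed down badly or changed verdict between builds?
query = """
SELECT a.test, a.seconds, b.seconds, b.verdict
FROM results a JOIN results b ON a.test = b.test
WHERE a.build = 'build-41' AND b.build = 'build-42'
  AND (b.seconds > a.seconds * 1.5 OR a.verdict != b.verdict)
"""
for test, before, after, verdict in db.execute(query):
    print(f"{test}: {before}s -> {after}s ({verdict})")
# → test_export: 8.9s -> 14.7s (fail)
```

Predictive triage then falls out of the same data: the tests that flag here are the ones worth running, or investigating, first on the next build.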

So, just a quick note before I forget all about this angle.