The Non-Timebound Nature of Software Quality

Usually it is easy to talk only about the times when software fails, and then try to learn from the event. And in a way, events are always messy. A hacker getting in and deleting your work, for example, is one event that is truly frightening. Should software testing be frightening at all, you can hear yourself say? Surely software engineering is a sedate process of correcting wrongs one at a time: considering the impact of one line of code here, removing another line or condition that no longer makes sense. The line that made sense at the time, but now no longer does. So an engineer will also try to pre-empt a failure.
The requirements versus the Waymo Spec


A software tester will take a higher view, at the behaviour level of the system, using the oracle of past behaviours memorised to find newer ones that now offend. We do this by exploring, not by using AI to generate lists of test cases. That is especially necessary when the product itself has an AI core: using any AI tooling would easily become a case of a scholar marking their own homework. The tester in me, for example, makes mental and paper notes. Suggesting the eradication of an old behaviour will require the tester to support their request for a bug fix with some proof, proof that the product will be better somehow afterwards. But usually only slightly better, because the tester also knows that every fix has the possibility to break things, things that are not visible at the surface.

I'm pretty sure they tested everything they were required to at the time. Software does not exist in a vacuum. Time is an untestable dimension of quality. So is water. We only know what we know at an instant in time, one that has usually already passed. Water can be ice, and it can be steam or fog. Trying to teach an AI something it cannot grasp merely points to needing a human to make the domain decisions that decide which domains are really useful.

So companies hire software testers in the hope they will increase the body of knowledge just in time, before the creek floods and their car drives into it and is written off. Waymo was probably unable to make an insurance claim; I'll bet the age-old "act of god" clause absolved the insurer. They certainly have a negligence defence. So the tester adds yet another test: can we drive the car into a flooded road? We assume they already test for the Wile E. Coyote painted-road-diversion trick, the one that never nets the wily canyon criminal a meal of road runner.

Limit capabilities at the right point in the product life cycle

I can only imagine a manager somewhere 20 years from now asking: why do we have to test for that? It never rains around here. And the manager would be right, at least for the time being. You can also never know all of the defects in a product. That is especially true when a product initially launches. In the early days, testers can easily introduce so much defensive coding that it either hobbles the vision or exhausts the developers. Imagine if that developer was not a human but an AI: the product might then have no friction at all to making bug fixes, which could suddenly mean that your boat-car design may end up only able to drive on roads, because you raised too many water-based bugs. And then your amphibious car might later fail proper sea trials. The mind boggles.
Any similarities are intended

