Exploratory Automation
Exploratory Automation is not an oxymoron, but it is an often-abused turn of phrase. Actually, it's a pretty horrible name, which leads to confusion about what it is and what it is not. But like 'Testing vs. Checking', it is the name we have to work with.
On one hand it could mean automation that uses some sort of rudimentary AI to make decisions about its interactions with the application in a somewhat Model-Based manner. This is a seductive definition for a techie to latch onto, but it is also misleading. The AI is not really AI in the traditional sense but just a pre-programmed set of responses to inputs; there is no learning or built-in curiosity.
Recall that the only reason we are doing automation at all is to increase our ability to discover new information for our stakeholders. Traditional automation is really good at confirming that our old understandings and models remain valid, but not so good at actually discovering new things. Manual exploratory testing, however, is really good at getting new information. Blending the two approaches would then seem a good idea. And it is. Automation Assisted Exploratory Testing is a much better term and strikes at the heart of what Exploratory Automation is trying to get at.
Imagine it is your task to test an application that requires a large amount of setup before any testing activity. Things like 'create a user', 'apply and receive a mortgage', or 'make three payments and miss a fourth' might be typical in a lending application. It could take 5 to 10 minutes to fill out the necessary screens, correct any typos, and so on. Do this ten times and you have lost the better part of an hour to an activity that is off-charter; depending on how filled your day is with meetings and other typical distractions, that ends up being a significant loss. By automating those setup activities, though, the setup is bottlenecked only by the speed of the application itself.
Automation Assisted Exploratory Testing is actually one of the places where record-and-playback tools like Selenium IDE shine, as the scripts do not need to be overly robust and are not expected to run without someone nearby to babysit them. Using the same example, you could have a suite of scripts (create a user, apply for a mortgage, approve the mortgage, make a payment (run three times), miss a payment) that does all the prep work for you.
Notice how there is not a single script that does all the setup. Instead there is a series of independent, sharable scripts that can be linked in interesting ways to achieve some purpose. And of course, they are checked into and managed through your version control system. If they are not, you end up spending even more time when everyone needs to 'fix' their personal scripts after a change in the application.
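The idea of small, independent setup scripts chained together can be sketched in plain Python. This is a minimal illustration, not a real implementation: the step names (create_user, apply_for_mortgage, and so on) are hypothetical stand-ins for what would in practice be recorded Selenium IDE scripts or WebDriver functions driving the application's screens.

```python
# Each setup step is an independent, sharable function that takes the shared
# test context, does its piece of the prep work, and returns the context.
# In a real suite each of these would drive the application under test.

def create_user(ctx):
    ctx["user"] = "test-user-01"  # hypothetical test user
    return ctx

def apply_for_mortgage(ctx):
    ctx["application"] = "pending"
    return ctx

def approve_mortgage(ctx):
    ctx["application"] = "approved"
    return ctx

def make_payment(ctx):
    ctx["payments"] = ctx.get("payments", 0) + 1
    return ctx

def miss_payment(ctx):
    ctx["missed_payments"] = ctx.get("missed_payments", 0) + 1
    return ctx

def run_setup(steps):
    """Run independent setup steps in order, threading the context through."""
    ctx = {}
    for step in steps:
        ctx = step(ctx)
    return ctx

# 'Make three payments and miss a fourth', composed from the small scripts:
state = run_setup([
    create_user,
    apply_for_mortgage,
    approve_mortgage,
    make_payment, make_payment, make_payment,
    miss_payment,
])
```

Because each step is independent, the same pieces can be recombined for a different charter (say, approve the mortgage and immediately miss a payment) without writing a new monolithic script.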
Taken to an even further extreme, even the use of a spreadsheet application like Excel to keep track of, collate and visualize information during testing could be considered Automation Assisted Exploratory Testing[1]. Which means we, as a craft, have been doing automation in this manner for almost as long as, if not longer than, what most people would consider 'automation'. And yet, I don't think I have ever seen 'How to effectively use Excel' on a testing conference schedule.
[1] James Lyndsay via James Bach