Thursday, 10 March 2016

The Lonely Tester's Survival Guide - How to stay fresh, focused and super effective when testing alone

Modern software testing has become agile

Anyone who cares about making good software has moved away from the old waterfall ways of *"throw it at QA when it's finished"*. One recent trend is to embed a single skilled tester within a small development team to test early, test often and add as much value as they possibly can.

In the old days, before test automation was as common as it is today, large numbers of human testers were required to carry out large quantities of laborious repetitive checking. Fortunately, in these modern times, test automation takes care of simple, boring, repetitive checking. This has significantly reduced the need for large numbers of human testers.

So as testing has evolved, the test team has also evolved. A traditional test team was large; now it is much smaller. It's common to have only a single tester working within a small group of developers. At companies where there are multiple testers, it is likely that each tester will be working in isolation from the other testers. Most companies put different testers on different products or projects, and it's a rarity to have two testers testing exactly the same thing.

Now we test alone

When you are the only dedicated tester within a small development team it's easy to start feeling overwhelmed. The responsibility of testing everything and establishing a good level of confidence that it 'works' is on your plate. You may have pressurised people trying to shift some of the pressure that's on them onto you. It's essential to get as many people as possible involved with testing efforts and create a culture within the team where everyone cares about quality.

But even when everyone does care about quality and untested code is not thrown in your general direction, things can still get really tough. You will be staring at the same piece of software day in, day out, constantly trying to generate and execute test ideas which cover as many paths through the software as possible. Assumptions can start to creep in, which is very dangerous. If the save button worked yesterday, is it less urgent to test it again today?

The lonely tester is limited to their own ideas and strategies. Every software tester will test in a different way, with different ideas and different reasoning for those ideas. The lonely tester won't naturally experience any opportunities to learn from other testers. The lonely tester will be missing out on the kind of learning that testers working co-operatively experience every single day. Once a lonely tester becomes familiar with the software they are testing, they will test it in a completely different way to a tester who is unfamiliar with it.

I used to work in very large teams, frequently working with at least six other testers. Then in 2014 I became a lonely tester. I've learned a lot since making the switch from co-operative testing to testing alone. This is my survival guide, written especially for all the other lonely testers out there.

Create as many opportunities as you can to interact with as many other testers as possible

Take charge of your situation and be proactive. If there are testing meet-ups or conferences near you, go to them. Meet other testers and hear what they have to say. If you can't attend in person, watch some YouTube videos of respected software testers talking at conferences. Sign up for Twitter and follow some other software testers. Search for some blogs on software testing and read them. Start forming your own opinions about what other testers have to say.

You might agree with them, you might disagree with them. It doesn't matter. It's the exposure to other testers' thoughts, experience and ideas which is valuable. The lonely tester will be lacking this kind of exposure. Slowly you will find that things you have heard about testing will help spark your own ideas about how to test. You can even borrow other people's ideas and see if they work for you.

Join forces with another (possibly lonely) tester

Recently an opportunity came along for me to be less lonely. A new project was due to start which had some similarities to a project I had been working on. I was asked to share some knowledge with the tester due to start work on the new project. So I set aside an hour to team up and do some pair testing.

I have done pair testing before and knew it would be useful for both of us, but even so the experience was remarkable.

It's already known that pairing an unfamiliar tester with a familiar tester brings massive advantages for both parties involved. We have all heard the mantra *"fresh eyes find failure"* (as made famous by *Lessons Learned in Software Testing*). The unfamiliar tester won't be making any assumptions about the system or product and will be more likely to interact with it in a different way to the familiar tester. The familiar and the unfamiliar will both be looking at the software from different angles, from different vantage points. Working as a pair helps keep ideas fresh and stops testing from becoming stale and repetitive.

I was familiar with the software we were testing and the other tester was completely unfamiliar with it. We worked together sharing a single keyboard and mouse. I let the unfamiliar tester take control of the software first while I observed, explained and took notes.

I described out loud how the software worked, the purpose of each input box and how they linked together as a whole. As I was describing, the other tester used the keyboard and mouse to manipulate the inputs and started fiddling with the application. The software fell under intense focus and lots of scrutiny was applied. Lots of questions were asked out loud by both of us:

"Why is it doing that?"

"Is that what you would expect to see?"

"Try doing this instead, is it the same as before?"

We found a few inconsistencies which warranted further investigation. After 30 minutes, we swapped around and I took the keyboard and mouse. Then, and I'm not entirely sure why, I said...

"Let me show you what used to be broken."

And I started trying to demonstrate some of the previous issues we had encountered, which I knew we had already fixed.

Guess what happened then? The application behaved in an unexpected way and we found a valid bug. It was a high-five moment. The issue was new; it had only been introduced in the last couple of weeks. I knew straight away that I hadn't seen this bug because I was suffering from some kind of perceptual blindness. My subconscious was making assumptions for me, overlooking areas I had recently tested and observed working correctly.

I learned a lot during that hour. I learned that no matter how good the lonely tester is at testing, a second opinion and someone to bounce ideas off is essential. Afterwards we both agreed that pair testing was an activity which we should continue doing.

As a lonely tester you may be able to negotiate an exchange of testing with some of the other lonely testers within your organisation. At an agreed date and time, set aside a window to spend with another tester. Allow them to come and test by your side. In exchange, agree that you will do the same and sit and test alongside them.

Every tester tests in different ways, and through pairing with others we can learn different approaches to testing. Through these sessions we can learn about things we were previously unaware of, such as tools, tips and tricks. Having someone to discuss ideas with will help keep your testing fresh, and you will learn about different testing styles.

Share your experiences, ideas and problems with others.

The lonely tester receives less feedback than testers working in small co-operative teams. When a lonely tester tests, the testing ideas happen in their head and they apply them to the software. This process is completely invisible to other people. No-one can question, critique or give any kind of feedback about invisible testing. This is especially true if the lonely tester is weak at recording or documenting the testing they have performed. So the lonely tester needs to make sure they are communicating well with others, at all levels.

If you are lucky enough to sit with the developers on your team, talk to them about the testing you are doing. This might be as simple as a casual conversation about what you are testing, what you have observed, or where you have got stuck and aren't sure how to test something specific. Developers genuinely care about testability. They want to make it as easy as possible for you to test their code and they might have some ideas that can help. Suffering in silence is the worst thing a lonely tester can do. Don't sit on your problems, talk about them.

If there are other lonely testers in your office, engage them. Talk about what, when, why, where and how you are testing around the water cooler. Share stories about testing on forums, tweet about testing or start a blog and write about your testing experiences. When you are a lonely tester, sharing your experiences is essential so you don't end up facing really hard problems alone and are able to get helpful feedback that you can react to.

Above all else, the most important piece of advice for the lonely tester is this...

If you don't want to be alone any more, you don't have to be.

This post was also published on my company's blog, Scott Logic Blog

Monday, 8 February 2016

Data Mocking - A way to test the untestable

Some of the biggest challenges when testing software involve getting the software into very specific states. You want to test that the new error message works, but this message is only shown when something on the back-end breaks, and the back-end has never broken before because it always "just works". Maybe the software you have to test is powered by other people's data, data that you have no direct control over, and you really need to manipulate this data in order to perform your tests.

Imagine you are testing a piece of software which displays the names of local businesses as values in a drop-down list.

This software might look something like this...

There are only three items on this list at the moment, but this may not always be the case.

There is currently no option within the software itself to change or manipulate the text displayed on the list, because the software retrieves this list of data from someone else's API. We have no control over the data returned by the API; our software under test just displays it.

You have been asked to test the drop-down box. What would you do?

Well, most testers would start by looking at it. It appears to work correctly. Items can be selected, the Submit button can be clicked. But how would this drop-down behave with a different set of data behind it? Well, we don't know (yet), but it is possible that it could appear or behave differently.

One solution which would allow more scenarios to be tested would be to force the drop-down list to use some fake made-up data. This approach is commonly referred to as testing with mock data or simply "mocking".

Mock data is fake data which is artificially inserted into a piece of software. As with most things, there are both advantages and disadvantages to doing this.

One of the big advantages of mock data is that it makes it possible to simulate errors and circumstances that would otherwise be very difficult to create in a real-world environment. A disadvantage, however, is that without a good understanding of the software, it is possible to manipulate data in ways which would never actually happen in the real world.

Let me give an example. Suppose an API is hard-coded to always respond with 0, 1 or 2 as a status code, and you decide to mock this API response to return "fish". As soon as the software asks "what's the status?" and gets the reply "fish", it might explode because it wasn't expecting "fish". Although this explosion would be bad, it might not be a really big problem, because it was your mock data that caused the fish explosion and "fish" is really not a valid status code. You could argue that in a real-world environment this would never happen (famous last words).
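As a contrived sketch of why that explosion happens (the names here are hypothetical, invented purely for illustration):

STATUS_NAMES = {0: "idle", 1: "running", 2: "failed"}

def describe_status(code):
    # Only the real status codes are known; anything else blows up
    return STATUS_NAMES[code]

describe_status(1)         # fine: returns "running"
# describe_status("fish")  # KeyError - the "explosion"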

Mocking is essentially simulating the behaviour of real data in controlled ways. So in order to use mock data effectively, it is essential to have a good understanding of the software under test and more importantly how it uses its data.

To start using mock data, the software under test needs to be "tricked" into replacing real data with fake data. I'm sure there are many ways to do this, but one way I have seen it successfully achieved is through the addition of a configuration file. This configuration file can contain a list of keys and values: the keys are paths to various API endpoints, and the values are the names of files that contain fake API responses. The application code is told to check the config file and, if it contains any fake responses, to use those instead of the real responses.
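As a rough sketch of that idea (the file names and the helper below are hypothetical, written in Python purely for illustration), the configuration file might map endpoint paths to mock files:

{
    "/api/business/names": "mocks/business_names.json"
}

and the application's data-fetching code might consult it like this:

import json
import urllib.request

def get_response(endpoint):
    # If the config file maps this endpoint to a mock file,
    # load the fake response from disk instead of calling the real API
    with open("mock-config.json") as f:
        mocks = json.load(f)
    if endpoint in mocks:
        with open(mocks[endpoint]) as f:
            return json.load(f)
    # Otherwise fall through to the real endpoint
    with urllib.request.urlopen("https://www.somecompany.com" + endpoint) as r:
        return json.loads(r.read().decode("utf-8"))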

Collecting data to make mocks from is a fairly straightforward process if the application can be opened inside a browser. Opening the browser developer tools (F12), inspecting the Network tab, then interacting with the software (e.g. changing the value on the drop-down box) will usually reveal the API requests made and display the associated responses received.

Let's continue with the example of our software which displays the names of local businesses as values in a drop-down list. To keep things simple, I'm going to say that this software uses a REST API with the following request and response.

A request URL might be:

https://www.somecompany.com/api/business/names

And a response might be:

[{"id":"0000001","name":"Tidy Town Taxis" },
{"id":"0000002","name":"Paul's Popular Pizzeria" },
{"id":"0000003","name":"Costalotta Coffee Shop" }]

So to set up some mock data for this app, we could copy and paste the response into a file and tell the software to use that data instead of the data at the real API endpoint.

And this is where the fun begins. Once the software has been tricked into using mock data we have direct control over the data used by our application and we can start manipulating it.

If we wanted to test what happens when the list has many values, we could just change the mock data by adding more values to the file so it looks like this...

[{"id":"0000001","name":"Tidy Town Taxis" },
{"id":"0000002","name":"Paul's Popular Pizzeria" },
{"id":"0000003","name":"Costalotta Coffee Shop" },
{"id":"0000004","name":"Hey guess what, this is fake data" },
{"id":"0000005","name":"And this is also fake data" },
{"id":"0000006","name":"This data was made up" },
{"id":"0000007","name":"But the app thinks it's real" }]

Once this new mock is fed back into the application, it might look something like this...

When there are 7 items on the list, the list now covers the Submit button. We may also find that application performance is degraded when a larger number of items are displayed.

It is now possible to test lots of new ideas. These could be things like...

  • Many values
  • Duplicate values
  • Long strings
  • Short strings
  • Accented characters
  • Asian characters
  • Special characters
  • Alpha-numerical values
  • Numerical values
  • Negative numerical values
  • Blank values
  • Values with leading spaces
  • Values with multiple spaces
  • Reserved words "NULL", "False" etc.
  • Code strings
  • Comment flags e.g. "//"
  • Profanity
  • False positive profanity e.g. "Scunthorpe"

Test ideas are now only limited by your imagination, not the application!

Mock data can also be used to see how an application handles API responses which are not "200 OK". We can start testing error states by tricking the software into thinking the API end point returned an error when it didn't. Testing error handling becomes especially important when the software reacts in different ways to different types of errors which can occur.

Imagine an application that handles each of the following error codes in a different way:

  • 400 - Bad Request
  • 401 - Unauthorised
  • 404 - Not Found
  • 408 - Request Timeout
  • 500 - Internal Server Error
  • 503 - Service Unavailable
  • 504 - Gateway timeout

Without mock data, it would be very difficult to force each of the above error states manually. Testing error handling is where mock data really shines and becomes a very powerful tool.
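One lightweight way to simulate these errors, sketched here on the assumption that the application can be pointed at a different base URL, is a tiny stand-in server built with Python's standard library that answers every request with whichever error code you want to test:

from http.server import BaseHTTPRequestHandler, HTTPServer

ERROR_CODE = 503  # swap in 400, 401, 404, 500... to test each error state

class ErrorMockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with the configured error status
        self.send_response(ERROR_CODE)
        self.end_headers()

# Point the application at http://localhost:8080 instead of the real API
HTTPServer(("localhost", 8080), ErrorMockHandler).serve_forever()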

If you're looking for ways to improve the 'testability' of applications that you are building, consider adding a way to launch the application using mock data. You might be surprised how creative testers can be with data and you could start to spot issues that otherwise would have been missed.

This post was also published on my company's blog, Scott Logic Blog

Monday, 18 January 2016

Know your bugs - 6 Annoyingly awkward bug patterns that every tester should know.

As software testers, we frequently have to imagine the unimaginable. Through experience we learn, adapt and prepare for the next time we encounter similar circumstances. Recently, I found a particularly annoyingly awkward bug and was able to draw from experience not only to identify it but also to explain very quickly, without too much investigation, why it was happening. I could do this because I had encountered an almost identical bug a couple of years previously. This got me thinking about some of the trickiest bugs I had ever seen and their root causes: how common certain "bug patterns" were, and how I would approach the symptoms of one of these patterns now compared to the first time I saw them.

I decided I was going to try to document the behaviour, symptoms and causes of these bug patterns. Then I gave them silly names to help remember them.

1. The needle in a haystack bug - This is a rare bug which only occurs in a single very specific circumstance, but it avoids detection as it hides among thousands upon thousands of other circumstances which all work correctly. Imagine an input which accepts a value from 1 to 9999999 but only breaks if that value is specifically 4528183. These bugs tend to be stumbled upon accidentally and are generally found through a mixture of exploratory testing and pure blind luck.

2. The positively helpful bug - This is a friendly bug that, instead of causing something to break, makes something work exactly as intended. No-one has spotted that it exists because nothing appears to be broken. The positively helpful bug has been there for a significant amount of time. It keeps everyone happy by making the software do exactly what it's supposed to do. Until one day it is unexpectedly removed. Someone saw the bug while they were refactoring the code and killed it. Now the positively helpful bug is dead, the code is broken and no-one can easily see why.

3. The crouching tiger hidden bug - This is actually a combination of two bugs. The first bug is usually some kind of logic bug which prevents the code containing the second bug from ever executing. It's only when the crouching tiger is fixed that the hidden bug is revealed.
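A contrived sketch of the shape of this pattern (hypothetical code, not from any real code base):

def last_item(items):
    if len(items) > len(items):   # bug 1: condition is always False
        return items[len(items)]  # bug 2: off-by-one, never executed
    return None

# Fixing bug 1 (presumably len(items) > 0 was intended) immediately
# exposes bug 2: items[len(items)] raises an IndexError.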

4. The longest journey bug - This bug usually makes its debut appearance towards the end of a long day spent performing exploratory testing. It appears once and only once. That is, until a few weeks later, when it makes its encore appearance and no-one can work out why. The longest journey bug is essentially a bug that is tucked away at the end of a very long path. Surprisingly, even though on the surface it appears to be unreproducible, it's actually 100% reproducible; only, unlike a regular bug, the number of steps which must be followed to recreate it runs into the hundreds or thousands. An example of a longest journey bug would be software which gradually increments a value as the software is used, until that value reaches a size that is just too big for the software to handle, at which point the bug manifests.

5. The all the planets are aligned bug - This is an especially rare bug which only appears when a number of factors that rarely coincide all occur simultaneously. Imagine a date display that only breaks when both the day name and the month name are 9 characters long; you would only ever see a problem with it on Wednesdays in September. Just like when all the birds stop singing during a solar eclipse, these bugs can feel quite weird when you experience them for the first time. Incidentally, the 19th January 2038 should be a fun day for anyone working with software that stores dates as 32-bit integers.
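That date is worth a quick demonstration. A signed 32-bit integer counting seconds since the Unix epoch runs out at 03:14:07 UTC on 19 January 2038, which a few lines of Python (a sketch, not from the original post) can show:

import struct
from datetime import datetime, timezone

max_int32 = 2**31 - 1  # largest value a signed 32-bit integer can hold

# Interpreted as seconds since the Unix epoch:
print(datetime.fromtimestamp(max_int32, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00

struct.pack("i", max_int32)        # still fits in 32 bits
# struct.pack("i", max_int32 + 1)  # struct.error: one second later it doesn't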

6. The far too obvious to be a bug, bug - This is the bug that doesn't make any attempt to hide. It is, and always has been, in plain sight. Everyone on the team has seen it every day for the last 6 months. But for some reason it's never been reported as a defect because it seems "far too obvious" to be a bug. No-one says anything because "if that was a genuine bug, someone else would have already reported it by now". It usually takes either a confident, experienced tester to challenge the far too obvious bug, or a naive new starter looking at the software for the first time, to find these kinds of issues.

This post was also published on my company's blog, Scott Logic Blog

Thursday, 12 November 2015

If you have to automate IE10, avoid the Selenium 64-bit IE Driver at all costs.

When it comes to testing anything in a browser, Internet Explorer tends to have the reputation of being the black sheep of the browser family. Anyone with any experience of testing knows that there is a greater chance of something being broken in Internet Explorer than in any other browser. Let's face it, IE doesn't have a great track record. As software testers, we all remember the pain of having to support IE8, IE7, IE6. We also remember the moments when support for certain versions of IE was dropped, along with the subsequent wave of euphoria upon realising we no longer had to test in them. But I digress; new versions of Internet Explorer come along, like buses, to replace the older versions, which we eventually drop.

Internet Explorer 10 is currently a supported browser for some software that I test. I follow the widely accepted practice of writing automated tests to do the repetitive grunt testing work so that I have more time to test the complex bits (that can't be automated) by hand.

I've been running automated tests in IE10 happily using the 32-bit version of Selenium Internet Explorer Driver for quite some time. Until this morning. This morning, everything broke.

Well, I say broke; what actually happened was that all the tests that used to take 5-10 seconds each to run suddenly started taking 2-3 minutes each! I watched some of these tests running and saw that the IE driver was mysteriously typing text into all the text boxes very, very slowly. Its speed was comparable to an asthmatic snail.

So what changed? Well, a bit of investigation revealed that someone else had 'upgraded' the test suite to use the Selenium 64-bit Internet Explorer Driver from the usual 32-bit driver.

But why would this cause everything to break so horrifically in IE10?

Well, in IE there is a manager process that looks after the top level window, then there are separate content processes that look after rendering the HTML inside the browser.

Before IE10 came along, the manager process and the content processes both used the same number of bits.

So if you ran a 32-bit version of IE you got a 32-bit manager process to look after the top level window and you got 32-bit content processes to render the HTML.

Likewise, if you ran a 64-bit version of IE you got a 64-bit manager process to look after the top level window and you got 64-bit content processes to render the HTML.

Then IE10 came along and changed everything, because it could. In 64-bit IE10 the manager process was 64-bit (as you would expect) but the content processes, well, they weren't 64-bit any more. That would be too logical and sensible. The content processes remained 32-bit. I think the reason they didn't change the content processes to 64-bit was to try to keep IE10 compatible with all the existing browser plug-ins.

Anyway, part of IE10 (the manager process that controls the top-level window) is 64-bit and the rest of it (the content processes that render the HTML) is 32-bit. Now this might seem a tiny bit crazy because on Windows a 32-bit executable can't load a 64-bit DLL and, vice versa, a 64-bit executable can't load a 32-bit DLL. This is the very reason why there were separate 32-bit and 64-bit versions of IE in the first place!

So what was actually happening to my tests when they were using the 64-bit Selenium Internet Explorer driver?

The tests were sending key presses to the browser. The sending of a key press is done using a hook. The IE Driver sends a 'key down' message, followed by the name of the key, followed by a 'key up' message. It does this for each key press. Because the way these messages are sent is asynchronous, the driver has to wait to make sure that the 'key down' message is processed first so that the key presses don't happen out of order. The driver does this by listening for the 'key down' message to be processed before continuing.

In 64-bit IE10 the hook can be attached to the top level manager process (because that part is 64-bit) but the hook then fails to attach to the content process (because that part is 32-bit).  

So the 64-bit manager process sends a key press, then listens to hear whether or not the 'key down' message was received by the 32-bit content process. But because the 32-bit content process can't load a 64-bit DLL, it never responds to say "Yeah I've dealt with the 'key down' you sent". Which means the manager process times out waiting for the content process to respond. This time-out takes about 5 seconds and is triggered for every single key press.

The resulting effect is that the IE driver types 1 key every 5 seconds. So if your test data contains fancy long words like "inexplicably" it's going to take a whole minute to type that string in. You know your automated tests are seriously broken when a human can perform the same test in less time than it takes the test script.  

This issue is at the heart of the Selenium 64-bit Internet Explorer Driver and is certainly never, ever going to be fixed, especially given that Microsoft intends to discontinue all support for legacy versions of IE from January 12th 2016.

Fortunately I was lucky, and the work-around in my situation was simply to roll back to using the 32-bit version of the IE Driver.
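If your tests happen to be driven from Python, pinning the 32-bit driver explicitly looks something like this (the path below is hypothetical; point it at wherever your 32-bit IEDriverServer.exe lives):

from selenium import webdriver

# Explicitly use the 32-bit IEDriverServer.exe
driver = webdriver.Ie(executable_path="C:\\drivers\\32bit\\IEDriverServer.exe")
driver.get("http://www.example.com")
driver.quit()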

Beware the Selenium 64-bit Internet Explorer Driver. Apparently it can't handle taking screenshots either, for exactly the same 32-bit-trying-to-talk-to-64-bit reason.

Tuesday, 20 October 2015

Automating bacon sandwiches

I've recently been lucky enough to be involved with a new software development project from the very start. One of the advantages of being the first Test Engineer on the project was that I was able to help implement and set up test automation from the very beginning. Frequently, software development projects see test automation as an after-thought and try to implement it later, when the software is already quite advanced. This results in automation efforts that are always trying to 'catch up' to development, which can significantly increase the amount of time-consuming manual testing required.

I have recently been reading Experiences of Test Automation by Dorothy Graham and Mark Fewster and found this book to be fantastic. It contains many case studies and lets the reader share the experience of how other teams handled test automation. It explains not only what went well but also what went wrong.

Some of the test automation challenges we have already faced on my new project include:

  • Ensuring automated testing is included as part of each user story and completed for every release.
  • Ensuring that each automated test runs independently of other automated tests so that when a test fails it can be run alone and the failure observed in isolation.
  • Challenges surrounding running automated tests in the cloud in different browsers.
  • Challenges about what should be an automated unit test and what should be an automated UI test, and avoiding duplication of effort between each level of automated testing.
  • Challenges involving moving automated test code between repositories.
  • Keeping the test suite as "unbrittle" as possible to ensure test failures are worthy of the time spent investigating and debugging the tests.

It's fair to say that test automation on any project is a full-time job which requires a significant amount of effort to implement and maintain. Automated tests are code, and as such they should be subject to the same rules already applied to development code, e.g. stored in version control, code reviews for pull requests, etc.

Every Friday morning in our office a company-wide bacon sandwich order is placed. Yes, I know this sounds awesome. It really is awesome.

The process for this bacon sandwich order is as follows: An email is sent with a link to a form where orders for bacon sandwiches are collected. The cut-off time for placing an order is 9am. The list of sandwich orders is emailed to a local sandwich shop, which then starts preparing the sandwiches. One person who has placed an order is then chosen at random (using a random number generated at https://www.random.org/) to collect the sandwiches. A second email is sent with the name of the person who is collecting the sandwiches that morning. Everyone takes their sandwich money over to the collector's desk and pays. The sandwich collector then goes and picks up the sandwiches, which usually arrive around 10:00am.

When deciding which tests to automate, one criterion commonly used is to identify simple repetitive tasks that are performed often. This morning, while completing my bacon sandwich order form, I realised that this was a relatively simple task that I repeat every Friday morning. As so much test automation activity had been going on recently on my project, I decided I was going to attempt to automate placing my bacon sandwich order in the simplest way possible.

I always order the same sandwich (bacon and egg on ciabatta). I looked back through past emails with the link to the web form and saw that it was rare for the sandwich-ordering URL to change. I wanted to automate this task really quickly, as I only had 15 minutes until the cut-off time, and I knew from experience the fastest way to do this would be using Python and WebDriver.

So this is what I did:

1) Downloaded and installed Python 3.5.0 from https://www.python.org/downloads/

2) Added Python to the PATH environment variable. This was quite easy to do: I just went to the Advanced tab of System Properties on my PC, clicked the Environment Variables button, edited "PATH" and typed ;C:\Python35 on the end of the string.

3) Opened a Git Bash terminal window and changed directory to /c/Python35

4) Installed Selenium by typing "python -m pip install selenium". (Note: pip is the package manager that Python uses to install and manage packages. The "-m" stands for "module", not "magic".)

5) Opened IDLE (Python's Integrated Development Environment) from my Start menu.

6) In IDLE, selected File > New

7) Wrote the following basic script.
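A minimal version of such a script (assuming Selenium's Python bindings) would be:

from selenium import webdriver

# Open Firefox and load a page to confirm WebDriver is wired up correctly
driver = webdriver.Firefox()
driver.get("http://www.python.org")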

8) Then saved this file as bacon.py inside the folder C:/Python35

I tested this basic script by typing "python bacon.py" into the Git Bash terminal window. What happened then was that a Firefox window opened up and loaded http://www.python.org.

Excellent! I now had a very basic browser automation set-up running on my PC. I set about writing the script which was going to order my bacon sandwich.

The first thing I did was modify the URL in my script to open the bacon sandwich order form. Our actual order form is at a public URL, so for security reasons (we don't want the internet ordering a billion bacon sandwiches through our order form next Friday) I am going to use http://www.bacon.com as the URL in my example to protect the identity of the actual sandwich ordering form.

The next thing that the script needed to do was click on the text input box for name and type in my name.

This was the name input box's element tag on the ordering page.
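Reconstructed for illustration, it would have been something along these lines (the id "name-input" is hypothetical, invented for this write-up):

<input type="text" id="name-input" name="name">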

The easiest way to locate it was by its id.

I added this to my script....
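A sketch of that addition, using Selenium's Python bindings and the hypothetical id above:

# Find the name input box by its id and type my name into it
name_box = driver.find_element_by_id("name-input")
name_box.send_keys("My Name")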

I saved and ran my script again. It opened Firefox, navigated to the ordering page and typed my name into the input box.

The next question on the form was 'Can you collect the sandwiches today?' and underneath this question there were two radio buttons labeled "yes" and "no".

The "yes" and "no" radio buttons each had their own element tag with its own unique id.
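Reconstructed for illustration (again with hypothetical ids), the two elements would have looked something like this:

<input type="radio" id="collect-yes" name="collect" value="Yes"> Yes
<input type="radio" id="collect-no" name="collect" value="No"> No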

As the ids were unique, for simplicity I decided my script was going to click on the id for "no", as I was busy this morning with meetings at 9:30am and 10:30am which would prevent me from collecting the sandwiches.

By the time I finished writing my script it looked like this...
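Reassembled as a sketch (using the stand-in URL from earlier and the hypothetical ids above; the submit button's id is also invented), the finished script would have looked something like this:

from selenium import webdriver

# Open the sandwich order form
driver = webdriver.Firefox()
driver.get("http://www.bacon.com")

# Type my name into the name input box
driver.find_element_by_id("name-input").send_keys("My Name")

# Answer 'no' to collecting the sandwiches today
driver.find_element_by_id("collect-no").click()

# Place the order
driver.find_element_by_id("submit-button").click()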

I ran my script and approximately 2 seconds later, my order was automatically placed. I really liked how quick and simple it was to implement and run a script that performs a simple task. Now every Friday all I have to do is type "python bacon.py" on the command line to place my sandwich order.

Sometimes it's not necessary to apply layer upon layer of fancy testing frameworks, use complex IDEs and hide code behind abstraction (through the page object model, etc.) to automate simple tasks. Test automation can be simple and still be effective. It is much better for a project to have a small set of simple, curated automated tests than to have no automated testing at all. Don't forget, it's also a really good idea to start writing automated test code at the same time as the application code.

This post was also published on my company's blog, Scott Logic Blog

Monday, 24 August 2015

How to develop psychic testing powers when dealing with software that has no requirements.

Writing good requirements for software development might seem like an easy task on the surface, but it's actually much harder than many people imagine. The two main challenges that arise when writing requirements are, firstly, that requirements can change frequently and, secondly, that even if you manage to capture a requirement before it changes, it's really easy for someone to completely misunderstand or misinterpret it. Good requirements are absolutely crystal clear, with no room for interpretation whatsoever.

So what happens when bad requirements happen to good testers? Unfortunately, testing software which has poorly documented requirements is far more common than it should be. It's hard to describe the internal thought processes that take place, but it's kind of similar to invoking psychic powers. You have to know detailed information about all the things you possess no knowledge of.

Let's imagine the very worst-case scenario: you have been asked to test something that has absolutely no written requirements. None, nada, rien, nothing, absolutely no documentation whatsoever.

Warning! This kind of testing is pretty risky; before invoking magic psychic testing powers you should always inform a responsible adult (usually the Project Manager or the Test Manager, should you be lucky enough to have one) about the lack of requirements and the associated risks.

Some of the most common risks encountered will be:

* High-priority bugs will be found late. This is because by the time the person doing the testing gains decent knowledge of how the software is actually supposed to work, time will have passed and the release date will be closer.

* The number of 'as designed', 'working as intended' or 'not a bug' defects will significantly increase as testers start making educated guesses as to what might be a bug. 

* Product knowledge will probably only exist inside the heads of 1 or 2 knowledgeable people. The workload of these people will increase as testers try to extract this information from them. It's very rare for knowledge holders to be available to answer questions all the time.

* Test automation will either grind to a halt or happen very late. How can you write automated regression tests if you don't know how the product is supposed to work? The simple answer is you can't. 

So once you have told the responsible adult in charge about the risks of testing with no requirements, they may say something along the lines of 'We can't write requirements because no-one knows how it works.' It could be a legacy product you're being asked to test. It could be that the person that created it left the company without writing any kind of documentation. You may even be told 'We simply don't have time to write any requirements'.

What happens now? Don't panic, I'm going to try to guide you through the most efficient, pain-free way to test the unknown. The following approaches can help maximise testing efforts while also giving the illusion that you have developed some kind of psychic testing ability.

At the most basic level any testing carried out on a requirement-less project will fall into two categories.

Category 1 - Obvious things - I'm certain if I do this, the software should do that.

Category 2 - Mystery things - I have no idea what the software is doing, why it is doing it or even if it should be doing it at all. 

An example of a category 1 obvious thing would be a text input box that says 'email address' with a button below it that says 'subscribe to newsletter'. A fairly safe assumption would be that entering a valid email address and clicking the button will subscribe the email address to a newsletter.

A category 2 mystery thing might be an unlabelled text input box with a button below it that says 'Start'. What is being started? What should happen when it starts? How do I know if it actually started? What should be typed into the input box?

A good tester will explore the software and be able to draw from a number of sources to make educated guesses about expected behaviour. The points listed below have all worked for me in the past when I have been expected to test unknown entities.

* Try to test important and critical features first. However, without requirements to work from, it may not be immediately obvious which features are critically important, so start with the obvious functionality, which is basically everything that falls into category 1.

* Break the software down into smaller areas or sections. Keep track of all the obvious things that were tested in each of these areas and what the results were. This information can be used as the starting point to form regression tests.

* While you are breaking the software down into smaller component pieces and testing all the obvious things, questions will come to mind about the features that fall into mystery category 2. Compile a list of these questions.

* All the time you are doing this, rely on your instincts! If something feels like a bug because it's acting in an unexpected way then it's highly likely it's a bug - even if that bug might turn out to just be a poor design choice. Anything that detracts from the overall quality of the software should be considered a bug.

* Seek answers to the mystery questions. How does the functionality compare to the previous version or to a competitor's product? These insights can give valuable clues as to whether or not something is working correctly. Learn as much as possible about the product's functionality from reliable sources.

* Ask developers how they expect the software will behave. If you don't have any requirements to test against, it's likely your developers didn't have requirements to develop against either but they should at least be able to tell you what kind of functionality they added.

* Always keep notes while exploring and learning about the software. Document unguessable things once you discover how they are supposed to work. Trust me, it will save a great deal of time later when you have to revisit complex areas and remember what's going on. There is also bonus value in having notes should a new tester join your project and you need to get them up to speed quickly, or if you ever find yourself having to hand over your testing work to someone else.

* If doubts ever arise as to whether or not to log a bug, just log the bug. Once it is entered into a defect tracking system, people are usually very fast to point out false positives, and it only takes a moment to close them down.

* Try to confirm your test results with anyone that already holds expert knowledge of the product. Remember all your test results are still just assumptions until they are confirmed or denied.

Whatever you do, don't give up or get disheartened. While a lack of well-documented requirements and user stories certainly increases the difficulty of testing, it doesn't make testing impossible. Always do the best you can with the tools and information you have available to you.

Thursday, 20 August 2015

Pinteresting Test Automation - JavaScript Edition

It's been a roller-coaster of a month since my last blog post. In the last four weeks I have successfully managed to change jobs and learn JavaScript! I started on JavaScript the same way as I did with Python, by completing the free Codecademy course. If you test things and you want to learn basic programming, you should definitely give it a try.

Some initial observations made while learning JavaScript:

1) The learning process was much faster than last time. Knowing a first language definitely helps with learning a second. My first FizzBuzz in Python took 30 minutes, but my first FizzBuzz in JavaScript took 3 minutes.

2) White space is not an enemy in JavaScript land. Viva the curly bracket!

3) Forgetting semicolons isn't nearly as bad as I thought it would be.

I've also learned absolutely loads of things about test automation with JavaScript in the last couple of weeks, which is the main reason for this blog post (hooray!).

One of the first things I did was install Node.js, which comes with a truly awesome package manager called npm. The package manager made it really easy to try out all of these testing frameworks. Beware if you're on Windows 10, however: some tweaking was required to get it working correctly (Stack Overflow is your friend).

I discovered that there are many different testing frameworks available for writing tests with JavaScript. Actually, it's not just testing frameworks; there are many, many JavaScript frameworks in general. Far too many of them. There is a joke among developers that a new JavaScript framework is born every sixteen minutes!

Testing frameworks I encountered and explored were:

* Jasmine

* Mocha

* Chai

* Cucumber.js

* Selenium WebDriver JS

* Nightwatch.js

* Protractor.js

Some of these frameworks are specifically for unit testing, some are for end to end testing. Some depend on each other, some are agnostic and framework free.

I drew a little ASCII diagram to try to visualise them. Each framework is listed left to right in a box, with either (u) for unit testing or (e2e) for end-to-end testing. Each framework box has everything it uses listed underneath it.

These test frameworks increase in complexity from left to right. Jasmine standalone is a simple unit test framework that just requires JavaScript. Protractor is a more complex end-to-end test framework that requires Jasmine (or Mocha and Chai, or Cucumber) and uses both WebDriver and Node.js.

I had a play around with Jasmine standalone, but as this is a unit test framework, I found I had to actually write some JavaScript code before I had anything to run my tests against. Unit tests are usually written by the developers that are developing the application. As a Test Engineer, the tests I need to write are a mixture of acceptance tests, integration tests and end-to-end tests.

* Acceptance test - Determines if a specification (also known as a user story) has been met.

* Integration test - Determines if a number of smaller units or modules work together.

* End to end test - Follows the flow through the application from the start to the end through all the integrated components and modules.

I looked at Protractor next. Protractor is a testing framework which has been around for a couple of years. I saw that the tests were formatted in a BDD (Behaviour Driven Development, not Beer Driven Development) style.

The syntax Protractor uses is based on expect statements, along the lines of 'expect something to equal something else', rather than the more familiar verify/assert statements I encountered when I was writing Selenium WebDriver tests in Python. Protractor's main strength is that it was created specifically to test AngularJS applications. It supports element location strategies for Angular-specific elements. If you need to test anything created in AngularJS, Protractor is the King of the Hill.
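A single hypothetical expectation, just to show the style (element and by are globals that Protractor provides):

// Expect the page's h1 text to equal 'Welcome'
expect(element(by.css('h1')).getText()).toEqual('Welcome');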

I then moved on to looking at Nightwatch, which felt closer in syntax to the Selenium WebDriver tests I had previously written. Nightwatch is newer than Protractor, having made its first appearance on GitHub in February 2014. I found a good tutorial for getting started with Nightwatch, which also has a demo on GitHub.

After a bit of playing around with it, I decided I was going to re-write my Python Pinterest test in JavaScript with Nightwatch.

I went through all the Nightwatch asserts and commands and tried to include as many of them as possible in the sample test I wrote.

It was very reassuring to see first-hand that JavaScript and Nightwatch are capable of carrying out all of the tasks possible with Python and Selenium WebDriver.

Anyway, the test example I wrote with JavaScript and Nightwatch appears below. One of the main advantages I found of writing within a testing framework was that creating the tests was actually much faster. The amount of text I had to physically type in was less than if I hadn't been using a test framework. Also, instead of faffing around with variable assignments, a lot of the nitty-gritty of what was going on in the background was hidden away from me, allowing me to just focus on writing the test.
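A trimmed sketch of that kind of Nightwatch test (the real one exercised many more asserts and commands, and the selectors here are placeholders):

module.exports = {
  'Load the Pinterest home page': function (browser) {
    browser
      .url('https://www.pinterest.com')     // navigate to the site
      .waitForElementVisible('body', 5000)  // wait for the page to render
      .assert.urlContains('pinterest')      // confirm where we landed
      .end();                               // close the browser session
  }
};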