Iterative Development, Manual Testing and Frustrations

When I started my transition from traditional to iterative development six years ago, my core focus was on moving the development work into the iterative model: helping developers understand the challenges, accept the new reality, and carry on. We conducted sessions for developers on what agile design is and how to write code that can adapt to change.

Over the years, we spent far more time with developers than with test engineers, and I failed to understand the impact of manual testing on iterative development.

The objective of an iteration is to deliver working software every two weeks (or three, depending on your iteration length).

A well-planned iterative project starts from the expectation that people (developers, users, executives) are not very good at figuring out how they will actually use the product, estimating costs, prioritizing features, or anticipating the problems they will actually encounter during development. It is designed to help us manage the risks associated with errors and omissions in our assumptions and estimates.

Even though we start with the expectation that things are going to change often, I have seen test engineers grow frustrated over time. I have heard them say the process is not disciplined, that there is no process at all, and finally that agile development simply doesn't work for us.

I wondered about this problem for some time before I realized the likely reasons for their frustration.

We are all human. If I write a Statement of Work (SoW) in the same format a few times, I get frustrated, so I change the format, try adding new content, and so on. The same applies to a test engineer. If someone sees the same application every week, testing new content plus everything previously developed, the job becomes monotonous over time. On top of that, changes keep arriving for features that were already working and tested. Whenever there is a change, there is a possibility that previously working functionality has a new bug, and it re-enters the cycle of testing, fixing, and retesting.

The other point is that if you look at the same application for long enough, your eyes become tuned to it. Every new build looks the same, and defects become easier to overlook.

In a product development environment, I am sorry to say, testing efficiency goes down over time rather than improving.

When the frustration mounts, test engineers make the wrong enemies, lose the respect of the wrong people, and cut themselves off from opportunities to address serious, chronic problems in software development.

Reference: XP, Iterative Development and Testing Community

I am not arguing against test engineers here. I have discussed this with multiple test engineers and I understand their frustrations. The objective of this post is to examine the issues with manual testing in iterative development.

OK, you have talked about the problem. What is the solution?
Let us take a scenario where a feature is developed in a particular iteration. At the demo, the Product Owner realizes some changes are needed, so you add them to the backlog. The work re-enters an iteration whenever those changes get prioritized. Later, while working on some other requirement, you realize you need a change in a feature developed a few months back… This cycle never ends (especially in new product development).

If we are doing manual testing, the manual test cases also have to change whenever the requirements change. What I realized is that as product development moves forward, these manual test cases go out of sync, irrespective of the effort you put in.

Why? First, with the rate of change in new product development, it is very difficult for test engineers to keep pace and update the test cases. Secondly, development is dominated by developers: they talk to the Product Owners regularly, and if there is a change, they update the feature accordingly. Thirdly, test engineers keep seeing the same thing for a very long time; no human being can stay motivated to change the same feature's test cases a hundred times over a year. So the answer you get is: the requirements kept changing, so we thought it was better to update the cases once we got a stable version. And then it never gets updated.

Do we blame test engineers for this? No. As humans, it is very difficult to sustain the motivation to keep test cases in sync every single time.

Sounds very primitive, huh? Check out any projects that have been under development for the last year or so. You will be very lucky if you can find even one where the test cases are always in sync.

In summary, the following are the challenges that need to be addressed:
1. Keeping the documentation (requirements, test cases, etc.) up to date.
2. Communicating the changes to all members of the team.
3. Ensuring that the changes are tested completely.
4. Ensuring that the changes don't break existing features that work.

Every agile software development book, article, and blog post talks about automation. OK, if everyone mentions automation, then why are we not automating?

We started automating the user interface quite some time back. We have done extensive work using WaTiN for web applications, the Microsoft UI Automation library 2.0/3.0 and Windows messaging for Windows applications, and MSAA for automating Office-based applications.

OK, you have done automation extensively. Then why are you even writing this post?
First, all the automation work I have talked about was done after the feature was completed or we reached release status. In other words, it was not done as part of the iterations.

Why? Because the common mindset is that the feature will change over time: even if I make the test run now, the feature is going to change and I will have to change my test cases again (the same reasoning as with manual test cases).

What is the solution to this problem? How do I make sure that my requirements, acceptance criteria, and test cases are always in sync? Is it even possible?
Yes, it is possible. The paradigm is moving towards Acceptance Test Driven Development.

What is acceptance testing?
Instead of focusing the testing on what every click in the application does, the focus moves to whether the system behaves the way the customer expects.
1. It defines how the product owner wants the application to behave.
2. It enables the developers to know that they’ve satisfied the requirements.
3. It helps us build the right software.
4. It can be run automatically by anyone at any time.

OK, sounds good. But how does it solve the synchronization problem?
Specification by Example (executable specifications) and agile acceptance testing come to our rescue.

In his book Bridging the Communication Gap, Gojko Adzic says agile acceptance testing revolves around the following five ideas:
1. Use real-world examples to build a shared understanding of the domain.
2. Select a set of these examples to be a specification and an acceptance test suite.
3. Automate verification of the acceptance tests.
4. Focus the software development effort on the acceptance tests.
5. Use the set of acceptance tests to facilitate discussion about future change requests.
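
As an illustration of idea 2, here is a minimal sketch of a handful of agreed real-world examples doubling as both specification and acceptance test suite. The free-shipping rule below is invented for this post, not taken from any particular project:

    Rule: orders of $100 or more ship for free.

    Order total    Shipping charge
    $99.99         $5.00
    $100.00        $0.00
    $250.00        $0.00

Each row is a concrete example the team has agreed on; together the rows specify the rule, and each row can be automated as an acceptance test.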

Tools like Concordion, StoryTeller, or FitNesse's Slim help you capture requirements as tests. The requirements are captured in wiki- or HTML-style pages using concrete examples. Testers, along with customers, define the acceptance criteria for the tests, and developers write the test fixtures to make the tests run.

Figure: acceptance criteria to test definition.

Another advantage is that when customers define the tests, the tests reflect how end users will actually use the system rather than system integration scenarios. They can be written in a language everyone understands.

How do they really work?
From Concordion's documentation:
Specifications are written in simple HTML. Developers instrument the concrete examples in each specification with commands (e.g. “set”, “execute”, “assertEquals”) that allow the examples to be checked against a real-life system. The instrumentation is invisible to a browser, but is processed by a Java/.NET fixture class that accompanies the specification and acts as a buffer between the specification and the system under test.
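
A minimal sketch of that pairing, in the style of Concordion's classic JUnit 4 "Hello World" example (the greeting rule here stands in for real business logic):

    <!-- HelloWorld.html: the specification, readable in any browser -->
    <html xmlns:concordion="http://www.concordion.org/2007/concordion">
    <body>
      <p>
        The greeting for user
        <span concordion:set="#firstName">Bob</span> will be:
        <span concordion:assertEquals="greetingFor(#firstName)">Hello Bob!</span>
      </p>
    </body>
    </html>

    // HelloWorldTest.java: the fixture that bridges the specification
    // and the system under test (inlined here for brevity).
    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;

    @RunWith(ConcordionRunner.class)
    public class HelloWorldTest {
        // Invoked by the assertEquals command in the specification.
        public String greetingFor(String firstName) {
            return "Hello " + firstName + "!";
        }
    }

Concordion pairs the two by naming convention: the fixture class sits in the same package as the HTML file and shares its name (with a Test suffix). When the test runs, each instrumented example is checked and the HTML is re-rendered with passes in green and failures in red.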

These tests can become part of your build process and run every night. If there is a change in the requirements, the tests will fail: the requirements need to be updated with new examples, the acceptance criteria change, and the tests have to be updated to match. In this way, the requirements and test cases are always in sync.
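
As a sketch of the wiring (the suite name is hypothetical, and it assumes JUnit 4 fixtures like the one above), the acceptance tests can be grouped into a plain JUnit suite that a nightly CI job executes, for example via mvn test:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Groups the Concordion fixtures so a scheduled CI job can run
    // the whole acceptance suite in one pass.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        HelloWorldTest.class
        // ... further acceptance test fixtures
    })
    public class NightlyAcceptanceSuite {
    }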

Acceptance testing frameworks enable customers, testers, and programmers to learn what their software should do, and to automatically compare that to what it actually does. They compare customers' expectations to actual results. When we work this way, the documentation is always live.

Gojko Adzic, in his post Anatomy of a good acceptance test, says that in order to be effective as a live specification, acceptance tests have to be written in a way that enables others to pick them up months or even years later and easily understand what they do, why they are there, and what they describe. He offers some simple heuristics to help you measure and improve your tests as a live specification.

The five most important things to think about are:
1. It needs to be self-explanatory
2. It needs to be focused
3. It needs to be a specification, not a script
4. It needs to be in domain language
5. It needs to be about business functionality, not about software design
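
To make heuristic 3 concrete, here is an invented contrast (the workflow is hypothetical). A script buries the intent in UI mechanics:

    Open /orders, click New, type 100 into Amount, click Submit,
    verify the Shipping field shows 0.

A specification states the rule directly:

    An order of $100 qualifies for free shipping.

The first breaks whenever the UI changes; the second stays valid for as long as the business rule holds.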

Following a practice like this helps us complete what is expected and reach a Done stage every iteration. The documentation is always live, and one can understand the functionality even after six months or a year.

It allows testers to collaborate more with developers and move the dull work off their plates. It also eliminates test engineers' frustration about requirements not being captured or changes not being communicated.

In an ideal world, you complete the features and reach a Done stage within the two weeks (or whatever the sprint length is). In the real world, maybe one week of stabilization for a two-week iteration is not a bad idea. Either way, I do not want things to spill over into further iterations while still being called done.

If you are doing framework development, the challenge is: how do I do this for a framework? In my opinion, you are developing the framework to solve a problem in a domain or vertical. Build your requirements, acceptance criteria, and test cases for that specific vertical rather than for the framework itself. Why? A framework alone doesn't provide any business value; implementing it for a vertical is what brings business value, and that's where our focus should be.

If we do not do this, can we still be successful? Can you expect a high level of testing efficiency after a year or so? By God's grace, it is possible, but there is no guarantee. If you are a manager and want any predictability in product quality towards the end of the cycle, you definitely need to implement automated acceptance testing in your projects. Otherwise, be prepared to hear the words déjà vu from your manager 🙂

Happy Reading!!!
