Whitepaper - Testing Automation in Context - Belatrix Software


Whitepaper - Testing Automation in Context


Introduction

Automation promises to find more defects with less effort. However, it isn't a substitute for the human capacity to design and execute a well thought out, high quality process.


The Myth:

Automation is suddenly in high demand. It's a must for every new project. The mystique of automation is so strong that it even seems that you no longer have to write test cases, run manual testing, or do impact analyses.

Automation promises to find more defects with less effort.

Reality Check:

Automation can improve quality activities and lead to higher testing efficiency. However, it isn’t a substitute for the human capacity to design and execute a well thought out, high quality process. This paper will discuss the most important aspects of automation, and help you determine when and how to use it effectively. The core message is that you should automate gradually, only after you have a solid test case repository with all of the possible scenarios discovered, and a well-defined QC and QA process with all testing objectives identified. Tasks should be estimated and progress objectives well defined before moving ahead with an Automation process.

The Facts: Determining When to Use Test Automation

The key questions to ask before determining whether or not to pursue an automation strategy are:

  • How effective are automated tests at finding defects? A suite of automated test cases that always passes will not be of any benefit.
  • What’s the human effort required to create and maintain the automated test cases? Depending on the strategy used, maintaining an automated test suite very often requires the same effort as running the tests manually, or more. In some instances, the time you save in execution is less than the additional time you spend keeping the automated tests running properly.
  • How do you calculate the true costs of automation? Ideally, the sum of the individual costs to automate should be less than the cost of performing manual testing (a rough break-even sketch follows this list). Some of the costs that should be factored in include:
    • Research necessary to build or buy the automation solution.
    • Obtaining or constructing an automation solution / framework / tool.
    • Learning curve required by the automation tool.
    • Maintaining the Automation framework.
    • Automation test case / scenario maintenance. Covering all the implicit and explicit validations of a functional test through automation usually requires more code and effort than initially estimated.
    • Impact of change. Any change to the system affects the automated tests just as it affects the system’s other modules, so the tests need the same impact analysis and maintenance.
    • Test failure verification. When an automated test fails, you first need to verify whether it was a real failure or whether it was a failure of the test itself.
    • Test success validation. When an automated test passes, consider whether it is actually effective at identifying defects.
    • Test data. Estimate the time required to create and maintain data sets and test contexts.
  • What kinds of tasks are good candidates for automation? Simple and repetitive tasks are good candidates to automate, as well as those that include large volumes of parametric test cases and/or large volumes of outputs to analyze. In those cases automation not only reduces the effort, but improves accuracy as well.
  • How do you determine what to prioritize? Automation should not be prioritized over requirements validation and manual test case maintenance. Automation is a plus, but never the most important or urgent quality task. Faster feedback should always be the priority.
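
A rough way to reason about the true-cost question above is to compare the cumulative cost of automating against the cumulative cost of running the same tests manually, over the expected number of test runs. The sketch below is purely illustrative; the function and every figure in it are hypothetical placeholders, not benchmarks.

    # Illustrative break-even estimate for automating one test suite.
    # All numbers are hypothetical; substitute your own estimates (in hours).
    def break_even_runs(build_cost, maintenance_per_run, manual_run, automated_run):
        """Number of runs after which automation pays off, or None if it never does."""
        saving_per_run = manual_run - (automated_run + maintenance_per_run)
        if saving_per_run <= 0:
            return None  # automation never pays back under these estimates
        return build_cost / saving_per_run

    # Hypothetical estimates: 120 h to research and build the framework,
    # 1.5 h of maintenance per run, 8 h to run manually, 0.5 h to run automated.
    print(break_even_runs(120, 1.5, 8, 0.5))  # 20.0 runs to break even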

Best Practices in Implementing a Test Automation Approach

Assuming you address all of the previous questions and determine that Automation is the best course of action, what’s next? The next step is to ensure that the process to implement Test Automation is effective. This can be done by following these guidelines:

  • Validate manually first, then document the identified test cases, and then automate. An "automate first, then code, then run the automated tests" approach is hardly ever a reality in UI functional testing. When using automation, it is hard to get the same coverage you obtain with manual testing, and the likelihood of writing a UI automated test case before the feature is implemented, without needing to modify that test case afterwards to make it work properly, is really low. Providing rapid feedback to the developers, so they can fix any problem without compromising the iteration objective, is therefore the priority. So validate manually first, then document the identified test cases, and then automate. If you insist on trying to automate first, test second, and report the results last, and you don't complete those tasks, you'll end up with features that not only lack working automated tests but also lack testing altogether.
  • Treat your automation project as you would a development one. Apply the same configuration management best practices you would use for development.
  • Design your test automation framework to be easy to debug. Consider what you'll be doing with it most of the time and remember that one of the purposes of the framework is to be helpful. Make it simple and easy to maintain. Don't try to look smart with your code.
  • Don't tie your automation solution to a specific framework or tool. Try to build a solution that allows you to mix different tools, libraries, frameworks, etc. Usually, this is only possible using a programming language.
  • Use the same programming language. Write the automation in the same language as the code under test. If, for example, a system is coded in C#, it would make little sense to use Ruby to automate the tests. Ruby is fantastic, but the cost and implications of the product owner having to find programmers for an additional language will outweigh the benefit of using it.
  • Design the automation solution so developers can easily run it in their local environments. They can then detect a defect before pushing the change to the source code versioning repository.
  • Using BDD (behavior-driven development) frameworks like StoryQ improves how test validations and their results are communicated, and can associate tests even more clearly with acceptance criteria and requirements (see the BDD sketch after this list).
  • Practice continuous testing. Remember that feedback velocity is king. To reduce the time gap between developers delivering a feature and the test results coming back to them, it is important to detect that a change was introduced into the system and then execute the tests automatically so the results arrive as soon as possible. This is possible using a continuous integration server.
  • Hit the system with automated tests at as many abstraction levels as possible, since this yields a better cost-benefit relationship. But keep in mind that the sooner you find a defect, the less costly it will be to fix. Even if you get feedback about a defect after it has already been detected at a higher abstraction level, that information will help you get closer to the problem.
  • Checking the log of an exception logger (one that catches any exception) after each action against the system is a good practice for identifying defects and matching them with high-level system actions (see the log-check sketch after this list).
  • Wrapping your tests with a unit testing framework will help you get the most out of continuous integration servers. Frequently, though, you'll have to go beyond what a unit test framework offers, simply because those frameworks were designed for coding unit tests. Depending on the abstraction level your test hits, you may want to perform several validations within the same test, because reaching a data context or a specific screen of the UI may require a lot of steps. When a test fails, it's a must to know the scenario and each one of the validations that failed, yet unit test frameworks are designed to abort and consider the entire test failed as soon as one of its validations fails. In these situations, think outside the box and don't force your tests to fit the unit test framework's rules (see the soft-validation sketch after this list).
  • Apply the DRY (Don't Repeat Yourself) principle to the test steps as much as possible. If you have 20 test cases that use the same first 15 steps, abstract those steps somewhere and give them a name, so the definition of those 15 steps lives in a single place and each of the 20 test cases executes them just by invoking that action/name. That group of steps should not contain validations; the purpose of abstracting them is to make test case maintenance easier and to reuse a known group of actions. A good example is redirection actions: actions, or groups of actions, that take you to a specific place in the UI no matter where you were, and that are shared by all the test cases (see the navigation-helper sketch after this list). Don't use any pattern, approach, or methodology if you realize that maintaining it is expensive and causes more headaches than benefits. Don't be religious about patterns or methodologies; use whatever you find most efficient.
  • The creation of test harnesses / scaffolding is part of the automation activity. Automating and maintaining a system requires not only tests and validations, but also deployment scripts, database creation and data insertion scripts, etc.
  • Reduce the interdependency between functional and data test cases to zero. Ensure that every test case has its own exclusive test data set. By doing this you can hard-code the data used in the test case steps and validations, so maintenance is lower, and you also avoid cascading failures, because test cases do not depend on data produced by one another (see the data-isolation sketch after this list).
  • On each build, exercise installing the application from scratch. Clear the system files, download them from the repository, build them, and deploy them again. Delete the database and run the database creation scripts again, including the data insertion scripts.
  • Distinguish between Sanity Testing and Smoke Testing and apply both.
    • Smoke Testing / build verification test: Run tests that tell you whether the application can be used for further testing.
      • For example:
      • Check if the build was successful.
      • Check if the web page loads (in the case of a web system) without exceptions.
      • Validate if you can log in.
    • Sanity Testing / tester acceptance testing: Run tests at each hierarchy level or group of features to determine whether it is worth running deeper tests on each module or group of features. Consider it a suspension criterion for each group of tests.
      • For example:
      • Level 1: If you have a section named "reports" on your web UI and you cannot reach it, then it makes no sense to try to execute the tests related to reports.
      • Level 2: If you cannot get a report, it makes no sense to try to export it to PDF.
  • Distinguish between "failed" and "not run" tests. A "not run" test is one whose sanity test failed, so the logic decided not to run it (see the sanity-skip sketch after this list).
  • Design your automation solution to allow you to run any test case, any test case scenario, test sets defined by a run strategy, or any combination of those.
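
To illustrate the BDD point above: StoryQ is a .NET framework, but the same given/when/then structure can be sketched in Python with the behave library. The scenario text, step wording, and the FakeApp stand-in below are hypothetical placeholders, not part of any real application.

    # steps/login_steps.py -- behave step definitions for this Gherkin scenario
    # (login.feature):
    #   Scenario: Valid user can log in
    #     Given a registered user "alice"
    #     When she logs in with a valid password
    #     Then the dashboard is displayed
    from behave import given, when, then

    class FakeApp:
        """Stand-in for the real application driver; purely illustrative."""
        def login(self, user, password):
            return "dashboard" if password else "login"

    @given('a registered user "{name}"')
    def step_registered_user(context, name):
        context.app, context.user = FakeApp(), name

    @when('she logs in with a valid password')
    def step_log_in(context):
        context.page = context.app.login(context.user, "S3cret!")

    @then('the dashboard is displayed')
    def step_dashboard(context):
        assert context.page == "dashboard"

The plain-language scenario text doubles as living documentation that maps each test directly to its acceptance criterion.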
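
One way to realize the exception-log check mentioned above is to wrap every high-level action so that it inspects the application's exception log after executing. The log path and the way actions are passed in are hypothetical; the idea is simply "act, then look for new exceptions".

    # Sketch: run a high-level action, then verify that the application's
    # exception logger recorded nothing new. The log location is hypothetical.
    from pathlib import Path

    LOG = Path("/var/log/myapp/exceptions.log")

    def run_action(name, action, *args):
        before = LOG.read_text().count("\n") if LOG.exists() else 0
        result = action(*args)
        after = LOG.read_text().count("\n") if LOG.exists() else 0
        if after > before:
            raise AssertionError(
                f"Action '{name}' produced {after - before} new exception log entries")
        return result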
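
The need for several validations within one test, described above, can be met with a simple "soft validation" collector: it records every failed check and reports them all together at the end, instead of aborting at the first failure the way most unit test frameworks do. This is a generic sketch, not tied to any particular framework; the report data is hypothetical.

    # Soft-validation collector: one expensive setup, many validations,
    # and a single combined failure report at the end.
    class Checks:
        def __init__(self):
            self.failures = []

        def check(self, condition, message):
            if not condition:
                self.failures.append(message)

        def verify(self):
            if self.failures:
                raise AssertionError("; ".join(self.failures))

    def test_report_screen():  # runs under pytest or any xUnit-style runner
        report = {"title": "Sales", "rows": 0, "export_pdf": True}  # hypothetical result
        c = Checks()
        c.check(report["title"] == "Sales", "wrong report title")
        c.check(report["rows"] > 0, "report has no rows")
        c.check(report["export_pdf"], "PDF export not available")
        c.verify()  # reports every failed validation, not just the first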
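
The DRY guideline and its redirection example can be sketched as a shared, validation-free navigation helper that every test invokes by name instead of repeating the same steps. The driver object and its methods are hypothetical placeholders for whatever UI driver the project uses.

    # Shared group of steps, defined once: go to the Reports section from
    # anywhere in the UI. It performs actions only, no validations.
    def go_to_reports(driver):
        driver.open("/home")
        driver.click("menu")
        driver.click("reports")

    def test_monthly_report(driver):
        go_to_reports(driver)        # reused steps, maintained in one place
        driver.click("monthly")
        assert driver.page_title() == "Monthly report"

    def test_export_monthly_report(driver):
        go_to_reports(driver)
        driver.click("monthly")
        driver.click("export-pdf")
        assert driver.download_exists("monthly.pdf")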
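
Giving each test case its own exclusive data set, as recommended above, can look like the following pytest-style sketch: each test builds exactly the records it needs, with hard-coded values, and never reads data produced by another test. The table layout and values are hypothetical.

    import sqlite3
    import pytest

    @pytest.fixture
    def db():
        # A fresh, private database per test: no state shared between test cases.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, credit REAL)")
        yield conn
        conn.close()

    def test_credit_limit_for_new_customer(db):
        db.execute("INSERT INTO customers VALUES (1, 'ACME', 1000.0)")  # exclusive data
        credit, = db.execute("SELECT credit FROM customers WHERE id = 1").fetchone()
        assert credit == 1000.0

    def test_customer_rename(db):
        db.execute("INSERT INTO customers VALUES (2, 'Initech', 0.0)")  # exclusive data
        db.execute("UPDATE customers SET name = 'Initech LLC' WHERE id = 2")
        name, = db.execute("SELECT name FROM customers WHERE id = 2").fetchone()
        assert name == 'Initech LLC'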
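
The sanity-level suspension criterion and the "failed" versus "not run" distinction can be combined in a pytest-style sketch: a sanity check runs once per group of features, and if it fails, the deeper tests in that group are reported as skipped (not run) rather than failed. The reports_section_loads check is a hypothetical stand-in for a real navigation check.

    import pytest

    def reports_section_loads():
        # Hypothetical level-1 sanity check: can the Reports section be reached at all?
        return True  # replace with a real check against the system under test

    @pytest.fixture(scope="module")
    def reports_sanity():
        if not reports_section_loads():
            pytest.skip("Reports section unreachable: deeper report tests not run")

    def test_generate_report(reports_sanity):
        ...  # level-2 test: executes only if the sanity check passed

    def test_export_to_pdf(reports_sanity):
        ...  # reported as skipped ("not run"), not failed, when sanity fails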

Conclusion

Automation can improve quality activities and lead to higher testing efficiency. It's important, though, that it be used in a well thought out way. Used that way, it can deliver better overall results, productivity benefits, and cost efficiencies.