Today we expect more from the software we use than ever before, which is the number one reason performance testing has become so important. Research suggests that just a one-second delay in page load time results in 7% fewer conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. This translates into real dollars: if your site earns $100,000 per day, a 7% drop in conversions means you're losing roughly $2.5 million every year to that one second.
In this blog post I want to give a brief overview of performance testing and some of the factors to consider when conducting it. To start with, it's worth noting that performance testing is a form of non-functional testing, which examines the relationship between the application and its environment. It covers a range of test types, from load tests to stress tests and soak tests, each with its own objectives. It is an essential part of the development lifecycle: each new release should be tested to see whether new changes or features affect the performance of the application, positively or negatively.
"Good" performance is a matter of perception and depends on the purpose of the software. It may be acceptable for your business, for example, if an internal file transfer system takes a few seconds to work, but your customer-facing website needs to offer customers a seamless, lightning-fast experience. Similarly, the speed and stability of a mobile application will be key factors in whether customers actually use the app or uninstall it at the first opportunity.
It's important to start thinking about what constitutes acceptable performance at the start of your project, rather than waiting until it is ready to release. By building performance considerations into the early stages of product development, you improve the likelihood of your application meeting or exceeding expectations once it is live.
Ultimately, performance testing is about "humanizing the robot": we need to simulate the different ways people use the application. At the core of this is scripting, which emulates what humans are doing with the application and how they are interacting with it. However, not every test relies on scripting. It is also important to monitor the environment, then export and correlate the results, so we can understand what is happening on the server side.
For scripting, there are many tools available in the market to design and run scripts. Many of these are licensed commercial products, but there are also open-source options such as Apache JMeter, which can be used for load testing both static and dynamic resources. I recommend evaluating different tools and choosing the most appropriate one for your needs; your choice will depend on factors ranging from the communication protocol your application uses to how many concurrent users you want to simulate.
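To illustrate the core idea behind scripted load testing, the sketch below simulates a handful of concurrent virtual users and reports latency statistics. This is a minimal Python example, not a substitute for a real tool: the `fake_request` function is a stand-in I've invented for a real HTTP call, and the user counts and timings are arbitrary assumptions.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP call; sleeps 50-150 ms like a slow endpoint."""
    started = time.perf_counter()
    time.sleep(random.uniform(0.05, 0.15))
    return time.perf_counter() - started

def virtual_user(requests_per_user):
    """One simulated user issuing a fixed number of sequential requests."""
    return [fake_request() for _ in range(requests_per_user)]

def load_test(concurrent_users=10, requests_per_user=5):
    """Run all virtual users concurrently and collect every latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        batches = pool.map(virtual_user, [requests_per_user] * concurrent_users)
    return [latency for batch in batches for latency in batch]

if __name__ == "__main__":
    latencies = load_test()
    print(f"requests: {len(latencies)}")
    print(f"median:   {statistics.median(latencies) * 1000:.0f} ms")
    print(f"p95:      {sorted(latencies)[int(len(latencies) * 0.95)] * 1000:.0f} ms")
```

A dedicated tool handles all of this (and far more) for you; JMeter, for instance, can run a saved test plan headlessly with `jmeter -n -t plan.jmx -l results.jtl`.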
There are also tools available to monitor and analyze the performance of your servers. These can track different aspects of operating system resource consumption, such as memory usage. However, such tools can be expensive, and many charge by the number of servers, the throughput generated, or time.
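That said, basic server-side signals don't require an expensive product to capture. The sketch below is a minimal example using only Python's standard library, assuming a Unix-like system; it samples the current process's CPU time and peak memory (note that `ru_maxrss` is reported in kilobytes on Linux but bytes on macOS).

```python
import os
import resource

def snapshot():
    """Capture CPU time and peak RSS for the current process."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_seconds": usage.ru_utime + usage.ru_stime,
        "peak_rss": usage.ru_maxrss,  # KB on Linux, bytes on macOS
        "pid": os.getpid(),
    }

if __name__ == "__main__":
    before = snapshot()
    _ = [i * i for i in range(1_000_000)]  # some work to measure
    after = snapshot()
    print(f"pid {after['pid']}: "
          f"cpu {after['cpu_seconds'] - before['cpu_seconds']:.3f}s, "
          f"peak rss {after['peak_rss']} (platform-dependent units)")
```

Commercial monitoring suites add the pieces this sketch lacks: agents across every server, historical dashboards, and correlation with the load you are generating.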
Although performance testing has many benefits, I still recommend evaluating the cost-benefit of conducting it. There is a balance between the cost of extensive performance testing and performance improvements (as mentioned earlier, many tools are expensive), versus the expected business gains from that improved performance. I recommend considering the following:
To find out more on this topic, I recommend listening to the recording of the webinar I gave last week, where I go into more detail about the different types of tests that make up performance testing.