Real Cases of Performance Testing Failures
You’d be surprised just how often companies, big ones included, compromise on software testing in the rush to roll out a new product or feature. Many either skip testing entirely or rush through it without thoroughly checking the product’s performance and capacity under load. The result? Software solutions that could have been great are abandoned due to crashes or critical downtime.
Here are just a few examples of what can happen when QA performance testing is ignored or not fully implemented.
Major payment delays
In the UK, major banks like Barclays, Santander, RBS, and Natwest experienced severe downtime due to a traffic spike. Customers couldn’t access their accounts, pay bills, or transfer money for a few hours. What went wrong? The systems simply couldn’t handle the surge in user activity, which happened on a payday.
Obviously, these banks are still functioning, but the impact of the performance failure was significant. Beyond the thousands of disgruntled customers who shared their complaints on social media, some of the banks were fined by the FCA.
Glitch in digital banking
Another noteworthy case involved Lloyds internet banking. A minor glitch in the system took over 3,000 ATMs offline, leaving countless customers unable to use their cards or withdraw cash for over three hours. The bank later admitted that the glitch was caused by a server failure, which shouldn’t have happened, as there had been no major build or update to the system.
These are just two examples of performance failures in the banking sector, but similar issues can be found in many industries. Take HealthCare.gov, for example – another big player with a huge customer base. Over the years, customers have repeatedly complained about the website’s functionality, citing constant lags and glitches.
Fortunately, the site doesn’t handle prescriptions; if it did, the consequences would go far beyond complaints. All of this goes to show how important it is to test systems, and to do so rigorously. A minor bug can lead to reputational damage, loss of customer trust, fines, and even legal proceedings.
How Does Performance Testing Differ from Performance Engineering?
Performance engineering and performance testing are two terms that are often used interchangeably, but are they the same thing? No, they are not. Not only do these approaches differ in purpose and scope, but they also cover different activities. Let’s take a look at the key differences between them.
Scope
Performance testing focuses on identifying issues in a software solution by running tests in a controlled environment. The goal is to assess specific performance metrics such as response time, throughput, and stability under various load conditions.
Performance engineering, on the other hand, takes a broader and more proactive approach. It’s a continuous process integrated throughout the SDLC, which, in addition to testing, also aims to design, develop, and optimize software to meet performance goals from the ground up. It considers architecture, infrastructure, and design to ensure performance is built into the product, not just tested afterward.
Goals
The primary goal of performance testing is to detect any performance issues that could impact the app’s behavior and fix them before release. When it comes to performance engineering, its goal goes beyond just finding problems with the product’s performance. First and foremost, it’s about ensuring that these problems don’t even happen by designing and developing software that can handle load and scale.
Timing
There’s also a difference in timing. While performance testing is usually conducted closer to the end of development, or just before production, performance engineering starts early and doesn’t end until the product is launched. From planning and design through development, testing, and monitoring, it runs like a red thread through every stage of the SDLC.
Examples
Let’s take an eCommerce website as an example. In this case, performance testing would be focused on running load tests to ensure the app can handle a flash sale with 10,000 or more concurrent users.
Performance engineering, by contrast, would focus on designing the eCommerce website’s architecture, adding caching mechanisms and load balancers, and optimizing database queries, so that the site can handle traffic spikes and run under load without crashes or slowdowns.
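To make one of these engineering-side optimizations concrete, here is a minimal Python sketch of caching, one of the mechanisms mentioned above. Everything in it is illustrative: `query_database` is a hypothetical stand-in for a slow database call, not real eCommerce code, and the latency figure is invented.

```python
import functools
import time

def query_database(product_id: int) -> dict:
    # Hypothetical stand-in for a slow database query.
    time.sleep(0.05)  # simulate database latency
    return {"id": product_id, "name": f"Product {product_id}"}

@functools.lru_cache(maxsize=1024)
def get_product(product_id: int) -> dict:
    # Cached lookup: repeat requests for the same product skip the database.
    return query_database(product_id)

# First call pays the database cost; the second is served from the cache.
start = time.perf_counter()
get_product(42)
cold = time.perf_counter() - start

start = time.perf_counter()
get_product(42)
warm = time.perf_counter() - start
print(f"cold: {cold:.4f}s, warm: {warm:.6f}s")
```

The same idea, applied at the architecture level with a shared cache such as Redis, is what lets a site absorb a traffic spike without hammering its database.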
Performance Testing vs Continuous Performance Testing: How They Compare
Another type of testing that many mix up with performance testing is continuous performance testing. Let’s see how they compare to each other to understand the difference between the two.
Scope
Performance testing focuses on specific scenarios (load, stress, spike tests), whereas the focus of continuous performance testing is much broader. As the name suggests, this methodology extends throughout the entire software development life cycle, continuously monitoring product performance as any changes are made.
Goals
The purpose of performance testing is to evaluate how the app behaves under various conditions, such as heavy loads or stress, at specific points in the development cycle. It seeks to identify bottlenecks, stability issues, or performance degradation right before release. The continuous performance testing approach, on the other hand, ensures that performance is monitored and issues are addressed throughout the development lifecycle. The goal is to prevent problems from creeping in as new features or updates are introduced.
Timing
Unlike conventional performance testing, which is conducted at specific intervals, continuous performance testing is carried out throughout the entire SDLC, from the early stages, when developers first start writing code, to the launch of the final version of the product.
Examples
As an example, let’s take the same eCommerce website. With standard performance testing, you’d simulate a situation in which the website has to handle heavy traffic, such as a Black Friday event when thousands of users shop online. In contrast, continuous performance testing lets you check how the site responds to every single change made to the code, making it possible to predict whether it can handle flash sales and to catch gradual degradation over time.
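In practice, checking performance on every code change usually means a small check wired into the CI pipeline. The Python sketch below shows the general shape of such a check under stated assumptions: `checkout` is an invented stand-in for real application code, and the time budget is an illustrative number that a real project would tune.

```python
import time
import statistics

def checkout(cart_items):
    # Hypothetical operation under test; stands in for real application code.
    return sum(price for _, price in cart_items)

# Time the operation repeatedly and compare the median against a budget.
# A CI job would fail the build when the budget is exceeded.
BUDGET_SECONDS = 0.01  # illustrative threshold, tuned per project
cart = [("sku-%d" % i, 9.99) for i in range(100)]
samples = []
for _ in range(50):
    start = time.perf_counter()
    checkout(cart)
    samples.append(time.perf_counter() - start)

median = statistics.median(samples)
within_budget = median < BUDGET_SECONDS
print(f"median: {median * 1e6:.1f}µs, within budget: {within_budget}")
```

Using the median rather than a single run makes the check less sensitive to noisy CI hardware; a regression introduced by a code change shows up as the median drifting past the budget.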
Here’s a detailed breakdown of the key distinctions between these three methodologies.
| Aspect | Performance testing | Performance engineering | Continuous performance testing |
| --- | --- | --- | --- |
| Scope | Focuses on evaluating the system’s behavior under specific conditions (e.g., load, stress, spike). | Covers the entire software lifecycle, aiming to design, develop, and optimize for performance from the ground up. | Monitors performance throughout the development lifecycle, continuously assessing and ensuring stability. |
| Goal | Identify performance issues such as slow response times, bottlenecks, or crashes just before deployment or release. | Ensure performance is built into the system architecture, focusing on preventing issues before they arise. | Continuously identify and resolve performance degradations or regressions during every phase of development. |
| Timing | Conducted near the end of the development process, usually right before production or after major updates. | Integrated from the very beginning of the SDLC and continues through to production. | Integrated throughout the entire development lifecycle, from early coding to production, as part of CI/CD pipelines. |
| Frequency | Occurs at specific stages (e.g., before product release or after major updates) or on-demand before high-traffic events. | Ongoing, as performance is optimized and built into the design from day one. | Continuous, with performance being assessed after every code change or deployment to ensure no regression. |
| Who conducts | Typically conducted by a dedicated team of QA engineers. | Conducted by performance engineers or architects who are involved throughout the SDLC. | Conducted by development teams, DevOps engineers, or Site Reliability Engineers. |
| Key metrics | Response time, throughput, resource usage, and stability under load. | System scalability, optimization of architecture and infrastructure, resource efficiency, and maintainability. | Real-time performance metrics such as response times, latency, throughput, error rates, and overall system stability post-deployment. |
Benefits of Performance Testing
Performance testing is an essential part of the testing cycle, ensuring the smooth functioning of the software. If done right, it can bring a number of significant benefits. Let’s go over some of the key advantages of performance testing to understand why you should focus on it, and how it can help your business grow.
- Enhanced user experience. Users expect fast and responsive software, and performance testing helps deliver on these expectations. Faster load times, reduced downtime, and a seamless user experience lead to higher satisfaction, which can increase customer loyalty and reduce churn.
- More conversions. When users enjoy the app, they engage with it more often, which leads to increased conversions. Whether it’s making a purchase in an eCommerce app or using a service, smooth performance directly impacts user engagement.
- Better scalability. As your business grows, so do the demands on your software. Performance testing ensures that your app can scale easily by assessing how well it handles increased traffic and data volumes. This allows you to plan for future growth without worrying about performance bottlenecks or system instability.
- Lower maintenance costs. While testing might seem like an expensive overhead, it ultimately saves money in the long run. Performance testing, in particular, helps catch performance-related issues early on. That way, problems can be addressed during the development phase rather than in production, where the cost of fixing bugs and downtime is many times higher.
- Improved competitive edge. These days, having an app isn’t enough to maintain a competitive edge. In order to stay afloat and see your business thrive, it’s vital that the app can not only retain users but also attract new ones. The best way to do this is to offer people a well-performing app that runs smoothly in different environments.
Types of Performance Testing
There are different types of performance testing, such as load, stress, endurance, spike, volume, configuration, and scalability testing. While load testing and stress testing are the most common, each type of performance testing sets out to uncover and eliminate its own specific performance-related issues.
Load testing
Let’s start with the simplest yet most common type of performance testing called load testing. In load testing, your main goal is to analyze the behavior of your website or application under different load conditions. Be it simulating a load of transactions expected within a set time period or a specific number of users trying to access your software solution at the same time, this type of performance testing will help you measure the speed and capacity limits of your system under various expected load conditions.
Let’s assume we’re a mid-sized company running an eCommerce website. With around 500 concurrent users on an average day, our online store is doing pretty well in terms of performance, responding to user actions such as browsing categories, loading product pages, and adding items to the cart within an acceptable response time of 1-3 seconds. And since there are no page errors, our customers are happy with their shopping.
But come a holiday such as Christmas, with a load of 5,000 concurrent users, and all of a sudden our customers face sluggish response times, page crashes, and failed checkouts. Why? Because we either forgot to conduct load testing or skipped it altogether. We had no idea whether our system could handle this many concurrent users and simultaneous transactions. We didn’t prepare our website for holiday traffic, and we paid for it in lost revenue and customer loyalty.
The purpose of load testing is to prevent situations such as in the example above. It enables you to identify performance bottlenecks of your system and the hardware running it to help you prepare the necessary capacity for expected loads well in advance. It gives you a clear understanding of:
- the average response time of each transaction in your system;
- the peak number of users your system can effectively handle within set performance parameters;
- bottlenecks such as CPU, memory, and network utilization in your system;
- whether the existing hardware and server configuration deliver on performance requirements.
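The core mechanics behind the metrics above can be sketched in a few lines of Python. This is a toy illustration, not a real load-testing tool: `handle_request` is a hypothetical stand-in for an HTTP call to the system under test, and the user count and simulated latency are made-up numbers. Dedicated tools like JMeter or Locust do the same thing at scale.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> float:
    # Hypothetical stand-in for a real HTTP request to the system under test.
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server-side processing time
    return time.perf_counter() - start

CONCURRENT_USERS = 20  # illustrative load level

# Fire all simulated users at once and collect per-request response times.
test_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    response_times = list(pool.map(handle_request, range(CONCURRENT_USERS)))
elapsed = time.perf_counter() - test_start

avg_response = sum(response_times) / len(response_times)
throughput = len(response_times) / elapsed  # requests per second
print(f"avg response: {avg_response:.3f}s, throughput: {throughput:.1f} req/s")
```

In a real load test you would ramp `CONCURRENT_USERS` up in stages and watch for the point where average response time starts climbing or errors appear; that inflection point is your system’s effective capacity under the current hardware and configuration.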