INDUSTRY

Accounting Consultancy

PROJECT

Improve the efficiency and processes of an existing in-house performance test team.

SYNOPSIS

Testing Performance / Fimatix - Performance Testing Case Study

THE BACKGROUND

The customer is a large global accounting firm, which at the time was progressing towards centralising its applications into one of three global data centres. While the organisation had over 9,000 applications, a core of around 200 required performance testing on a regular basis.

These applications were for the most part used internally, with a handful of performance testing requirements for the organisation's customers. The customer's projects were fairly autonomous, with resources fully dedicated to individual projects. Some functions, however, were centralised; performance testing was one of these.

Performance testing resources were allocated on an ‘as needed’ basis, often supplying services to several projects at once up to a maximum of 40 hours per week. The performance team itself initially consisted of five leads, who defined, planned and managed performance testing, and a bench of 12 scripters, who developed performance testing assets, executed tests and produced reports.

THE REQUIREMENT

The in-house performance team was seen to be failing and had a poor reputation within the organisation. The symptoms included:

•  Performance test projects were often over-running in terms of time allocated and budget

•  Performance issues were occurring in production on a regular basis, even though performance testing had completed successfully

•  Results and deliverables were vague, boilerplate, often inaccurate and lacking evidence

•  The management team was unclear about what was going on and suspected that staff incompetence was to blame

•  The performance test tool was no longer supporting the organisation's needs. A new tool had been selected, but help was needed to understand the requirements and implement it.

THE FINDINGS

Testing Performance was brought in to investigate. The engagement started with a single Technical Consultant but quickly grew to two Technical Consultants, tasked with an initial Discovery into what was going wrong with the in-house performance testing.

One of our Testing Performance consultants was tasked with working alongside the performance test leads on three in-flight performance testing projects. This gave a hands-on view of what was actually happening, as well as the opportunity to make immediate improvements to performance testing outcomes.
The second Testing Performance consultant was asked to shadow an incumbent test lead across thirteen projects.

Key findings from the 'Discovery' phase were:

•  The current performance testing tool was causing significant overheads in delivery, as performance test scripts had to be fully re-recorded every time a new performance test was required. This meant that scripts were never able to mature and too much time was spent on performance test build (a sketch of the kind of reusable, data-driven script that avoids this follows this list).

•  The current performance testing tool produced only high-level results, preventing the performance tester from drilling down into metrics or correlating user-facing performance with back-end server performance.

•  Too much reliance was placed on scripters to generate reports, as they lacked the experience to properly identify and call out performance issues.

•  The performance testing actually delivered often varied from the test plan, with those changes not captured anywhere.

•  The performance test leads were managing too many projects, which did not allow them sufficient time to exercise their expertise, especially around performance analysis and reporting.

•  Evidence from performance tests was of low quality and insufficient to help diagnose why performance testing had not picked up on performance problems that subsequently occurred in production.

•  Performance test planning was not correctly identifying and assessing risk. Too many functions were being automated and included in performance testing, while some applications were being performance tested even though their performance risk was low.

•  A key member of staff was uncooperative and was released. All other staff were found to be competent, enthusiastic and able to work as a team.
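
Neither the outgoing nor the replacement tool is named in the case study, so the following is only an illustrative sketch of the kind of reusable, data-driven script these findings point towards. It uses Locust, an open-source load testing tool, and the account data, endpoints and file names are invented for the example; the point is that a scripted, parameterised test can be versioned and matured between releases instead of being re-recorded each time.

```python
# Illustrative sketch only: the tools used in this engagement are not named
# in the case study. This Locust script externalises its test data, so the
# same script can be reused and refined between releases rather than being
# re-recorded for every performance test.
import csv
import random

from locust import HttpUser, task, between


def load_accounts(path="accounts.csv"):
    # Hypothetical data file: one account id per row.
    with open(path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row]


ACCOUNTS = load_accounts()


class ReportingUser(HttpUser):
    # Think time between requests, tuned per workload model rather than hard-coded.
    wait_time = between(1, 5)

    @task(3)
    def view_dashboard(self):
        self.client.get("/dashboard")

    @task(1)
    def view_account_report(self):
        account_id = random.choice(ACCOUNTS)
        # Grouping requests under a templated name keeps per-transaction
        # statistics comparable from run to run.
        self.client.get(f"/accounts/{account_id}/report",
                        name="/accounts/[id]/report")
```

A script like this could be run against a (hypothetical) test environment with, for example, `locust -f reporting_user.py --host https://perf-test.example.com`, and reused for the next release with only the data file or workload weightings adjusted.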

THE SOLUTION

A Testing Performance consultant picked up and managed projects previously run by the released performance test lead, delivering these successfully. We implemented the following changes:

•  Process documentation was updated.

•  An estimation spreadsheet was produced to standardise and quickly estimate the cost and duration of a performance test.

•  Questionnaires were introduced to aid discovery of performance testing requirements.

•  An ‘Exception to performance test’ template was built, allowing the reasons why an application is not being performance tested to be documented.

•  Templates for the performance test plan were updated to include more meaningful information, especially around scope, objectives and approach.

•  The performance test report template was extensively updated to allow for the inclusion of evidence to underpin findings.

•  The design and implementation of the performance test tool was led by Testing Performance over a six-month period as tool components were installed across three global data centres. The existing team was cross-trained to use the new tool, with its use aligned to the newly implemented performance testing processes.

•  The performance team was mandated to capture back-end server metrics during performance test execution, rather than leaving this to the project team (a sketch of this kind of metric capture follows this list). Time was allocated in the performance testing schedule to allow sufficient analysis to take place.

•  The number of projects that the performance test leads would support was reduced from up to 15 to a maximum of eight, allowing the skill and expertise of the performance lead to become more influential.

•  Existing team members were encouraged and enabled to allow their technical capabilities to shine.
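
The case study does not say how back-end metrics were actually captured, so the following is a minimal sketch of the idea only, assuming a small Python agent built on the psutil library and run on each server for the duration of a test; the output file name and sampling interval are illustrative. Writing timestamped samples to a file means server-side utilisation can later be lined up against user-facing response times from the test results, which is the correlation the earlier findings identified as missing.

```python
# Minimal, illustrative sketch only: the case study does not specify how
# back-end metrics were captured. This assumes a small Python agent using
# psutil, run on a server for the duration of a test window, writing
# timestamped samples to CSV for later correlation with response times.
import csv
import time
from datetime import datetime, timezone

import psutil

SAMPLE_INTERVAL_SECONDS = 5          # assumption: 5-second granularity
OUTPUT_FILE = "server_metrics.csv"   # hypothetical output location


def sample_once():
    """Take one snapshot of basic host metrics."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # CPU utilisation since the previous sample.
        "cpu_percent": psutil.cpu_percent(interval=None),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_used_percent": psutil.disk_usage("/").percent,
    }


def run(duration_seconds=3600):
    """Sample metrics for the duration of a performance test run."""
    fieldnames = ["timestamp", "cpu_percent", "memory_percent", "disk_used_percent"]
    with open(OUTPUT_FILE, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        end = time.time() + duration_seconds
        while time.time() < end:
            writer.writerow(sample_once())
            f.flush()
            time.sleep(SAMPLE_INTERVAL_SECONDS)


if __name__ == "__main__":
    run()
```

In practice this role is often played by whatever monitoring platform the organisation already has (the case study does not name one); the essential point is that the performance team, not the project team, owns the capture and keeps it aligned to the test window.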

THE OUTCOME

Over time, the performance team became more accountable, often delivering to both time and budget.

More in-depth performance analysis allowed for better collaboration with application, system and hardware specialists and a better overall understanding of application performance prior to release into production.

Senior management was now satisfied, as poor performance decreased and application stability increased. Where problems did occur, data was available to support investigation and to feed back on why a problem was not found during testing, leading to an understanding of how to further improve performance testing.

Deeper, more meaningful relationships with key projects were developed and grown as trust was re-established.

As success became measurable, confidence in the performance testing function grew. Resourcing for the performance team increased, with Testing Performance supporting the customer for a further four years across over 150 concurrent projects. We contributed seven performance testing resources to a team that eventually grew to thirty.

THE EXAMPLE DEMONSTRATES

•  Testing Performance’s ability to understand and adapt to difficult and complex situations, quietly yet efficiently working with the customer to overcome problems in a highly politically charged environment.

•  Our ability to understand requirements around tooling and deliver architectural solutions to test tool deployment and use.

•  Our ability to work within existing teams, using training and mentoring to achieve better results for the team as a whole.

•  Our ability to manage multiple performance testing projects, assigning and deploying resources and managing deliverables, whilst getting involved technically, particularly around performance diagnosis and tuning.

•  Our ability to provide skilled resources and to scale up or down as requirements change.

•  Our ability to assess and update procedures, policies and approaches, backed up with supporting documentation.

•  Our expertise around performance testing, being able to work and communicate with projects and to deliver risk-based testing.