Web performance management: Minimising risk for new releases – when partnerships really pay off

Date: 20th May 2014
Author: Deri Jones

Had an interesting project meeting today for a major UK site that was refreshed and launched this week – the feedback provided some valuable lessons on how to manage web performance through such a big, brand-risking, mobile-centric change. The site is one of the UK’s top 100, and behind the relaunch was a long and carefully planned project to make substantial changes, including:

  • bringing in-house, and hosting in the cloud, part of the back-end system that was previously run by a third party
  • refreshing the front-end with a modern Responsive Web Design approach – the site has a high percentage of mobile traffic, and unifying delivery across all platforms was core to the update.

The project became very hectic towards the end.

2 weeks to launch

  • 5 days planned to load test the final build against a complex matrix of user journeys
  • load testing showed performance still falling short of the traffic throughput targets, and a significant percentage of content errors were found

1 week to launch

  • extra load tests called in at short notice: Monday, then again Tuesday
  • results still not adequate; a rethink of how much cloud capacity was needed
  • dev teams and devops teams working extended shifts
  • extra load-testing called in – to run 24/7 from Wednesday 9am through to Thursday 6pm!
  • final tweaks were made, and the last load test, on Friday, proved successful.
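Those final load tests had to catch two distinct failure modes: throughput falling short of target, and content errors – pages that return an HTTP 200 yet serve the wrong thing, which only a check of the page body will reveal. Here is a minimal sketch of that kind of check; the function names, the `</footer>` marker, and the stand-in fetcher are all illustrative assumptions, not the tooling the project actually used:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def classify(status: int, body: str, marker: str) -> str:
    """A response can fail two ways: an outright HTTP error, or a
    'content error' -- status 200 but the expected page marker is
    missing, i.e. the page loaded yet served the wrong thing."""
    if status != 200:
        return "http_error"
    if marker not in body:
        return "content_error"
    return "ok"

def run_load(fetch, urls, workers=10, marker="</footer>"):
    """Fetch all urls concurrently; return error counts plus a rough
    requests-per-second figure to compare against throughput targets."""
    counts = {"ok": 0, "http_error": 0, "content_error": 0}
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for status, body in pool.map(fetch, urls):
            counts[classify(status, body, marker)] += 1
    counts["rps"] = len(urls) / (time.perf_counter() - start)
    return counts

if __name__ == "__main__":
    # Stand-in fetcher so the sketch runs offline; a real test would use
    # urllib.request or a proper load tool against the staging site.
    def fake_fetch(url):
        if url.endswith("/broken"):
            return 200, "<html>template failed"   # a content error
        return 200, "<html>...</footer>"
    urls = ["https://example.test/page%d" % i for i in range(50)]
    urls += ["https://example.test/broken"] * 5
    print(run_load(fake_fetch, urls))
```

The point of the marker check is that a CDN or an error page can happily return 200s all day while the site is effectively down – which is exactly the class of error the project found late.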

Lessons learned:

1) It’s a truism, but testing earlier is always good. Performance testing is one of the non-functional testing areas that tends to be left to the last minute, whereas functional testing happens much earlier. Next time, a small-scale mock-up of the full system would have allowed performance testing to begin four weeks earlier, and helped identify some unexpected ‘gotchas’.

2) Change fewer things at once. The extra complexity of changing both front-end and back-end caused some last-minute worries – in particular, a new back-end data-delivery tool had some unexpected corner-case errors that couldn’t be spotted until the full spread of realistic user journeys was exercised.

3) Resource planning. If you are load testing with the same team that is building your site, be aware that just when you need extra load-testing help is exactly when that team is already flat out – a double whammy. It’s much better to have a partnership arrangement where you can call on load-testing expertise at short notice. Ideally this will be with specialists who already know your site and technology and have already written realistic load journeys – so no time is lost getting them up to speed.

4) Know the evidence that your load testing is based on – realism. Pressure mounts when a big project looks like it may miss promised delivery dates. And if load-testing results are blocking a release, there will be many voices, from board level down, asking ‘are you sure the load testing is accurate – maybe the results are not true – what are they based on?’ You need confidence that the load-test results are true and actionable – any assumptions made in the test plan could come back to bite you. If you used non-realistic journeys because realistic ones with random choices were too hard to build, then that assumption and unrealistic data could be your Achilles heel.
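A journey with random choices is, in effect, a weighted random walk over the site’s pages: from each page the virtual visitor branches the way real traffic does, rather than replaying one fixed click-path. The sketch below illustrates the idea; the page names and branching weights are invented for illustration – in practice they would come from the site’s own analytics:

```python
import random

# Hypothetical page graph: from each page, the next step is chosen at
# random with the given weights. These pages and weights are assumptions
# for illustration; real values would be derived from analytics data.
NEXT_STEPS = {
    "home":     [("search", 0.5), ("category", 0.4), ("exit", 0.1)],
    "search":   [("product", 0.7), ("search", 0.2), ("exit", 0.1)],
    "category": [("product", 0.6), ("category", 0.2), ("exit", 0.2)],
    "product":  [("basket", 0.3), ("search", 0.4), ("exit", 0.3)],
    "basket":   [("checkout", 0.6), ("product", 0.2), ("exit", 0.2)],
    "checkout": [("exit", 1.0)],
}

def generate_journey(rng: random.Random, max_steps: int = 20) -> list:
    """Walk the page graph from 'home' until the visitor exits."""
    journey = ["home"]
    while journey[-1] != "exit" and len(journey) < max_steps:
        pages, weights = zip(*NEXT_STEPS[journey[-1]])
        journey.append(rng.choices(pages, weights=weights)[0])
    return journey

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed so the traffic mix is repeatable
    journeys = [generate_journey(rng) for _ in range(1000)]
    reached_checkout = sum("checkout" in j for j in journeys)
    print(f"{reached_checkout} of 1000 simulated visitors reached checkout")
```

Because each virtual visitor takes a different path, the full spread of pages – including the rarely visited corner cases that caught this project out – gets exercised under load, which a single scripted journey would never do.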