Date: 1st December 2010
It’s been a busy time here, with a variety of interesting website load testing, capacity planning and website performance projects – and some helpful tips have come out of it.
A good percentage of what we do is focused on online Retailers and maximising their ability to handle the peaks of the Christmas online sales rush, so the run-up to November brought some interesting load testing projects – including load testing some of the biggest names not just in Retail but in Travel too. Helping folks plan their 2011 activity has, meanwhile, slowed during the rush to be ready for Christmas.
Whilst very few eCommerce teams would admit that they just don’t do, and don’t need, load testing, there is quite a variety in how website load tests are planned and rolled out: from simplistic Apache Bench scripts through to something a little more modern. Load test wise, almost anything is arguably better than none on your web site – but confidence in the value and meaning of what is being done is paramount. Will it give actionable information? And will it provide metrics of user Journey capacity that make sense to the Business Teams, so they can make informed decisions about where to spend money to add capacity to which user functionality?
Both ends of the spectrum were apparent last month. On one hand, those who actively worked with us to document the Test Plan in advance and then follow it through day by day; on the other, one keynote client who decided, just hours before the first overnight testing started, that they wanted to test in quite a different way and threw away the test plan! Their argument was: “let’s test at overload to start with, with all Journeys – let’s create some smoke! – and we can then work back from that”.
The lesson to take away is the complexity of trying to plan a website load test that will reproduce an exact mix of real-world traffic, when data on real traffic peaks from the past may not be readily available from web analytics.
So two approaches are feasible:
- test individual Journeys one by one, and finally test the mix of all Journeys together
- test with all Journeys together from the beginning.
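The two approaches can be sketched in a few lines of Python. This is a minimal illustration only – the journey functions and the traffic weights are hypothetical stand-ins for real multi-page user Journeys and a real analytics-derived mix, not any particular load tool’s API:

```python
import random

# Hypothetical Journey functions -- stand-ins for real multi-page
# user Journeys (browse, search, checkout). Each returns True on success.
def browse():
    return True

def search():
    return True

def checkout():
    return True

JOURNEYS = {"browse": browse, "search": search, "checkout": checkout}

def run_isolated(journeys, iterations=100):
    """Approach 1: drive each Journey on its own, recording results per
    Journey so any bottleneck can be attributed to one sub-system."""
    results = {}
    for name, journey in journeys.items():
        results[name] = sum(1 for _ in range(iterations) if journey())
    return results

def run_mixed(journeys, weights, iterations=300):
    """Approach 2: fire all Journeys together from the start, choosing
    each virtual user's Journey according to the assumed traffic mix."""
    names = list(journeys)
    counts = dict.fromkeys(names, 0)
    for _ in range(iterations):
        name = random.choices(names, weights=[weights[n] for n in names])[0]
        journeys[name]()
        counts[name] += 1
    return counts
```

In practice the mix weights are exactly the data that is often missing from web analytics – which is the planning difficulty described above.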
The latter has merit – it is, after all, trying to follow the mantra that underpins all of the website monitoring and load testing work we do: ‘do what the real users do’.
Though it is not always easy to achieve: the tech team may have an idea of what level of load the CPU/RAM/Network etc. should carry to ‘be like it was last peak hour’, but less detail on the exact traffic pattern that caused that load level.
But the first approach also has advantages: by testing the capacity of each multi-page User Journey in isolation, you quickly find the bottlenecks in the sub-systems behind the website – and quickly see interesting errors being thrown off, which help the engineers dig deep into root causes. Good capacity planning practice.
Whichever way you plan it, most important of all is to define meaningful, dynamic user Journeys: not just a list of static URLs followed one after the other, but Journeys that look into each page as it is served and pull a product or a link from it dynamically. That way the load test does not, for example, put the same product into the basket for every virtual user, but follows a path of random searches and choices so that each virtual user buys a different product – much more realistic.
And it is vital if you’re selling items that will go out of stock during a load test – e.g. hotel rooms against dates – otherwise the load storm of 10,000s or 100,000s of virtual users will throw errors simply because it is hitting static URLs for hotel deals that sold out under earlier test traffic. Dynamic Journeys that always check the live page being served provide much more realism than a browser-mob load test of pre-defined pages.
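A dynamic Journey step of this kind can be sketched as follows. Here `fetch_page` is a hypothetical stand-in for the HTTP client a real load tool would provide (it returns canned HTML so the sketch is self-contained), and the sold-out marker is an assumed CSS class, not any particular site’s markup:

```python
import random
import re

def fetch_page(url):
    # Hypothetical stand-in for a real HTTP fetch: returns canned HTML
    # resembling a hotel listing page, with one deal already sold out.
    return """
    <a class="product" href="/hotel/1">Sea View</a>
    <a class="product sold-out" href="/hotel/2">Harbour</a>
    <a class="product" href="/hotel/3">Garden</a>
    """

def pick_live_product(listing_url):
    """Parse the page actually served and choose a random product link,
    skipping anything marked sold out -- so each virtual user follows a
    different, still-valid path instead of a pre-defined static URL."""
    html = fetch_page(listing_url)
    links = re.findall(r'<a class="([^"]*)" href="([^"]+)">', html)
    in_stock = [href for cls, href in links if "sold-out" not in cls]
    return random.choice(in_stock) if in_stock else None
```

Because the choice is made against the live page, a deal that sells out mid-test simply stops being selected, rather than generating a wave of spurious errors.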
It’ll also make for more effective capacity planning if you’re an ITIL organisation.