Date: 15th December 2011
The team here have been very busy this last quarter. As always, a lot of retailers want to load test their websites in advance of the seasonal shopping rush, and this year the benefit of that preparation seems to be recognised more widely than ever.
I’ve been very impressed with how the team have been able to juggle such a busy period whilst at the same time continuing development of our testing systems, allowing us to model even better realism for our multi-channel retail projects.
There have been some fascinating graphs of that realism too, as we’ve evolved our approaches.
Firstly, our continual development of our dynamic User Journey based approach has not only benefited 24/7 monitoring of ‘Do what the Customer Does’ dynamic routes through a site: it has brought the same gain in realism to our load testing too.
Just last week we helped nail down a problem that was losing sales: a search in the usual top-right search box on this retailer’s site would produce an error response for certain keywords – words that, whilst not in the top ten search terms on the site, were directly related to the products being sold. It’s only because our dynamic, randomised approach was able to try so many combinations that these slightly ‘below the waterline’ holes were spotted. It also helped that our technology is flexible enough to run the same Journeys against both their website and their in-store kiosks; that gives better realism of site conversion across all channels.
But the brainy engineers here have recently been looking at ways to generate website load tests with an even more realistic spread of virtual users.
The work was triggered by the fact that more and more of our clients are using the Cloud to host parts of their online store, and we’re seeing a big jump in the peak customer capacity of those clients who get their software properly Cloud-ready. That means we are adding to our own load testing infrastructure: we run both dedicated and cloud-based test servers.
So controlling more and more servers during a load test, aiming for higher traffic peaks, and handling the variable capacity of cloud servers has thrown up some intriguing challenges.
One of the main strengths of the cloud model is also the cause of one of the major differences involved with testing a cloud-based system: your servers are not just “your servers” anymore. While there are many benefits to this, in testing and performance terms it means they cannot be relied upon to give you exactly the same amount of power and performance in each instance, and will sometimes not consistently deliver the amount of “oomph” required. However, this is exactly the realism you need, because it is exactly the user experience problem that can occur on the live site.
Once you then throw in some Think Time realism per page, we are starting to see load tests that need substantial ramp-up times simply to ensure a sensible mix of users across all stages of activity. You have to work harder to avoid ‘bunching’: where at any one moment too many virtual users are active on one task or page, and not enough are active in other areas.
Imagine starting a 100,000 virtual user load test made up of a number of User Journeys that, although following dynamic randomised routes, all start at the same place: the home page. Without ramp-time cleverness, your first couple of seconds would be 100,000 home page requests and nothing else! Then for the next period there’d be lots of activity all over the site but absolutely none on the home page, until the Journeys start to finish and new virtual journeys begin the process again at the home page!
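To make that bunching effect concrete, here is a tiny toy simulation – purely illustrative, with the user count scaled down and a fixed think time that is our simplification, not a real test configuration. Because every user starts the same journey at the same instant, the whole population sits on one step at a time while the rest of the site goes quiet:

```python
THINK_TIME = 10   # seconds spent on each page (simplified to a constant)
STEPS = 3         # pages in the journey
USERS = 1000      # scaled down from the 100,000 in the example above

def step_at(t):
    """Which step a user who started at t=0 is on at time t (None = finished)."""
    step = t // THINK_TIME
    return step if step < STEPS else None

# With no ramp-up, every user occupies the same step at the same moment:
# the home page takes 1,000 simultaneous hits, then goes completely quiet.
for t in (0, 10, 20, 30):
    counts = [0] * STEPS
    current = step_at(t)
    if current is not None:
        counts[current] = USERS
    print(f"t={t:2d}s  users per step: {counts}")
```

Every printed row has all 1,000 users piled onto a single step – the ‘wave’ that a ramp-up (or the alternatives below) is there to smooth out.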
So ramp-up is vital to get a realistic load on your site, spreading users across the Journeys in use.
But… ramp-up time is wasted time as far as measuring your site’s ability to handle seasonal traffic peaks is concerned, because for at least two-thirds of the ramp-up your online store is not breaking into a sweat. Everything is smooth, no errors are thrown, no pages slow down, and all the lovely graphs of server utilisation show nothing much happening. Wasted time for all the engineers on duty.
As most organisations want their load testing done out of hours, to avoid impacting real users, there is only a limited overnight window to get in as much measurement, and as much evidence of things needing to be fixed, as possible.
Some nights, we reckoned, we were spending 20% of the time thumb-twiddling in ramp-ups.
So our clever team have been experimenting with ways to shorten ramp-up times whilst maintaining, and even extending, the realism of load testing – the user spread.
The problem of lost time during a traditional extended ramp-up is shown clearly in this graph: the ramp-up of virtual users (pale blue) is clear, with the outcome that it’s nearly 300 seconds before results start to come in (the green line) – between 200 and 400 on the X-axis before all the bases are filled and Journeys start to finish.
With lots of think time at each step of the Journeys making up this test, it added up to about 4 minutes before the first starters got to the end. This is a low-volume example: on some major client projects, complex think-time needs have sometimes meant 30-minute ramp-ups!
But credit where credit is due: this ramp-up did a good job of setting up the users on their bases, so that once it was done, a constant rate of about 80 users per minute (it’s a low-volume test) were finishing Journeys.
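As a quick back-of-envelope sanity check – our arithmetic, not a figure from the test itself – Little’s law ties that steady completion rate to the number of users in flight, and the numbers here hang together reasonably well:

```python
# Little's law: users in flight = completion rate x time per journey.
# Both inputs below are taken from this low-volume example; treat the
# result as a rough consistency check only.
rate_per_min = 80       # User Journeys finishing per minute
journey_minutes = 4     # roughly 4 minutes per journey, given the think times

users_in_flight = rate_per_min * journey_minutes
print(users_in_flight)  # 320
```

That puts roughly 320 users in flight at steady state – the same ballpark as the 400 concurrent virtual users these tests fire off.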
Dynamic Start approach – this alternative has no ramp-up time at all, but achieves a realistic spread of virtual users by dynamically moving and adjusting think time: optimising the spread of users within a very short space of time by modulating think time for the first run of each of the 400 concurrent virtual users that are quickly fired off:
It’s clear that this time the green line, showing completed User Journeys per minute, gets up to the target level of 80 much more quickly – within 30 rather than 300 seconds on the X-axis. Note: in load testing, whilst it’s easy to control what virtual load you put in, the most important metric is how many User Journeys can be completed per minute: that’s the capacity measurement your merchandisers and sales team want to know can handle their forecast sales figures.
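The exact modulation algorithm isn’t spelled out in this post, but one way to sketch the flavour of the idea – the step think times and the scaling rule below are entirely our hypothetical illustration – is to stretch each user’s first-run think times by a personal factor, so the wave of users fans out across the journey instead of marching in lock-step:

```python
STEP_THINK = [5, 10, 8]          # hypothetical nominal think times (s) per step
JOURNEY_TIME = sum(STEP_THINK)   # 23 s for one pass at nominal pace

def first_run_think_times(user_index, n_users):
    """Think times for a user's *first* journey only.

    Each user's first run is stretched by a personal factor, so the wave
    spreads across one journey's duration; later runs revert to the
    nominal times, by which point the users are naturally staggered.
    """
    offset = JOURNEY_TIME * user_index / n_users
    scale = 1 + offset / JOURNEY_TIME
    return [t * scale for t in STEP_THINK]

print(first_run_think_times(0, 400))    # nominal pace: [5.0, 10.0, 8.0]
print(first_run_think_times(200, 400))  # stretched 1.5x: [7.5, 15.0, 12.0]
```

After the first run each user reverts to nominal think times, but by then the population is already spread across the journey steps – which is why completions can start within seconds rather than after a long ramp.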
So the team were pleased by early experiments on that approach.
Dynamic Ramp approach
Results were already of interest, but it’s always good to have choice, and it’s natural for clever software guys to make things even better :<) so the team also experimented with a third approach to the realism challenge – called, for convenience, Dynamic Ramping:
Looking at the graph, this has similar properties in terms of rapidly balancing the virtual users, but as is just about visible on the graph, it does use a ramp-up time, albeit a very short one, and this time with a different approach to adjusting think time to fill the bases: based not on moving think time between steps, but on modulating it to best fit the first wave of 400 concurrent users across the journey steps.
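Again, the real algorithm isn’t given here, but a sketch of the flavour – with a short ramp plus first-run think-time modulation, and all numbers and the scaling rule being our own assumptions – might pair a small launch stagger with a compensating stretch or squeeze of the first journey’s think times:

```python
RAMP = 30                        # deliberately short ramp-up window (s)
STEP_THINK = [5, 10, 8]          # hypothetical nominal think times (s) per step
JOURNEY_TIME = sum(STEP_THINK)   # 23 s per journey at nominal pace

def launch_plan(user_index, n_users):
    """Return (start_delay, first-run think times) for one virtual user.

    Early launchers get slightly stretched first-run think times and late
    launchers compressed ones, chosen so that first completions land evenly
    spaced regardless of when within the short ramp each user started.
    """
    start = RAMP * user_index / n_users
    target_offset = JOURNEY_TIME * user_index / n_users
    scale = max(0.1, 1 + (target_offset - start) / JOURNEY_TIME)
    return start, [t * scale for t in STEP_THINK]

start, think = launch_plan(200, 400)   # a user halfway through the ramp
print(start, think)
```

With these numbers, a user launched 15 s into the ramp still finishes its first journey at 34.5 s – exactly halfway along the evenly spaced spread of first completions – so the short ramp costs almost no measurement time.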
As eCommerce is a fast-changing technology world, I’m always interested in how real business benefits can be gained from clever software guys – so apologies if this blog got a bit deep technically, into Test Performance Consultants’ territory.
But if your company already uses our services, or you have a colleague who previously worked at a retailer that does, be sure to ask and find out what deliverables were gained. Although they may bring to mind angles other than ramp-up optimisation like the above, it’s very likely to be realism-based features that delivered the benefits.
Certainly, across all the new clients this last quarter who’ve upgraded from simpler approaches to trying out some of ours, the common thread has been a desire for more realism, so that they are armed with better facts to optimise their website conversions.