Date: 22nd July 2013
by Deri Jones, SciVisum; first published at Tnooz.com, 8 August 2012
Looking at websites across multiple industry sectors often enables us to pull out useful best-practice lessons, especially with regard to website performance.
Here are some that came out of a recent assessment for a travel client.
Some of the findings were surprising. A major one highlighted that it’s often possible to make quick improvements in your conversion ratios with small technical tweaks.
Analysing performance over time, rather than on an ad-hoc basis while fire-fighting individual incidents, gives you the ability to see meaningful data.
Collate the small blips and burps on your core travel User Journeys over time and you may be surprised to see them form coherent patterns that technical teams can work with, instead of remaining a collection of seemingly random, unrelated errors.
For example, one project was asking:
“Why would people abandon a travel purchase on-line?”
Our remit was to see if the occasional visitor comments about problems in this website’s holiday search and payment User Journeys were due to real, underlying technical problems in the website or just problems with a visitor’s own PC.
And, if so, what percentage of users were being affected and what could be done to resolve the root cause?
Cloud suppliers for images – not always a benefit
The first key lesson to take away from the project was that images matter more than you realise – even more than the marketing department (who do love pretty pictures!) realise.
Additional findings highlighted that letting a third party cloud supplier handle your images may actually undermine your on-line brand, even while saving time or money.
If it impacts delivery on your website, confidence in your website and brand will surely suffer and so will your bottom-line.
24/7 website monitoring using meaningful User Journeys showed that some of the images used three or four pages deep into a search-and-buy journey suffered from a regular problem: visitors not infrequently saw a page with missing images.
It happened three out of seven nights each week, in a regular 23:20 time slot, and was solved by presenting the evidence to the third party hosting the images.
There was no need to get into deep technical detail: once the supplier knew that their customer knew, they fixed it quickly. It was obviously something they had been aware of and keeping quiet about!
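A monitoring journey can catch this class of problem with a simple broken-image check on every page it visits. Here is a minimal sketch of the idea (the URLs and markup in the example are hypothetical, not the client's actual pages), using only the Python standard library to pull image references out of fetched HTML:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImageCollector(HTMLParser):
    """Collect the src of every <img> tag on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative paths against the page URL
                self.images.append(urljoin(self.base_url, src))

def find_image_urls(html, base_url):
    """Return the absolute URL of every image referenced in the page."""
    collector = ImageCollector(base_url)
    collector.feed(html)
    return collector.images
```

On each scheduled journey run, every collected URL would then be requested (for example with urllib.request) and any non-success response logged with a timestamp – precisely the kind of record that exposed the regular 23:20 failure window here.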
Other recent independent research has found that about 60% of holiday buyers say photos of the destination and accommodation help them to choose a holiday, so clearly multimedia is important.
Prices – would you know if your online prices change before the users’ eyes?!
The second problem was a shock for the managers: dynamic user journey monitoring showed that for about 5% of its packages, the price would change before the visitor’s eyes.
The company’s simple website-availability metrics had, of course, not revealed this issue: the first price shown for a package would not match the final price configured in the shopping basket.
The price differences were small, and they were fixed quickly once the client’s tech team were told which specific products suffered the problem.
Only user journeys that also check the details of every page, such as pricing, will find this sort of thing out. So when talking to your tech team or monitoring supplier, be sure you get to see the spec in use, and that you are happy it is realistic enough.
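A journey script can assert this kind of detail directly. The sketch below is an illustration only – the price format and page text are assumptions, not the client's actual markup – showing how a script might scrape the price at two steps of a journey and flag any mismatch:

```python
import re

def extract_price(page_text):
    """Pull the first sterling price off a page (assumed £1,234.56 format)."""
    m = re.search(r"£\s*([\d,]+\.\d{2})", page_text)
    if not m:
        raise ValueError("no price found on page")
    return float(m.group(1).replace(",", ""))

def check_price_consistency(search_page, basket_page):
    """Compare the price seen on the search page with the basket price.

    Returns (search_price, basket_price, prices_match).
    """
    p1 = extract_price(search_page)
    p2 = extract_price(basket_page)
    return p1, p2, abs(p1 - p2) < 0.005
```

Run on every monitored journey, a check like this turns a price that "changes before the visitor's eyes" from an occasional anecdote into a logged, per-product defect list the tech team can act on.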
Security – and one extreme cost of inadequate load testing tools
Our last finding was less a lost-sales problem than a security problem that raised a potential fraud issue.
Given the time lapse between a holiday being purchased and being taken, it was unlikely that any real fraud had taken place to date, but it did explain a small number of problematic orders where stolen or inactive credit cards had been used and had apparently passed the checking stages.
The software bug had, it turned out, been introduced intentionally some months back, during an in-house website load-testing project.
The load-testing tool the internal team had used was not up to performing the complex AJAX-powered steps the site was using, so the testers had coded a short-cut to let the testing journey bypass the tricky AJAX step.
Subsequently the team forgot to roll back the short-cut. Ouch.
The moral here: if your load-testing tool is so limited that you have to recode some of your pages just to make them testable, and those changes severely undermine your website’s robustness when they are not rolled back, then you are not really testing your site.
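One cheap safeguard against a forgotten short-cut is to make the application itself refuse to start (or the deployment fail) while any test-only shortcut flag is still switched on. This is a sketch under an assumed naming convention (flags prefixed loadtest_), not the client's actual setup:

```python
def assert_no_test_shortcuts(config):
    """Fail fast if load-test shortcut flags are active in production.

    config: a flat dict of settings. Any truthy key starting with
    'loadtest_' is treated as a test-only shortcut (assumed convention).
    """
    shortcuts = [k for k, v in config.items()
                 if k.startswith("loadtest_") and v]
    if config.get("env") == "production" and shortcuts:
        raise RuntimeError(
            "test shortcuts still enabled in production: %s" % shortcuts)
```

Called at start-up or from a deployment check, a guard like this would have caught the forgotten AJAX-bypass before it reached live card payments.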
Some pages are not the slowest on average, but are the slowest under heavier load
Lastly, there was a lot of actionable data about the slowest pages in the whole journey – or rather about one page that was, on average, no worse than the others, but suffered a big slow-down during busy periods.
The fact that the page’s performance was OK on average almost caused it to be overlooked; it was only spotted by drilling down into the time periods when critical Journeys had been slowest.
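The way to surface such a page is to compare each page's average against its tail timings, rather than ranking by average alone. A minimal sketch of that analysis (the page names, timings, and the 2x ratio threshold are all illustrative assumptions):

```python
from statistics import mean

def p95(values):
    """95th-percentile timing, by simple rank in the sorted sample."""
    s = sorted(values)
    return s[min(len(s) - 1, int(0.95 * len(s)))]

def flag_load_sensitive(timings, ratio=2.0):
    """timings: {page_name: [response times in seconds, ...]}.

    Flag pages that look fine on average but whose 95th percentile
    is more than `ratio` times their mean, i.e. pages that only
    collapse under busier conditions.
    """
    flagged = {}
    for page, ts in timings.items():
        avg, tail = mean(ts), p95(ts)
        if tail > ratio * avg:
            flagged[page] = (round(avg, 2), round(tail, 2))
    return flagged
```

A page that is slow in one run out of twenty barely moves its mean, but dominates its p95 – which is why the average-based report nearly missed it here.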
A bad page slowdown hidden among fast pages can really lose sales, as a recent Akamai survey reported:
“A third of travellers would be less likely to visit a site after experiencing technical problems like slowness or errors on the page. Business travellers are slightly more likely to have a negative reaction.”