
Home pages for high street retailers may look similar at first glance, but their actual performance varies widely, largely because of the design of their navigation menus.

Clothes retailing has always been a competitive trade, even more so online, so it is important to know exactly what makes or breaks the difference against competitors.

To find out, Realeyes ran an eye-tracking test on a set of online clothes retailers. 49 people were asked to buy a pair of trousers while their interactions with each page were recorded with the latest eye-tracking technology.

Menu design turned out to make a big difference to the overall performance of each page. The menu on Debenhams' page, shown below, sets a good example:

The graphics of this menu element grabbed attention quickly (an average time to first view of only 2.3 seconds), and it took people only 6.6 seconds on average to make sense of the menu's content and click on the right item. The view-to-click chart below shows the results in detail.

[Chart: view-to-click times for the Debenhams menu]
Marks & Spencer's menu (below), on the other hand, turned out to be a poor example of menu design.

It took people over 25% more time to locate the menu with their eyes, but, more importantly, it took them a whopping 13.7 seconds (on the internet, that is a LOT of time) to actually make sense of the menu and click on any of its items. The view-to-click chart below describes it in detail.

[Chart: view-to-click times for the M&S menu]

M&S very likely created this menu with the best of intentions, to give people more information. But testing with real users showed that this information load is clearly too much.
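To make the two metrics concrete, here is a minimal sketch of how time-to-first-view and view-to-click could be derived from a fixation log. The log format, field names and menu bounding box are illustrative assumptions, not Realeyes' actual pipeline.

```python
# Illustrative sketch only - the log format and AOI box are assumptions,
# not Realeyes' actual data model.
from dataclasses import dataclass

@dataclass
class Session:
    fixations: list       # (time_s, x, y) tuples, time measured from page load
    click_time_s: float   # when the participant clicked a menu item

MENU_BOX = (0, 120, 960, 180)  # hypothetical menu area: left, top, right, bottom

def in_menu(x, y):
    left, top, right, bottom = MENU_BOX
    return left <= x <= right and top <= y <= bottom

def menu_metrics(session):
    """Return (time_to_first_view, view_to_click) in seconds,
    or None if the participant never fixated on the menu."""
    first_view = next((t for t, x, y in session.fixations if in_menu(x, y)), None)
    if first_view is None:
        return None
    return first_view, session.click_time_s - first_view
```

Averaging the first value across participants gives a figure like the 2.3 seconds above; averaging the second gives the view-to-click time.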

The following table summarises the overall performance of the tested pages. It shows that forcing people to process too much information upfront significantly worsens key performance indicators such as success rate, time to completion and user satisfaction.

[Table: overall performance of the four tested pages]
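As a rough illustration, the table's first two indicators reduce to a simple aggregation over per-participant task records; the records below are placeholders, not the study's data.

```python
# Placeholder task records, not the study's data: (site, task_completed, seconds).
from statistics import mean

results = [
    ("site_a", True, 48.0), ("site_a", True, 61.5), ("site_a", False, 120.0),
    ("site_b", True, 95.0), ("site_b", False, 120.0),
]

def summarise(site):
    rows = [(ok, t) for s, ok, t in results if s == site]
    times = [t for ok, t in rows if ok]
    return {
        "success_rate": sum(ok for ok, _ in rows) / len(rows),
        "mean_time_to_completion_s": mean(times) if times else None,
    }

for site in ("site_a", "site_b"):
    print(site, summarise(site))
```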

The bottom line is that findings like these are possible only through testing with real people. Incorporating eye-tracking into user tests is easy, and it adds rich insights that can sharpen your competitive edge.

I'm happy to explain further here in the comments thread or via mihkel@realeyes.it.

Mihkel Jäätma is the co-founder of eye-tracking specialists Realeyes.it.


Published 2 October 2008 by Mihkel Jäätma


Comments (16)


Robert Stevens

The essence of a good home page is that people SHOULD NOT be forced to rely on a menu – particularly a global navigation bar at the top of the page – to interact effectively with the site. Menus such as the Debenhams one are by their very nature low on information (being limited to single words or short phrases) and are often difficult to interpret.

A great home page provides information-rich contextual calls to action in the page itself – these give target customers examples of what they will find and the confidence that following the call to action will propel them towards their goal. Consequently you would expect that a good home page, built on good research, would never actually require the visitor to look at or act on a navigation menu – whilst a very poor page (taken to extremes, one with only a global nav and nothing else to engage attention) would force the user to rely solely on the information-poor menu, as there is nothing else, and so it would be quick to locate…

Enlightened site designers have been realising this for some time now – effective design is about content, not menus: think blogs, think Facebook, think MSN, think YouTube. In fact the most enlightened designers have done away with the global nav altogether: how would your ‘test’ have fared on the new BBC homepage? Or if you were comparing news or media sites?

There is more to the effectiveness of complex interactive interfaces than meets the eye (pun intended) – you need to understand these factors, make sure your ‘measurements’ are interpreted within a wider context, and check that you are comparing like with like. The conclusions you draw here are tenuous in the extreme. In addition, the data charts you present make no sense, and the testing method (asking people to look for something they are not interested in or motivated to find) is highly suspect too.

Robert Stevens.
CET.
Think Eyetracking.

about 8 years ago

Mihkel Jäätma

Mihkel Jäätma, Founding Partner at Realeyes

Dear Rob

It is surprising that you question the basic foundations of user testing. In many research settings people are recruited externally and asked to complete specific, well-defined tasks – the most obvious example probably being traditional usability labs themselves. What Realeyes adds to these proven methods is higher relevance: people are not put in rooms with white walls, but interact with web pages in a more natural environment.

The two graphs above visualise two basic things: how fast people noticed the key navigation item, and how fast they were able to comprehend its content.
I do not argue that everyone should be forced to interact with menus; using them was the free choice of the people tested.

That is why this post focused on menus – the differentiating factor could well have been something else.

Not everything under the sun needs to be eye-tracked. Highly customisable ‘entertainment’ websites like the new BBC page might be one example.
Eye-tracking is more applicable to conversion-oriented media: emails, landing pages, home pages, category pages, product pages…

That said, we have done some interesting work across news and media sites as well.

The business of those sites is often to maximise advertising revenues, so the objective of eye-tracking studies there is to maximise ad exposure.
Using different placements and ad formats, Realeyes has seen ad exposure range between 1% and 10% of total attention on a news site.
Quite a potential for improvement there!
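For illustration, an exposure figure like that could be computed as the share of total fixation time landing inside ad placements; the log format and AOI boxes here are assumptions, as in the menu sketch above.

```python
# Hypothetical helper: fraction of total fixation time spent inside ad areas.
def attention_share(fixations, ad_boxes):
    """fixations: (duration_s, x, y) tuples; ad_boxes: (left, top, right, bottom)."""
    def inside(x, y, box):
        left, top, right, bottom = box
        return left <= x <= right and top <= y <= bottom
    total = sum(d for d, _, _ in fixations)
    on_ads = sum(d for d, x, y in fixations if any(inside(x, y, b) for b in ad_boxes))
    return on_ads / total if total else 0.0
```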

Looking forward to further discussions

Mihkel Jäätma
Realeyes

about 8 years ago


Robert Stevens

Dear Mihkel,

I don’t question user testing per se; I co-founded Bunnyfoot, one of the world’s largest and most successful usability consultancies. I have reservations about the approach you have taken and the findings you present.

The approach to user testing we pioneered at Bunnyfoot (PEEP) was externally validated by Lancaster University and published at HCI 2007. It also features in the Econsultancy 2008 Innovation Report.

My co-founder at Bunnyfoot, Dr Jon Dodd, will be presenting a paper on the next advancement of PEEP, called PEEP Aloud, at the British Computer Society on 8 October 2008. It will be recorded and uploaded to YouTube. May I recommend you watch it.

Regards,

Robert.

about 8 years ago


David Hamill

Hi,

Interesting article, thanks for writing it.

I'll let you guys argue out the intricacies of eyetracking amongst yourselves.

I see you used 49 participants in your test. How many people had you tested before it became obvious that the M&S navigation could be improved?

thanks
David Hamill

about 8 years ago

Mihkel Jäätma

Mihkel Jäätma, Founding Partner at Realeyes

Hello David

This is a very practical and relevant question that we hope to answer properly soon.

Currently we collect data in samples of 50 people before running the analysis, just to be sure we have enough of it.
A very general explanation: http://docs.realeyes.it/why50.ppt

It’s worth noting that 50 people give you a sample of 2,000-3,000 eye fixations, which is already quite a decent sample to work with.

The point at which conclusions become reliable of course differs across designs: more complex designs demand larger samples, and simpler ones clearly fewer.

We have a fairly good idea of how to index the complexity of a design and build a function that finds the minimum reliable sample size upfront. We also have a sufficient eye-tracking database to run it against and identify the exact relationship.
Someone just has to spend the man-hours on it.
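As a back-of-the-envelope illustration of that relationship (this is not Realeyes' actual model): if a design's view-to-click times have standard deviation sigma, the 95% confidence half-width on the mean shrinks as 1.96 × sigma / √n, so a target precision implies a minimum sample size.

```python
import math

def min_sample_size(sigma_s, target_half_width_s):
    """Smallest n whose 95% CI half-width on the mean is within the target."""
    return math.ceil((1.96 * sigma_s / target_half_width_s) ** 2)

# e.g. a spread of 5 s and a desired precision of +/- 1.5 s:
print(min_sample_size(5.0, 1.5))  # -> 43 participants
```

A more complex design means noisier times (a larger sigma), which pushes the required n up quadratically.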

I’ll definitely post the results here on Econsultancy as soon as we have them!

about 8 years ago

Chris Lake

Chris Lake, CEO at Empirical Proof

Great post Mihkel. It follows on from Graham Charlton's findings in April on '10 things M&S could do better online', which also identified the dropdown as a 'cumbersome' area of concern: http://www.e-consultancy.com/news-blog/365290/10-things-m-s-can-do-better-online.html

I also hear that dropdowns are bad for the iPhone ; )

about 8 years ago


David Hamill

Thanks, I've read the Why 50 presentation. It makes sense to me that you need a fairly large sample if you want to rely on summative data like this.

I assume that the menu became obvious via direct observation of the first few participants. Was this the case?

about 8 years ago


David Hamill

Sorry I meant to say the 'menu issue' where it says 'menu'.

about 8 years ago

Mihkel Jäätma

Mihkel Jäätma, Founding Partner at Realeyes

Well, Graham Charlton apparently came to that conclusion without any user testing! :)

We did not focus on menus from the outset; our objective was to measure the overall performance of the different designs. For the metrics presented in the summary table above, we still need a reliable sample. Does that make sense?

about 8 years ago

Graham Charlton

Graham Charlton, Editor in Chief at ClickZ Global

Hi Mihkel,

I did - I'm not a fan of drop-down menus anyway, but that one is particularly bad.

As for Robert's point about menus - I think a decent navigation bar is essential, especially as many users may just want to browse a site by department / category.

Also, as many people will not necessarily arrive at a site through the homepage, a good menu bar makes it easier to continue browsing.

about 8 years ago


David Hamill

Yes, it makes perfect sense, thanks. But it seems that an observer might have spotted the problem early on in the study (and perhaps this is what happened), and the other 40-odd people were there to make the quantitative data relevant.

about 8 years ago


Ceri Heathcote

I definitely agree with this; I think simplicity is the key. There seem to be so many websites that look so complicated that it is difficult to find what you want.

about 8 years ago


Jon Dodd

Hi - thanks for posting this. I have a couple of queries that may just be my misreading of your data and reported approach - perhaps you could clarify?

Regarding your sample size (reported as 49 in this study): from my reading of the graphs (I counted the data points - the peaks and troughs from left to right - although I may be wrong), it appears there were 36 data points (people?) for Debenhams and only 23 for M&S. You also tested Next and BHS - how many for those two stores?

Could you please also explain how participants were mapped to the individual online stores? Since 23 + 36 is more than 49, presumably each participant saw more than one store (but not systematically?), and there would therefore potentially be bias due to order effects (and potentially other effects due to it not being a strict repeated-measures design). What controls did you put in place for this, and what do you think its effect might be on your measurements?

Also, I agree with you that 50 is a good basic number to test with for complex visual stimuli (typically home pages) that have multiple competing distractors - in other words, quite a lot going on visually. This is based on the dynamics of fixations and saccades, decision making and time-to-action estimates (as well as modelling and statistical theory), and I like your basic demonstration of this in your Why 50 article. In that case, why did you choose in this study to report results and measurements from fewer participants? In fact, if you could just give the number of people per store, it would be possible to calculate whether any of the numbers you report in your tables are statistically significantly different.

thanks

Jon

about 8 years ago

Mihkel Jäätma

Mihkel Jäätma, Founding Partner at Realeyes

Hello Jon

Yes, you have counted the data points on both charts pretty accurately.
If you want to see the exact distribution of all clicks in this test, look at page 7 of the document that opens when you click on the conclusion chart.

The 49 people we tested were free to use the websites however they wanted, and not all of them went on to use the menus. Hence the different number of data points in each chart.

In terms of test set-up, the same 49 people were asked to use all four websites. The order of the websites was randomised to eliminate learning biases.

You’re correct that we could calculate confidence intervals for any conclusions we draw about differences between variables. But it’s worth keeping in mind that we are talking about commercial research here, not academic working papers.

The test presented here is a product that needs to be turned around in only a few business days. The web teams commissioning our research are extremely busy people who can give the results only a limited time span before moving on.

In that context, adding even simple standard deviations is likely to be overkill, complicating the consumption of the report more than adding real value to it.
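That said, for readers who do want to run the significance check Jon describes, a 95% interval for the difference in mean view-to-click times between two sites takes only a few lines. The sketch below uses a z approximation in place of the exact Welch t quantile, and the inputs would be the per-participant times, not the study's published averages.

```python
import math
from statistics import mean, stdev

def diff_ci_95(a, b):
    """Approximate 95% CI for mean(a) - mean(b), two independent samples."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    d = mean(a) - mean(b)
    return d - 1.96 * se, d + 1.96 * se  # z in place of Welch's t; fine for n ~ 20-40
```

If the resulting interval excludes zero, the difference between the two menus is significant at roughly the 5% level.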

Best regards,
Mihkel

about 8 years ago


Alexey Kopylov

Mihkel -- what does the horizontal axis on these diagrams mean?

over 7 years ago

Mihkel Jäätma

Mihkel Jäätma, Founding Partner at Realeyes

Hello Alexey - the horizontal axis represents the individual participants who used the menu in the test.

We had 49 people taking part in total, but only a subset of them actually used the menus analysed here. If I remember correctly, it was close to 40 individuals in the case of Debenhams and something above 20 for M&S.

over 7 years ago
