WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: External Link Icons


From: Hausler, Jesse
Date: Oct 1, 2007 5:50PM

Many usability studies, especially those using the Think Aloud protocol, are observational by method. Testers observe users completing a list of tasks and, if necessary, prompt them to speak their thoughts out loud. If a user becomes stuck, testers watch the user attempt to become unstuck. Actions, words, tone, and mood are then analyzed; user patterns and points of difficulty are noted. Design changes are made and the process repeats.

In a perfect usability world, users will always know where they are within a site or system, where they need to go to accomplish their task, and how to get there. The steps to complete the task will make sense cognitively, and they will receive proper feedback after the task is complete. If they do encounter an error, which they shouldn't, they will be able to easily back out of it, or undo the parts that did not go properly.

When this dream scenario does not play out properly, it always becomes the user's "problem", since they are the ones unable to complete their given task. The "burden" is on the designer to ensure this does not happen. A developer who assumes that inline text stating a link will go external is enough places this burden on the user, and ignores one of the main web usability heuristics...
"Users Don't Read"

The question posed by Mr. Groves' study was, "Which of these three methods will users best recognize as marking an external link?" It appears that the "speedbump" page yielded the best results given his sample. The questions "Why would users want to know that they are leaving a site?" and "Why do site owners want users to stay on their site?" were not asked, and are not being asked.

As far as what makes a sample representative, the rules in usability testing are different from standard empirical studies. Jakob Nielsen wrote in 1993 that the magic number for usability testing was 5 users. This was mainly due to the need to save costs for companies who were wary of usability studies. He also noted that returns diminish at about 10 users.
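The arithmetic behind the "5 users" guideline comes from Nielsen and Landauer's problem-discovery model: the share of usability problems found by n testers is 1 - (1 - L)^n, where L is the chance that a single user uncovers a given problem (Nielsen typically cites L around 0.31). A minimal sketch, with that assumed value of L:

```python
def proportion_found(n_users, discovery_rate=0.31):
    """Expected share of usability problems uncovered by n_users testers,
    per the Nielsen/Landauer model 1 - (1 - L)^n. The default L of 0.31
    is the average Nielsen cites, not a universal constant."""
    return 1 - (1 - discovery_rate) ** n_users

# 5 users find roughly 85% of problems; by 10 users the curve has
# flattened, which is the "diminishing returns" point mentioned above.
for n in (5, 10, 15):
    print(f"{n:2d} users: {proportion_found(n):.3f}")
```

Of course, the model assumes every problem has the same discovery rate, which is exactly the assumption Faulkner's work pushes back on.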

Laurie Faulkner wrote in 2004 that the number should be 15. With ten more users, you are less likely to miss usability defects in a given round of testing.
PDF: http://www.geocities.com/faulknerusability/Faulkner_BRMIC_Vol35.pdf

A good analysis of both thoughts can be found here:

The best method might be a combination of sorts. Given the resources, I would test using an external link icon along with a speedbump page. Not that it was mentioned, but I would recommend against a timer on the speedbump page. Upon reaching the speedbump, if the user does not want to leave the site, the timer may not give them enough time to undo their previous move.
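Either treatment, icon or speedbump, first needs a rule for deciding which links actually leave the site. A minimal sketch of one such rule (the function name, the example host, and the "compare hostnames" heuristic are my own assumptions, not anything from Mr. Groves' study):

```python
from urllib.parse import urlparse

def is_external(href, site_host="www.example.edu"):
    """Return True if href points off-site. Assumed rule: any absolute
    URL whose host differs from site_host is external; relative URLs
    (empty host) are treated as internal."""
    host = urlparse(href).netloc.lower()
    return bool(host) and host != site_host.lower()

# Links flagged True would get the external-link icon and/or be routed
# through the speedbump interstitial; False links are left untouched.
print(is_external("http://www.webaim.org/articles/"))
print(is_external("/services/atrc/index.html"))
```

A real implementation would also need to handle subdomains and protocol-relative URLs, but the classification step itself stays this simple.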

Jesse Hausler
Assistive Technology Resource Center
Colorado State University