RE: testing web apps for accessibility

From: Mark Magennis
Date: Mar 21, 2006 3:00AM


Sam,

Looks like you have an interesting, challenging but very worthwhile job
there. Rejoice! We are so fortunate to have jobs like these.

I have a fair few ideas about tools and approaches to accessibility
evaluation. You may already be aware of many of these issues, but just
in case I'll dump most of my thoughts on you, so apologies if you
already know a lot of this.

So you're looking for the most efficient and effective tools and
processes. User testing is worth considering, as Kynn has suggested.
However, user testing can waste huge resources if done too soon. If you
try to user test an application with lots of technical barriers, you can
end up stuck in problem after problem that you already know about and
not learn much. User testing can be very valuable (more on that later)
but, in general, I think it is best to
get the application to a state where you think it will be mostly
functionally accessible. The best way to do that is auditing, perhaps
using tools like Bobby, perhaps not.

Beware of putting too much faith in things like Bobby though. It is not
possible to carry out a decent accessibility audit unless you are an
accessibility expert. No tools can replace experience and knowledge.
Tools like Bobby can speed up the process and batch test an entire site
to locate all instances of a particular code problem, but that's it.
People think that these tools are automated accessibility testers, but
in fact there is no such thing as an automated accessibility tester.
Even Bobby, which is now called WebXact, never was such a tool.
Applications like this can assist an auditor in carrying out an audit
more quickly or comprehensively, but they do very little except point
out places where problems may occur and run batch searches for missing
elements. In most cases it is up to the auditor to see whether there is
indeed a problem and, if so, what the solution might be.

Consider this: of the 17 priority 1 WCAG checkpoints, only one can be
checked automatically. Okay, maybe one and a half if you count that
automated tools can find missing alt attributes, even though they cannot
detect poor or meaningless alt text.
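
To make that concrete, here is a minimal sketch in Python, purely for
illustration (it is not how Bobby/WebXact or any other checker is
actually implemented), of the kind of test automation can genuinely do:
flag img elements that have no alt attribute at all. Notice that it
happily passes alt="photo.jpg"; deciding whether the text is meaningful
still takes a human.

    # Illustrative only: flags <img> tags with no alt attribute at all,
    # which is roughly the limit of what an automated check can do.
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.missing = []  # (line, column) of each <img> lacking alt

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.missing.append(self.getpos())

    sample = '<p><img src="logo.gif"><img src="photo.jpg" alt="photo.jpg"></p>'
    checker = MissingAltChecker()
    checker.feed(sample)
    print(checker.missing)  # [(1, 3)]: only the first img is flagged, while
                            # the meaningless alt on the second sails through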

But, assuming you are enough of an expert, you need to use whatever
process and tools you find most effective for the job. This varies a lot
from person to person. For example, some people may take a quick look at
the site and then dive straight into the source code looking for
specific things. Others might almost never go anywhere near the code.
Some go straight for a semi-automated auditing tool like Bobby whereas
others use such tools rarely, if at all.

A lot of auditors now use either the AIS accessibility toolbar for
Internet Explorer, which you can download free from
www.nils.org.au/ais/web/resources/toolbar/index.html, or the Web
Developer extension for Firefox, which you can download free from
www.chrispederick.com/work/firefox/webdeveloper/. I believe they're
both quite similar, although I haven't got around to trying the Firefox
extension yet because I find the AIS IE toolbar does everything I need.
Someone else might have something to say about the differences. The IE
toolbar provides tools to speed up the auditing process by allowing you
to quickly view things like the table cell order, heading structure and
alt texts. It also allows you to toggle support for JavaScript, CSS,
ActiveX, etc. It gives quick access to all sorts of data about the page.
And it provides links to other tools such as code validators, colour
contrast analysers and semi-automated checkers like AccMonitor and
WebXact (which used to be called Bobby). I would recommend you spend
some time exploring these tools, if you haven't already, to work out
when and how they best help you with your audit. I think you would be
far better off assessing what you currently have using these tools than
mirroring flat HTML versions of each screen and running Bobby against
each individually. "Ugh", as you say. You should be able to get all the
data you need to take to the product owners and developers: data that
identifies the issues, allows fixes to be prioritized and helps steer
future development.
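
As a throwaway illustration of the sort of structural view I mean (a
sketch only, not how the toolbar itself works), a few lines of Python
can pull out a page's heading outline so that skipped levels stand out
at a glance. The sample markup is invented:

    # Lists the h1-h6 outline of a page in document order, indented by
    # level, so that a skipped heading level is easy to spot.
    from html.parser import HTMLParser

    class HeadingOutline(HTMLParser):
        HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

        def __init__(self):
            super().__init__()
            self.level = None   # level of the heading we are inside, if any
            self.outline = []   # [level, text] pairs in document order

        def handle_starttag(self, tag, attrs):
            if tag in self.HEADINGS:
                self.level = int(tag[1])
                self.outline.append([self.level, ""])

        def handle_data(self, data):
            if self.level is not None:
                self.outline[-1][1] += data

        def handle_endtag(self, tag):
            if tag in self.HEADINGS:
                self.level = None

    sample = "<h1>Products</h1><h3>Widgets</h3><h2>Ordering</h2>"
    parser = HeadingOutline()
    parser.feed(sample)
    for level, text in parser.outline:
        print("  " * (level - 1) + "h" + str(level) + ": " + text.strip())
    # The jump from h1 straight to h3 is exactly the kind of thing a
    # quick outline view makes obvious.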

User testing is complementary to auditing. To be clear, I'm talking
about task-based user testing, in which a representative group of users
are observed carrying out representative tasks in a realistic situation
of use. What you get from this is very different from what you get from
an audit. The technical scope of a user test is nowhere near that of an
audit. You simply won't come across many of the potential problems
during a user test unless you employ hundreds of users at a cost of tens
of thousands of whatever currency you use. But a user test can reveal a
lot of the important usability issues that real users will face but
which even expert auditors may not have predicted.

Another thing to consider with user testing is that it can be compelling
evidence for
owners and developers. Reading a technical report pointing out
accessibility issues is fine, but people often don't really "get it".
Sit those same people down at a user test and have them observe a real
person using their app and it can be very enlightening for them (or
video record the test and show them clips later). Even get them to talk
to the users about their experiences. Many developers and owners will
never actually have met real users with disabilities before, so a lot of
their concepts will not be based in reality. When they observe and talk
to real users, often the penny drops and they understand for the first
time what accessibility really means. That is one of the best ways of
generating interest, acceptance, understanding and therefore the buy-in
that is necessary for your work to be taken seriously. If you already
have buy-in from management and developers then perhaps this isn't so
important, but if not, consider using user testing as a demonstration
and awareness raising tool.

There's also a kind of half-way approach in which an accessibility
auditor tries to carry out real tasks using assistive technology. This
is kind of weird and I think there's little mileage to be got from it,
though others may disagree. The problem is that an auditor who does not
have a disability and does not normally use that assistive technology is
not a representative user. It may take them a long time to learn to use
the technology, and they will never use it the way a disabled person who
relies on it all day, every day would. Also, their knowledge of web
sites and apps will give them a completely different approach. Their
experience will therefore not be at all representative of a real user's.

Hope this helps,
Mark

Dr. Mark Magennis
Director of the Centre for Inclusive Technology (CFIT)
National Council for the Blind of Ireland
Whitworth Road, Dublin 9, Republic of Ireland
www.cfit.ie

<EMAIL REMOVED> tel: +353 (0)71 914 7464