E-mail List Archives
Thread: testing web apps for accessibility
Number of posts in this thread: 7 (In chronological order)
From: Sam Foster
Date: Mon, Mar 20 2006 9:40PM
Subject: testing web apps for accessibility
hi all,
I have quite a bit of experience developing and testing accessible
content-driven web sites, but less in the application area.
We have a suite of web apps ranging from really inaccessible to almost
passable that we want to test for accessibility (Section 508 compliance
specifically) over the course of the year. The purpose is to establish a
baseline to measure future improvements against, and obviously to
identify areas that need work and any successes that shouldn't be thrown
out with the bath water. These are complex apps with hundreds, possibly
thousands of screens. We are a small team with lots of other duties.
Using the standalone version of Bobby we immediately hit problems on the
first screens of the first app. It wouldn't authenticate properly,
failing to fill in the login form with the configured name/password. And
once we thought we'd got it past that hurdle, it stopped in its tracks:
pretty much every link in the main console uses the javascript: protocol
and launches a popup, and it didn't follow any of them.
Now, I recognize those are some black marks right there, but our task
remains nonetheless. Our workaround currently is to mirror flat html
versions of each screen and run Bobby against each individually. Ugh.
Does anyone have any suggestions on how to proceed, other tools or
processes? I need some data I can take to the product owners and
developers that identifies issues, allows for prioritizing fixes and
helps steer future development.
There are other points that the html-analysing tools can't catch that
I'd like to - like text size and contrast, like the size of "hit
targets" that need to be clicked or dragged, appropriate color usage.
Are there tools or processes out there that can help at all in capturing
these kinds of issues? Do any of the accessibility consultants have
sample reports available that might provide guidance? Again, for a
single page I'd know how to go about this, but I'm looking for a bigger
hammer.
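For example, for a single foreground/background pair I can already
script the brightness and color-difference check suggested in the W3C's
evaluation-and-repair techniques draft; a rough sketch (using the
commonly cited thresholds of 125 and 500, with colors as [r, g, b]
arrays) would be something like:

// Rough sketch of the W3C-suggested color visibility test:
// pass if brightness difference >= 125 AND color difference >= 500.
function brightness(c) {
    return (c[0] * 299 + c[1] * 587 + c[2] * 114) / 1000;
}
function colorDifference(c1, c2) {
    return Math.abs(c1[0] - c2[0]) + Math.abs(c1[1] - c2[1])
         + Math.abs(c1[2] - c2[2]);
}
function goodVisibility(fg, bg) {
    return Math.abs(brightness(fg) - brightness(bg)) >= 125
        && colorDifference(fg, bg) >= 500;
}
// Mid-gray text on white passes on brightness (a difference of 136)
// but its color difference is only 408, so the pair fails overall.
alert(goodVisibility([119, 119, 119], [255, 255, 255])); // false

What I can't see is how to run anything like this across thousands of
screens, against the colors actually in effect on each element.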
We don't have the budget to put this out to an outside firm this time
around, though that's probably the way to go. I want to start the ball
rolling and demonstrate both the value of testing for and developing for
accessibility. I also want to understand this field better personally,
so I'm reluctant to farm it out entirely.
thanks for any advice,
Sam
From: Kynn Bartlett
Date: Mon, Mar 20 2006 10:00PM
Subject: Re: testing web apps for accessibility
Unless Bobby testing is mandated somehow, why not just do user testing using
actual live users with a variety of disabilities and assistive technologies?
--Kynn
On 3/20/06, Sam Foster < = EMAIL ADDRESS REMOVED = > wrote:
>
> Using the standalone version of Bobby we immediately hit problems on the
> first screens of the first app. It wouldn't authenticate properly,
> failing to fill in the login form with the configured name/password.
From: Mark Magennis
Date: Tue, Mar 21 2006 3:00AM
Subject: RE: testing web apps for accessibility
Sam,
Looks like you have an interesting, challenging but very worthwhile job
there. Rejoice! We are so fortunate to have jobs like these.
I have a fair few ideas about tools and approaches to accessibility
evaluation. You may already be aware of many of these issues, but just
in case I'll dump most of my thoughts on you, so apologies if you
already know a lot of this.
So you're looking for the most efficient and effective tools and
processes. User testing is worth considering, as Kynn has suggested.
However, user testing can waste huge resources if done too soon. If you
try to user test an application with lots of technical barriers, you can
waste your time getting stuck in problem after problem that you already
know about and not end up learning much. User testing can be very
valuable (more of which later) but, in general, I think it is best to
get the application to a state where you think it will be mostly
functionally accessible. The best way to do that is auditing, perhaps
using tools like Bobby, perhaps not.
Beware of putting too much faith in things like Bobby though. It is not
possible to carry out a decent accessibility audit unless you are an
accessibility expert. No tools can replace experience and knowledge.
Tools like Bobby can speed up the process and batch test an entire site
to locate all instances of a particular code problem, but that's it.
People think that these tools are automated accessibility testers, but
in fact there is no such thing as an automated accessibility tester.
Even Bobby, which is now called WebXact, never was such a tool.
Applications like this can assist an auditor in carrying out an audit
more quickly or comprehensively, but they do very little except point
out places where problems may occur and run batch searches for missing
elements. In most cases it is up to the auditor to see whether there is
indeed a problem and, if so, what the solution might be.
Consider this - of the 17 priority 1 WCAG checkpoints, only one can be
identified automatically. Okay, maybe one and a half, if you take it
that an automated tool can find missing alt attributes even though it
cannot detect poor or meaningless ones.
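To make that concrete, here is roughly the whole of the automatable
part of that check: a throwaway JavaScript snippet (run from a
bookmarklet, say, on the page being audited) that does essentially what
the batch tools do, minus the batching:

// Flag every image on the current page that has no alt attribute at
// all. Note what it cannot do: judge whether an alt text that IS
// present is meaningful.
var missing = [];
for (var i = 0; i < document.images.length; i++) {
    if (!document.images[i].hasAttribute('alt')) {
        missing.push(document.images[i].src);
    }
}
alert(missing.length + ' image(s) with no alt attribute:\n'
      + missing.join('\n'));

Everything beyond that - is the alt text accurate, is it useful, should
the image be there at all - is down to the auditor.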
But, assuming you are enough of an expert, you need to use whatever
process and tools you find most effective for the job. This varies a lot
from person to person. For example, some people may take a quick look at
the site and then dive straight into the source code looking for
specific things. Others might almost never go anywhere near the code.
Some go straight for a semi-automated auditing tool like Bobby whereas
others use such tools rarely, if at all. A lot of auditors now use
either the AIS accessibility toolbar for Internet Explorer which you can
download free from www.nils.org.au/ais/web/resources/toolbar/index.html
or the Web developer extension for Firefox which you can download free
from www.chrispederick.com/work/firefox/webdeveloper/. I believe they're
both quite similar, although I haven't got around to trying the Firefox
extension yet because I find the AIS IE toolbar does everything I need.
Someone else might have something to say about the differences. The IE
toolbar provides tools to speed up the auditing process by allowing you
to quickly view things like the table cell order, heading structure and
alt texts. It also allows you to toggle support for JavaScript, CSS,
ActiveX, etc. It gives quick access to all sorts of data about the page.
And it provides links to other tools such as code validators, colour
contrast analysers and semi-automated checkers like AccMonitor and
WebXact (which used to be called Bobby). I would recommend spending
some time exploring these tools, if you haven't already, to work out
when and how they best help you with your audit. I think you would be
far better off assessing what you currently have using these tools than
mirroring flat html versions of each screen and running Bobby against
each individually. "Ugh", as you say. You should be able to get all the
data you need to take to the product owners and developers that
identifies issues, allows for prioritizing fixes and helps steer future
development.
User testing is complementary to auditing. To be clear, I'm talking
about task-based user testing, in which a representative group of users
are observed carrying out representative tasks in a realistic situation
of use. What you get from this is very different from what you get from
an audit. The technical scope of a user test is nowhere near that of an
audit. You simply won't come across many of the potential problems
during a user test unless you employ hundreds of users at a cost of tens
of thousands of whatever currency you use. But a user test can reveal a
lot of the important usability issues that real users will face but
which even expert auditors may not have predicted. Another thing to
consider with user testing is that it can be compelling evidence for
owners and developers. Reading a technical report pointing out
accessibility issues is fine, but people often don't really "get it".
Sit those same people down at a user test and have them observe a real
person using their app and it can be very enlightening for them (or
video record the test and show them clips later). Even get them to talk
to the users about their experiences. Many developers and owners will
never actually have met real users with disabilities before, so a lot of
their concepts will not be based in reality. When they observe and talk
to real users, often the penny drops and they understand for the first
time what accessibility really means. That is one of the best ways of
generating interest, acceptance, understanding and therefore the buy-in
that is necessary for your work to be taken seriously. If you already
have buy-in from management and developers then perhaps this isn't so
important, but if not, consider using user testing as a demonstration
and awareness raising tool.
There's also a kind of half-way approach in which an accessibility
auditor tries to carry out real tasks using assistive technology. This
is kind of weird and I think there's little mileage to be got from it,
though others may disagree. The problem is that the auditor is usually
not a representative user: they do not have a disability and do not
normally use that assistive technology. It may take them a long time to
learn to use the technology, and they will never use it the same way a
disabled person who relies on it all day every day would. Also, their
state of knowledge of web sites and apps will give them a completely
different approach. Their experiences will therefore not be at all
representative of a real user.
Hope this helps,
Mark
Dr. Mark Magennis
Director of the Centre for Inclusive Technology (CFIT)
National Council for the Blind of Ireland
Whitworth Road, Dublin 9, Republic of Ireland
www.cfit.ie
= EMAIL ADDRESS REMOVED = tel: +353 (0)71 914 7464
From: Tim Harshbarger
Date: Tue, Mar 21 2006 6:40AM
Subject: RE: testing web apps for accessibility
Sam,
The majority of accessibility work I do is with web-based applications.
Unfortunately, there isn't any "easy" way to provide an in-depth
evaluation of a web app.
For whatever my advice is worth, I also think Kynn has a good
suggestion. If you have 2 to 4 users with disabilities go through some
kind of usability script for a specific application, it should let you
find the majority of major accessibility problems. You will need to
decide what to do if a user becomes stuck in some part of the app that
is inaccessible while performing a task. For example, if the task is to
sign up online for a course at the local college and the user is stuck
on the second page, when do you intervene and what do you do then? Do
you let the user flounder for 10-12 minutes before intervening? When
you intervene, do you take the user to the next step in the task or go
to a new task?
Of course, having users with disabilities performing tasks with the app
won't find all the problems in the app, but it should find the major
problems with performing those tasks.
Another thing to remember is that applications tend to have a consistent
user interface. To increase efficiency, you only need to test sections
of the interface that are unique or that are representative of a
repeated user interface. For example, if the user enters invalid data
or selects an invalid option, the app will have some method of informing
the user of the error. Typically, the errors will be reported in a
consistent manner. The app is unlikely to generate a dialog box for one
error and then put an error indicator next to a field for another
error--unless the developers consider them to be different categories of
errors.
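For instance, the two patterns might look something like the sketch
below (made-up field and message, just to illustrate the distinction):

<!-- Pattern 1: a script-generated dialog (in a real app this would
     fire on form submission) -->
<script type="text/javascript">
  alert("Please enter a valid e-mail address.");
</script>

<!-- Pattern 2: an inline indicator next to the field -->
<label for="email">E-mail address</label>
<input type="text" id="email" name="email" />
<strong class="error">Please enter a valid e-mail address.</strong>

Whichever pattern the app uses, testing one representative instance
tells you how that pattern behaves with assistive technology.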
If you are concerned that the developers may only fix those parts of the
app you tested, then make certain the parts of the app you test include
all the most important functionality. At a later date, after they have
fixed parts of the app you cited in your report, you can start
evaluating additional areas of their app.
Also, for every accessibility problem you find, you should describe a
solution (with code examples) if at all possible. Most developers are
under a time crunch. They don't necessarily have the time to research
solutions. If you give them the option to use a defined solution, you
make it much easier for them to fix the problem within whatever time
constraints they have.
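For example, for the javascript: links you mentioned, the write-up could
show the fix right next to the problem; something along these lines (a
sketch with made-up names, not a prescription):

<!-- Before: no real URL, so there is nothing to follow without
     JavaScript and nothing for a checker to crawl -->
<a href="javascript:openReport('summary')">Summary report</a>

<!-- After: a real URL as the fallback; the script still opens the
     popup and cancels the default navigation -->
<a href="reports/summary.html"
   onclick="window.open(this.href); return false;">Summary report</a>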
Thanks,
Tim
From: Sam Foster
Date: Tue, Mar 21 2006 10:50PM
Subject: Re: testing web apps for accessibility
Mark Magennis wrote:
>Sam,
>
>Looks like you have an interesting, challenging but very worthwhile job
>there. Rejoice! We are so fortunate to have jobs like these.
>
>I have a fair few ideas about tools and approaches to accessibility
>evaluation. You may already be aware of many of these issues, but just
>in case I'll dump most of my thoughts on you, so apologies if you
>already know a lot of this.
>
>
>
thank you, this was all really useful. So, given we're not testing in
any expectation of success, I think an audit better describes what I'm
trying to do here: get some insight into what problems exist and the
scale of the problem, and produce next steps - which might include user
testing. I think what
I might do is ask the customer care folks to see if there are any real
scenarios that they've logged which I can use to guide this.
>If you
>try to user test an application with lots of technical barriers, you can
>waste your time getting stuck in problem after problem that you already
>know about and not end up learning much.
>
that's exactly where I'm at now, and what I'm trying to work around.
Maybe if I reduce the scope drastically and just address a handful of
representative screens from a sample workflow, I can get a more rounded
report that takes the Section 508 and WCAG guidelines and provides some
detail on where the app fails and how it could be remedied.
>A lot of auditors now use
>either the AIS accessibility toolbar for Internet Explorer which you can
>download free from www.nils.org.au/ais/web/resources/toolbar/index.html
>or the Web developer extension for Firefox which you can download free
>from www.chrispederick.com/work/firefox/webdeveloper/.
>
I've used both for some time, but hadn't considered really using them
here because of the manual (and subjective) process it implies. But I
think your point is that this is a necessarily manual and subjective
process, with quality of results being more important than quantity.
Thank you again for such a thoughtful post,
Sam Foster
From: Sam Foster
Date: Tue, Mar 21 2006 11:00PM
Subject: Re: testing web apps for accessibility
Kynn Bartlett wrote:
> Unless Bobby testing is mandated somehow, why not just do user testing
> using actual live users with a variety of disabilities and assistive
> technologies?
For the reasons that Mark outlined - we've already ascertained that it's
pretty broken, and user testing would be a lengthy and costly exercise
in frustration and futility at this point. It's something I'd love to
do, though, on a product that we felt more sure of. Using Bobby isn't
mandated in any way; it's basically a yes/no answer they're looking for
(is it accessible or isn't it). But a "no" will be quickly followed by
"no in what way, and what do we need to do to fix it?".
Sam
From: Sam Foster
Date: Tue, Mar 21 2006 11:10PM
Subject: Re: testing web apps for accessibility
>Of course, having users with disabilities performing tasks with the app
>won't find all the problems in the app, but it should find the major
>problems with performing those tasks.
>
>
>
What troubles me with this is: if the test users do stumble through all
or some of the application, how do I interpret that result - does it
mean the app is accessible? In other words, how do I know whether these
users are *representative*, when we are talking about a huge potential
range of disabilities and circumstances that could all highlight
different problems? I realize there's no such thing as certifying
something as accessible, but I'd want to avoid misleading results.
>Another thing to remember is that applications tend to have a consistent
>user interface. To increase efficiency, you only need to test sections
>of the interface that are unique or that are representative of a
>repeated user interface.
>
Very true.
Thanks again,
Sam