WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: Screen Reader tests after code validation

From: Karl Groves
Date: Mar 8, 2010 10:09AM


(top-posting mostly because this is a general response to Mike's message).

Mike,

One of the challenges of doing accessibility testing is that no single
method is sufficiently reliable, accurate, or cost-effective. Any single
approach (i.e. automated, manual code review, manipulating browser/
hardware settings, testing with assistive technologies, etc.) will have
some significant advantages and some significant disadvantages. In a web
context, automated testing is an excellent way to pore through large
volumes of markup quickly. Unfortunately, it can only reliably cover
about 20% of the overall number of things to look for. On the other hand,
manual code review allows a qualified tester not only to find
problems but also to recommend the necessary changes. However, manual code
review has its limitations as well.
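To make the division of labor concrete: a missing alt attribute on an image is the kind of issue automated testing can find reliably, because it is a purely mechanical check against the markup. Below is a minimal sketch of such a check using only Python's standard library; it is an illustration of the category of test, not any particular vendor's tool, and the class name is my own invention.

```python
# Hedged sketch: the kind of check an automated accessibility tool can
# perform reliably -- flagging <img> elements that lack an alt attribute.
# (Whether an existing alt value is *meaningful* still requires a human.)
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) positions of offending <img> tags

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; convert for a simple lookup
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png"><img src="x.png" alt="Logo"></p>')
print(checker.violations)  # only the first <img>, which has no alt text
```

Note what the script cannot do: it will happily pass `alt="image123.jpg"`, which is exactly why the manual review step remains necessary.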

There are similar advantages and disadvantages with testing with assistive
technologies. I believe others have done a good job of discussing what
they are. The best approach is to come up with a test methodology which
takes advantage of each type of testing in a way that gets you highly
accurate, reliable, and repeatable results as quickly as possible.
Perform automated testing for those things which automated testing can
find reliably. Perform manual testing for things automated testing can't
find. Then, if you have qualified staff, do functional performance testing
with assistive technologies.

As for SSB's training, I'll refrain from response on that because I'm
clearly biased on that one, and simply thank Andrew for his kind words.
;-) If you want further details, feel free to give me a shout.


Karl Groves
Director of Strategic Planning and Product Development
SSB BART Group
<EMAIL REMOVED>
703.637.8961 (o)
443.517.9280 (c)
http://www.ssbbartgroup.com

Accessibility-On-Demand

Register Now to Attend Free Accessibility Webinars:
https://www.ssbbartgroup.com/webinars.php


> -----Original Message-----
> From: <EMAIL REMOVED> [mailto:webaim-forum-
> <EMAIL REMOVED> ] On Behalf Of Langum, Michael J
> Sent: Monday, March 08, 2010 10:06 AM
> To: 'WebAIM Discussion List'
> Subject: [WebAIM] Screen Reader tests after code validation
>
> Until now, we have based our 508 testing and remediation on careful
> reviews of HTML code and PDF tags (rather than simply listening to a
> screen reader rendition of the content). We have assumed that if the
> content meets standards, and best practices, then it will be usable in
> JAWS.
>
> But I'm wondering if we should re-think this approach. Maybe a final
> "test with a screen reader" review would add more value than it would
> cost in terms of additional time, software, hardware, and training.
>
> I am interested in the group's wisdom regarding:
>
> 1. How much added value is there in testing content in JAWS, after it
> has been evaluated at the code/tag level using automated and manual
> methods?
> 2. If we are to add JAWS testing to our program, should we get JAWS
> Standard version, or JAWS Professional version?
> 3. Should JAWS evaluations be done for every word of every document
> (even in larger documents), or is a policy of spot testing randomly
> selected content adequate?
> 4. Is the "JAWS for developers" training offered by SSB Bart (or some
> other vendor I do not know of) worth the cost - compared to self-
> teaching based on the JAWS "help files?"
>
> I'm also interested in any other "words of wisdom."
>
> -- Mike
>
>