WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: Screen Reader tests after code validation

From: Hoffman, Allen
Date: Mar 9, 2010 1:24PM


> 1. How much added value is there in testing content in JAWS, after it
> has been evaluated at the code/tag level using automated and manual
> methods?

Very little--and in fact it can be negative. For example, if the code
is right but one version of JAWS has a bug, and a later version fixes
it, do you change your code to make a particular version of JAWS work
right? Do you then test with another screen reader, which may have its
own "features"? If you have a solid process to consistently and
accurately test for Section 508 compliance, it is the assistive
technology's (AT's) responsibility to provide the specific type of
access.
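As a concrete illustration of the "code/tag level, automated" testing the original poster describes, here is a minimal sketch (not from this thread, and much simpler than a real audit toolchain) that flags one common Section 508 issue, `img` elements missing an `alt` attribute, using only the Python standard library:

```python
# Hypothetical illustration: a minimal markup-level accessibility check
# (img tags with no alt attribute), using only Python's stdlib.
# Real 508 audits use fuller tools; this just shows the idea of
# testing the code rather than listening to a screen reader.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collects the source position of every <img> lacking an alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column)


def find_missing_alt(html_text):
    checker = MissingAltChecker()
    checker.feed(html_text)
    return checker.violations


if __name__ == "__main__":
    sample = '<p><img src="a.png" alt="Logo"><img src="b.png"></p>'
    print(find_missing_alt(sample))  # flags only the second img
```

A check like this passes or fails against the standard itself, independent of which screen reader (or which version of it) happens to be installed, which is the point of the answer above.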



> 2. If we are to add JAWS testing to our program, should we get JAWS
> Standard version, or JAWS Professional version?

Go with the lowest-cost option you can. If you are not writing scripts,
you can probably use the Standard version--however, this varies with the
workstation platform you use.



> 3. Should JAWS evaluations be done for every word of every document
> (even in larger documents), or is a policy of spot testing randomly
> selected content adequate?

Don't use JAWS to read documents at all; it won't really resolve your
compliance problems.


> 4. Is the "JAWS for developers" training offered by SSB Bart (or some

> other vendor I do not know of) worth the cost - compared to
self-teaching
> based on the JAWS "help files?"
You only need developer training for JAWS if you plan to write scripts.
Get your developers to understand how the standards apply to their
products, and how to assess their products consistently.



> I'm also interested in any other "words of wisdom."

For Section 508 compliance, the intent is that you examine the whole
set of "standards," determine which are applicable, and then apply
those; it is not intended that only one category be examined per
product. This means the Web and software standards often both apply to
a given piece of content, depending on the combination found.
Clearly understanding how to test interactive and noninteractive
content together is key to gaining visibility into your Section 508
compliance, and into accessibility overall. For example, Flash is now
so often intermixed with static HTML content that the combination is
almost ubiquitous. Likewise, don't view "Web 2.0" dynamic interactive
content differently from anything else--it is just content with
interactive and noninteractive elements and can be evaluated
accordingly. Assessing Web 2.0 content does not require a change of
standards, but it does require a solid grasp of how to apply the
current Web and software standards appropriately.





-----Original Message-----
From: <EMAIL REMOVED> [mailto: <EMAIL REMOVED> ]
Sent: Monday, March 08, 2010 11:20 PM
To: WebAIM Discussion List
Subject: Re: [WebAIM] Screen Reader tests after code validation

It is probably important to note that while JAWS is probably one of the
more widely used screen readers, there are others on the market that
function differently. While I use JAWS exclusively, I am told that
other screen readers such as Window-Eyes, NVDA, or System Access To Go
react and respond differently.
Chuck
----- Original Message -----
From: "Langum, Michael J" < <EMAIL REMOVED> >
To: "'WebAIM Discussion List'" < <EMAIL REMOVED> >
Sent: Monday, March 08, 2010 7:06 AM
Subject: [WebAIM] Screen Reader tests after code validation


> Until now, we have based our 508 testing and remediation on careful
> reviews of HTML code and PDF tags (rather than simply listening to a
> screen reader rendition of the content). We have assumed that if the
> content meets standards, and best practices, then it will be usable in
> JAWS.
>
> But I'm wondering if we should re-think this approach. Maybe a final
> "test with a screen reader" review would add more value than it would
> cost in terms of additional time, software, hardware, and training.
>
> I am interested in the group's wisdom regarding:
>
> 1. How much added value is there in testing content in JAWS, after it
> has been evaluated at the code/tag level using automated and manual
> methods?
> 2. If we are to add JAWS testing to our program, should we get JAWS
> Standard version, or JAWS Professional version?
> 3. Should JAWS evaluations be done for every word of every document
> (even in larger documents), or is a policy of spot testing randomly
> selected content adequate?
> 4. Is the "JAWS for developers" training offered by SSB Bart (or some
> other vendor I do not know of) worth the cost - compared to
> self-teaching based on the JAWS "help files?"
>
> I'm also interested in any other "words of wisdom."
>
> -- Mike