WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: Screen Reader tests after code validation


From: Léonie Watson
Date: Mar 8, 2010 9:33AM


1. How much added value is there in testing content in JAWS, after it has been evaluated at the code/tag level using automated and manual methods?

If you can, building some user testing into your development plan can certainly add value. Following web standards, and conducting accessibility checks will get you a good way towards your goal, but user testing can really take things to another level.
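As a rough illustration of the automated, code-level checks mentioned above, here is a minimal sketch (using Python's standard library, purely as an example; real validators such as WAVE cover far more rules) that flags images with no alt attribute:

```python
# Minimal sketch of an automated code-level accessibility check:
# find <img> elements that lack an alt attribute entirely.
# Illustrative only; not a substitute for a full validator.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of any <img> tag with no alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.problems.append(dict(attrs).get("src", "(no src)"))

html = '<p><img src="logo.png" alt="Company logo"><img src="chart.png"></p>'
checker = MissingAltChecker()
checker.feed(html)
print(checker.problems)  # images missing alt text
```

Checks like this catch mechanical omissions, but only a person can judge whether the alt text that *is* present actually makes sense in context, which is where user testing comes in.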

2. If we are to add JAWS testing to our program, should we get JAWS Standard version, or JAWS Professional version?

JAWS is only one of many screen readers on the market. As noted below, I'd be cautious about conducting this kind of testing yourself. If you do want to experiment informally though, try NVDA. It's open source and quite a capable option:
http://www.nvda-project.org

3. Should JAWS evaluations be done for every word of every document (even in larger documents), or is a policy of spot testing randomly selected content adequate?

If you can test a representative sample of pages/content types, that's a good place to start. Working through key user journeys is another useful approach.

4. Is the "JAWS for developers" training offered by SSB Bart (or some other vendor I do not know of) worth the cost - compared to self-teaching based on the JAWS "help files?"

I would urge caution about conducting this kind of testing yourself. Unless you are a full-time screen reader user, it's unlikely you'll be able to simulate the experience that someone who relies on the software every day would have. Naturally, this can lead to some erroneous results creeping in.



Regards,
Léonie.

--
Nomensa - humanising technology

Léonie Watson | Director of Accessibility
t. +44 (0)117 929 7333