E-mail List Archives
Screen Reader tests after code validation
From: Langum, Michael J
Date: Mar 8, 2010 9:06AM
Until now, we have based our Section 508 testing and remediation on careful reviews of HTML code and PDF tags, rather than simply listening to a screen reader's rendition of the content. We have assumed that if the content meets standards and follows best practices, it will be usable in JAWS.
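For illustration, here is a minimal sketch of the kind of automated, code-level check I mean. It assumes Python with BeautifulSoup; the tooling and the specific rules are just examples, not a description of our actual test suite:

# A minimal sketch of an automated code-level check, assuming
# Python and BeautifulSoup; the rules below are illustrative only.
from bs4 import BeautifulSoup

def check_html(html):
    """Flag a few common Section 508 code-level issues."""
    soup = BeautifulSoup(html, "html.parser")
    problems = []
    # Images must carry an alt attribute (even an empty one for decorative images).
    for img in soup.find_all("img"):
        if not img.has_attr("alt"):
            problems.append("img missing alt: %s" % img.get("src", "?"))
    # Form fields should be associated with a label via matching for/id.
    labeled_ids = {lab.get("for") for lab in soup.find_all("label")}
    for field in soup.find_all(["input", "select", "textarea"]):
        if field.get("type") in ("hidden", "submit", "button"):
            continue
        if field.get("id") not in labeled_ids:
            problems.append("form field missing label: %s" % field.get("name", "?"))
    # Data tables should declare header cells.
    for table in soup.find_all("table"):
        if not table.find("th"):
            problems.append("table with no th header cells")
    return problems

Checks like these catch structural problems in the markup, but they can't tell us how the page actually sounds in JAWS, which is what prompts this question.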
But I'm wondering whether we should rethink this approach. Perhaps a final "test with a screen reader" review would add more value than it would cost in additional time, software, hardware, and training.
I am interested in the group's wisdom regarding:
1. How much added value is there in testing content in JAWS, after it has been evaluated at the code/tag level using automated and manual methods?
2. If we add JAWS testing to our program, should we get the JAWS Standard or the JAWS Professional edition?
3. Should JAWS evaluations cover every word of every document (even larger ones), or is a policy of spot-testing randomly selected content adequate?
4. Is the "JAWS for developers" training offered by SSB Bart (or some other vendor I do not know of) worth the cost - compared to self-teaching based on the JAWS "help files?"
I'm also interested in any other "words of wisdom."
-- Mike