WebAIM - Web Accessibility In Mind

E-mail List Archives

Thread: Screen Reader tests after code validation


Number of posts in this thread: 11 (In chronological order)

From: Langum, Michael J
Date: Mon, Mar 08 2010 9:06AM
Subject: Screen Reader tests after code validation

Until now, we have based our 508 testing and remediation on careful reviews of HTML code and PDF tags (rather than simply listening to a screen reader rendition of the content). We have assumed that if the content meets standards and best practices, then it will be usable in JAWS.

But I'm wondering if we should re-think this approach. Maybe a final "test with a screen reader" review would add more value than it would cost in terms of additional time, software, hardware, and training.

I am interested in the group's wisdom regarding:

1. How much added value is there in testing content in JAWS, after it has been evaluated at the code/tag level using automated and manual methods?
2. If we are to add JAWS testing to our program, should we get JAWS Standard version, or JAWS Professional version?
3. Should JAWS evaluations be done for every word of every document (even in larger documents), or is a policy of spot testing randomly selected content adequate?
4. Is the "JAWS for developers" training offered by SSB Bart (or some other vendor I do not know of) worth the cost - compared to self-teaching based on the JAWS "help files?"

I'm also interested in any other "words of wisdom."

-- Mike

From: Karlen Communications
Date: Mon, Mar 08 2010 9:18AM
Subject: Re: Screen Reader tests after code validation

The risk you run is that documents become dependent on one adaptive
technology. It also does not take into consideration the skill of the
end-user, or the version of the AT, browser, or Adobe Reader. So if you are
at any time indicating that a document will "work with an adaptive
technology," you are risking a refocus on the AT rather than on the
creation of a compliant document.

I use JAWS - I am a JAWS user - as part of QA, in that I can check form
controls, headings, and links using the keyboard commands JAWSKey + F5, F6,
and F7 respectively, and I will look at potential problem areas such as
complex tables. But the bulk of my work with web sites and PDF documents,
as with Office documents, is focused on the standards and guidelines for an
accessible document, not on whether it will "work with JAWS" or ZoomText or
Window-Eyes, or ....

What I check for is whether I've missed something or whether the headings
are in hierarchical order...so it's a synopsis of what I've done rather
than a test to see if it will work with the AT.
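
That kind of heading-order check can also be done directly on the markup,
independent of any particular AT. A minimal sketch, assuming Python and its
standard-library html.parser (the class name here is made up for the
example, not any real tool):

from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    """Collect h1-h6 tags and flag any level that jumps by more than one."""
    def __init__(self):
        super().__init__()
        self.previous_level = 0
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.previous_level and level > self.previous_level + 1:
                self.problems.append("h%d followed by h%d (skipped a level)"
                                     % (self.previous_level, level))
            self.previous_level = level

checker = HeadingOrderChecker()
checker.feed("<h1>Title</h1><h3>Oops, skipped h2</h3><h2>Fine</h2>")
for problem in checker.problems:
    print(problem)   # -> h1 followed by h3 (skipped a level)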

Just a cautionary note when bringing a specific AT into the process.

Cheers, Karen


From: Léonie Watson
Date: Mon, Mar 08 2010 9:33AM
Subject: Re: Screen Reader tests after code validation

1. How much added value is there in testing content in JAWS, after it has been evaluated at the code/tag level using automated and manual methods?

If you can, building some user testing into your development plan can certainly add value. Following web standards, and conducting accessibility checks will get you a good way towards your goal, but user testing can really take things to another level.

2. If we are to add JAWS testing to our program, should we get JAWS Standard version, or JAWS Professional version?

JAWS is only one of many screen readers on the market. As noted below, I'd be cautious about conducting this kind of testing yourself. If you do want to experiment informally, though, try NVDA. It's open source and quite a capable option:
http://www.nvda-project.org

3. Should JAWS evaluations be done for every word of every document (even in larger documents), or is a policy of spot testing randomly selected content adequate?

If you can test a representative sample of pages/content types, that's a good place to start. Working through key user journeys is another useful approach.

4. Is the "JAWS for developers" training offered by SSB Bart (or some other vendor I do not know of) worth the cost - compared to self-teaching based on the JAWS "help files?"

I would urge caution about conducting this kind of testing yourself. Unless you are a full time screen reader user, it's unlikely you'll be able to simulate the same experience that a full time screen reader user would have. Naturally, this can lead to some erroneous results creeping in.



Regards,
Léonie.

--
Nomensa - humanising technology

Léonie Watson | Director of Accessibility
t. +44 (0)117 929 7333



From: Andrew Kirkpatrick
Date: Mon, Mar 08 2010 9:48AM
Subject: Re: Screen Reader tests after code validation

I'll agree with Léonie that you shouldn't rely exclusively on testing by users who aren't full-time users. I do think that there is value in having developers and testers know the basics, however, and I encourage this - our developers find a lot of issues by working this way. We also need to continually readjust their expectations to be in line with how users actually use assistive technologies like JAWS.

4. Is the "JAWS for developers" training offered by SSB Bart (or some other vendor I do not know of) worth the cost - compared to self-teaching based on the JAWS "help files?"

SSB's training is high-quality. It depends on how much time you have to learn - if you work with experienced trainers you will shorten your learning curve and be more effective sooner.

AWK

From: Monir ElRayes
Date: Mon, Mar 08 2010 9:51AM
Subject: Re: Screen Reader tests after code validation

Hi Mike,

I think it is best to test based on verifying the logical structure against
the standards (as you are doing) and to then use JAWS as a sanity check,
possibly on a representative sample of the web pages/PDF files you have
remediated.

For PDF, you may also want to consider using the 'Verify and Remediate'
feature in CommonLook, which "plays" the file based on its logical
structure, thus simulating the functionality of a screen reader (but much
faster).
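
The underlying idea - reading a tagged PDF in its logical (structure tree)
order rather than its visual order - can be sketched in a few lines. A
rough illustration only, assuming the open-source pikepdf library
(CommonLook's actual implementation is not public, so this just shows the
general approach):

import pikepdf

def walk(element, depth=0):
    # /S holds the structure type (e.g. /P, /H1, /Figure) when present.
    if "/S" in element:
        print("  " * depth + str(element["/S"]))
    if "/K" not in element:
        return
    kids = element["/K"]
    # /K may be a single kid or an array of kids; integer kids are
    # marked-content IDs pointing into the page content, skipped here.
    kids = kids if isinstance(kids, pikepdf.Array) else [kids]
    for kid in kids:
        if isinstance(kid, pikepdf.Dictionary):
            walk(kid, depth + 1)

with pikepdf.open("tagged-document.pdf") as pdf:
    if "/StructTreeRoot" not in pdf.Root:
        print("No structure tree: the PDF is not tagged.")
    else:
        walk(pdf.Root["/StructTreeRoot"])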

Best Regards,

Monir ElRayes
President
NetCentric Technologies


From: deblist@suberic.net
Date: Mon, Mar 08 2010 10:06AM
Subject: Re: Screen Reader tests after code validation

Leonie Watson wrote:

> JAWS is only one of many screen readers on the market.

And screen reader users are only one group of people for whom you
are building accessibility. Screen-reader-compliant pages do not
always work for keyboard-only and voice-only users, for example.

I am a strong, strong proponent of real-life testing. But I second
the opinion of everyone else here that testing with JAWS alone can
lead you to believe you have solved all your problems just because
your pages work with JAWS.
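
One concrete check along those lines - flagging elements that have a click
handler but that a keyboard-only user cannot reach at all - can be
sketched roughly as follows, assuming Python and its standard-library
html.parser (the names are illustrative, and "natively focusable" is an
approximation; an <a> needs an href, for instance):

from html.parser import HTMLParser

NATIVELY_FOCUSABLE = {"a", "button", "input", "select", "textarea"}

class ClickTargetChecker(HTMLParser):
    """Flag elements with a click handler that keyboard users cannot reach."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if ("onclick" in attrs and tag not in NATIVELY_FOCUSABLE
                and "tabindex" not in attrs):
            self.problems.append("<%s> has onclick but is not keyboard"
                                 " reachable" % tag)

checker = ClickTargetChecker()
checker.feed('<div onclick="save()">Save</div>'
             '<button onclick="save()">Save</button>')
print(checker.problems)   # flags the <div>, not the <button>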

-deborah

From: Karl Groves
Date: Mon, Mar 08 2010 10:09AM
Subject: Re: Screen Reader tests after code validation

(top-posting mostly because this is a general response to Mike's message).

Mike,

One of the challenges of doing accessibility testing is that no single
method is sufficiently reliable, accurate, or cost-effective. Any single
approach (e.g. automated testing, manual code review, manipulating browser/
hardware settings, testing with assistive technologies, etc.) will have
some significant advantages and some significant disadvantages. In a web
context, automated testing is an excellent way to pore through large
volumes of markup quickly. Unfortunately, it can only reliably cover about
20% of the overall number of things to look for. On the other hand,
manual code review allows a qualified tester not only to find problems
but also to recommend the necessary changes. However, manual code review
has its limitations as well.

There are similar advantages and disadvantages with testing with assistive
technologies. I believe others have done a good job of discussing what
they are. The best approach is to come up with a test methodology which
takes advantage of each type of testing in a way that gets you highly
accurate, reliable, and repeatable results as quickly as possible.
Perform automated testing for those things which automated testing can
find reliably. Perform manual testing for things automated testing can't
find. Then, if you have qualified staff, do functional performance testing
with assistive technologies.
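
A toy sketch of that layering, assuming Python and its standard-library
html.parser (the class name is made up for the example): the automated
pass reports only what a machine can flag reliably - here, images with no
alt attribute at all - and routes everything else to a manual-review
queue, since judging whether existing alt text is meaningful takes a
human.

from html.parser import HTMLParser

class ImageAltTriage(HTMLParser):
    def __init__(self):
        super().__init__()
        self.automated_failures = []   # definite failures a tool can flag
        self.manual_review = []        # needs human judgment

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "(no src)")
        if "alt" not in attrs:
            self.automated_failures.append("%s: missing alt attribute" % src)
        else:
            self.manual_review.append('%s: alt="%s" - is it meaningful?'
                                      % (src, attrs["alt"]))

triage = ImageAltTriage()
triage.feed('<img src="logo.png"><img src="chart.png" alt="chart">')
print(triage.automated_failures)   # ['logo.png: missing alt attribute']
print(triage.manual_review)        # ['chart.png: alt="chart" - is it meaningful?']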

As for SSB's training, I'll refrain from responding to that because I'm
clearly biased, and will simply thank Andrew for his kind words. ;-) If
you want further details, feel free to give me a shout.

Karl Groves
Director of Strategic Planning and Product Development
SSB BART Group
= EMAIL ADDRESS REMOVED =
703.637.8961 (o)
443.517.9280 (c)
http://www.ssbbartgroup.com

Accessibility-On-Demand

Register Now to Attend Free Accessibility Webinars:
https://www.ssbbartgroup.com/webinars.php



From: ckrugman@sbcglobal.net
Date: Mon, Mar 08 2010 10:18PM
Subject: Re: Screen Reader tests after code validation

It is probably important to note that while JAWS is one of the more widely
used screen readers, there are others on the market that function
differently. While I use JAWS exclusively, I am told that other screen
readers such as Window-Eyes, NVDA, or System Access To Go react and
respond differently.
Chuck

From: Shawn Henry
Date: Tue, Mar 09 2010 12:45PM
Subject: Re: Screen Reader tests after code validation

Léonie Watson wrote:
> 1. How much added value is there in testing content in JAWS, after it has been evaluated at the code/tag level using automated and manual methods?
>
> If you can, building some user testing into your development plan can certainly add value. Following web standards, and conducting accessibility checks will get you a good way towards your goal, but user testing can really take things to another level.

Here is a resource to help with including real users:
Just Ask: Integrating Accessibility Throughout Design
at http://www.uiAccess.com/JustAsk/

(It's a whole book online free thanks to sponsors.)

> 4. Is the "JAWS for developers" training offered by SSB Bart (or some other vendor I do not know of) worth the cost - compared to self-teaching based on the JAWS "help files?"
>
> I would urge caution about conducting this kind of testing yourself. Unless you are a full time screen reader user, it's unlikely you'll be able to simulate the same experience that a full time screen reader user would have. Naturally, this can lead to some erroneous results creeping in.

More about this is under "Screening techniques are not simulations" on http://www.uiaccess.com/accessucd/screening.html

Hope this helps.

Best,
~Shawn



-----------
Shawn Henry
+1-617-395-7664
= EMAIL ADDRESS REMOVED =
www.uiAccess.com/profile.html
-----------------------------

From: Hoffman, Allen
Date: Tue, Mar 09 2010 1:24PM
Subject: Re: Screen Reader tests after code validation

> 1. How much added value is there in testing content in JAWS, after it
> has been evaluated at the code/tag level using automated and manual
> methods?

Very, very little--and in fact it can be negative. For example, if the
code is right and JAWS (some version) has a bug, and then JAWS (another
version) is fixed, do you change the code to make one version of JAWS
work right? Do you then test with another screen reader, which may have
other "features"? If you have a solid process to consistently and
accurately test for Section 508 compliance, it is the AT's responsibility
to provide the specific type of access.



> 2. If we are to add JAWS testing to our program, should we get JAWS
> Standard version, or JAWS Professional version?

Go with the lowest cost you can. If you are not writing scripts, you can
probably use the Standard version--however, this varies with the
workstation platform you use.



> 3. Should JAWS evaluations be done for every word of every document (even
> in larger documents), or is a policy of spot testing randomly selected
> content adequate?

Don't use JAWS to read documents at all; it won't really resolve your
compliance problems.


> 4. Is the "JAWS for developers" training offered by SSB Bart (or some
> other vendor I do not know of) worth the cost - compared to self-teaching
> based on the JAWS "help files?"

You only need developer training for JAWS if you plan to write scripts.
Get your developers to understand how the standards apply to their
products, and how to assess their products consistently.



> I'm also interested in any other "words of wisdom."

For Section 508 compliance, the intent is that the whole set of standards
be examined to determine which are applicable, and then those applicable
standards be applied. It is not intended that only one category be
examined for a product. This means that Web and software standards often
both apply to the same content, depending on the combination found.

Clearly understanding how to test interactive and noninteractive content
together is key to having visibility into your Section 508 compliance,
and accessibility overall. For example, Flash is now so often intermixed
with static HTML content that it is almost ubiquitous. Also, don't treat
"Web 2.0" or "dynamic" interactive content differently from other
content--it's just content with interactive and noninteractive elements
and can be evaluated accordingly. Assessing Web 2.0 content does not
require a change of standards, but it does require a solid grasp of how
to apply the current Web and software standards appropriately.




From: Wayne Dick
Date: Fri, Mar 12 2010 3:54PM
Subject: Re: Screen Reader tests after code validation

My philosophy is this:

If a page passes a good validation (say, WCAG 2.0 Level AA), including
whatever manual evaluation is needed, then the developer is done. If a
screen reader cannot read it, you should probably contact the
manufacturer of the screen reader: you have a real bug to report. Don't
code around it; your fix may harm another disability group, and the
screen reader maker will not discover their bug.

Wayne