E-mail List Archives
Thread: screen reader versions for testing
Number of posts in this thread: 13 (In chronological order)
From: Delisi, Jennie (MNIT)
Date: Fri, Oct 28 2016 12:21PM
Subject: screen reader versions for testing
Hello,
I'm interested in feedback from those who test websites and documents for accessibility against the standards and also use screen readers as part of their testing protocols:
- How many versions do you test with? For example, if there is a version 17 and a version 18 of the same screen reader, do you test with the current version and one version back?
- For websites and documents that will be viewed by the public, do you test with a different number of versions than for documents that will only be used internally? For example, employees may be expected to have the latest version of a particular screen reader (with maybe one version back for a period of time), but the public may have varying amounts of resources to put toward upgrades.
Thanks in advance for any information you can share. I will be cross posting this on the IAAP list and LinkedIn.
Jennie
Jennie Delisi
Accessibility Analyst | Office of Accessibility
Minnesota IT Services | Partners in Performance
658 Cedar Street
St. Paul, MN, 55155
O: 651-201-1135
Information Technology for Minnesota Government|mn.gov/mnit
From: Moore,Michael (Accessibility) (HHSC)
Date: Fri, Oct 28 2016 1:42PM
Subject: Re: screen reader versions for testing
Generally we test with the current release of JAWS. This is after we have thoroughly analyzed the code. If we run into unexpected problems, we will then test with the current release of NVDA, older versions of JAWS, more browsers, etc. What we are doing at that point is attempting to determine who to file the defect with, what possible workarounds exist, and whether we can justify changing code that is technically compliant.
Mike Moore
Accessibility Coordinator
Texas Health and Human Services Commission
Civil Rights Office
(512) 438-3431 (Office)
From: Mallory
Date: Fri, Oct 28 2016 3:53PM
Subject: Re: screen reader versions for testing
Kinda the same here: we test with current (sometimes the tester doesn't have the absolute most-current either), and for things that don't work, we look them up to see if it's some known bug, whether it was fixed, and which versions were affected. In general we try to stay up to date on the big bugs, so that if something works in a current version we stay aware that it may fail in an older one.
One thing I try to keep in mind is that wealthy web developers tend to have the latest and greatest. The same usually can't be said for our customers, so it's dangerous for us to assume "passes in latest" = works for everyone.
cheers,
_mallory
From: Beranek, Nicholas
Date: Fri, Oct 28 2016 8:45PM
Subject: Re: screen reader versions for testing
We test primarily with the latest versions of NVDA and Firefox. We've found that JAWS will compensate for bad coding practices (e.g., a missing programmatic label where adjacent text is present), so there is a chance it misses certain issues. If there is ever any question about the results from NVDA, we'll try another browser. If the issue persists, we try another screen reader such as JAWS. Sometimes we'll find that it's simply a user agent issue, and we'll do our best to file a bug.
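To illustrate the kind of markup in question (this sketch is mine, not from the original post, and the field names are invented): in the first case the label text is only visually adjacent, so JAWS may guess it from context while other screen readers announce nothing useful; in the second case the label is programmatically associated and exposed to every screen reader.
  <!-- Missing programmatic label: the text is only visually adjacent to the field -->
  <p>Email address</p>
  <input type="text" name="email">
  <!-- Programmatic label: announced reliably by any screen reader -->
  <label for="email-field">Email address</label>
  <input type="text" id="email-field" name="email">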
For responsive design, we'll test with the latest version of iOS using VoiceOver and Safari. By applying subject matter expertise (knowledge of the guidelines; the nuances between screen readers, browsers, and operating systems; front-end development experience; other tools such as aXe and MSAA Object Inspect; the community; ARIA design patterns; and more), we're able to show that we've done our absolute best.
I hope this helps,
Nick Beranek
Capital One
From: JP Jamous
Date: Sat, Oct 29 2016 5:07AM
Subject: Re: screen reader versions for testing
Here we test with the latest versions of JAWS/Internet Explorer, NVDA/Firefox, and VoiceOver/Safari.
It is a bit hard to find the happy medium, as all three screen readers render HTML markup differently. To get there, we try to focus on proper semantics whenever we can. When that is not possible, we notice that NVDA and VoiceOver tend to behave similarly, but JAWS is different since it drills deeper into the markup.
We do test every now and then with older versions of the three screen readers when we run into an issue. As a good example, aria-describedby and aria-labelledby were not supported with VoiceOver on iOS 10. We tested our code against iOS 9.4 and found that it worked fine; that was when we realized it was a bug on Apple's side.
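For readers unfamiliar with the two attributes mentioned above, here is a minimal sketch (the ids are invented for the example): aria-labelledby points to the element that names the control, and aria-describedby points to supplementary descriptive text; both are resolved by the browser and exposed to the screen reader through the accessibility tree.
  <!-- Accessible name comes from pwd-label; the description comes from pwd-hint -->
  <span id="pwd-label">Password</span>
  <input type="password" aria-labelledby="pwd-label" aria-describedby="pwd-hint">
  <span id="pwd-hint">Must be at least 12 characters long.</span>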
From: Birkir R. Gunnarsson
Date: Sun, Oct 30 2016 5:20AM
Subject: Re: screen reader versions for testing
We generally test with NVDA (current - 1) and Firefox (current - 2).
NVDA is free and open source (so available to the general user at no cost), has good visual tools to help developers, and does not hide accessibility issues the way JAWS does (I appreciate JAWS trying to fill in the gap for end users, but that makes it a bad tool for testing).
Since I am heavily involved in development and testing of content, I sanity check it with JAWS and IE, and we try to file bugs and work around the most critical problems we see occurring in that combination.
For responsive web, we use the latest iOS (because upgrading is easy) on an iPhone 6 in portrait mode (testing in portrait and landscape on both phone and tablet adds a lot of overhead very quickly).
Generally, banks recommend that users upgrade to the latest versions of browsers for security reasons.
We are looking into testing at least key pages with screen
magnification and speech recognition as well.
Of course we focus primarily on making sure our code validates and that our ARIA, when we use it, is correct.
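As a hypothetical illustration of what "correct ARIA" means in practice (this sketch is mine, and the save() handler is invented for the example): a native element carries its role, focusability, and keyboard behaviour for free, while an ARIA retrofit has to supply all three explicitly.
  <!-- Native control: role, focus, and keyboard activation come for free -->
  <button type="button" onclick="save()">Save</button>
  <!-- ARIA retrofit: the author must add the role, tabindex, and key handling -->
  <div role="button" tabindex="0" onclick="save()"
       onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">Save</div>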
Cheers
From: Kevin Chao
Date: Sun, Oct 30 2016 11:36AM
Subject: Re: screen reader versions for testing
I've been doing lots of a11y testing using ChromeVox Next
<http://www.chromevox.com/next.html> and TalkBack
<https://play.google.com/store/apps/details?id=com.google.android.marvin.talkback&hl=en>
with Chrome. I've found them to be comparable to, or better than, VoiceOver on Mac/iOS. In the past half year there have been lots of excellent improvements to Google's screen readers and browsers, so I strongly recommend factoring these into your AT test matrix.
From: Birkir R. Gunnarsson
Date: Sun, Oct 30 2016 2:40PM
Subject: Re: screen reader versions for testing
The reason I have not encouraged testing with ChromeVox is that it is very rarely used.
According to the latest WebAIM screen reader user survey:
http://webaim.org/projects/screenreadersurvey6/
it is the primary screen reader for 0.3% of respondents, while NVDA is in the 15% range (and much higher if you take secondary screen readers into account).
Sadly, it is not enough to make sure the webpage code conforms to standards; it needs to be tested with at least one assistive technology, usually a screen reader, and that usually means learning and implementing some workarounds to address the quirks of that particular screen reader.
If I am putting effort into that, I want to make sure I use a popular screen reader, so those workarounds are noticed.
Of course screen reader usage patterns change, and we all should keep a close eye on the WebAIM survey (and other usage statistics if they become available).
A thumbs up to WebAIM for taking the initiative to carry out this survey. It is incredibly valuable when recommending and formulating a corporate accessibility testing strategy; management wants justification and numbers behind all recommendations.
The Android/TalkBack development is exciting and I am keeping a close prosthetic eye on it in case it surpasses VoiceOver use on the responsive web in the near future. It could maybe do that, seeing as Google is doing well while the latest Apple upgrades are a bit underwhelming (well, in my personal opinion, that is).
-B
From: Mallory
Date: Wed, Nov 02 2016 3:11PM
Subject: Re: screen reader versions for testing
We (my work) may have to start testing with ChromeVox because we make education products, and it seems that in the US Chromebooks are becoming popular:
http://www.theverge.com/2016/5/19/11711714/chromebooks-outsold-macs-us-idc-figures
From: Mallory
Date: Wed, Nov 02 2016 3:13PM
Subject: Re: screen reader versions for testing
One thing to be aware of for the future is that even JAWS might not be able to do some of this later on. I know both JAWS and NVDA do some sniffing around the DOM and use heuristics to make guesses about things, but for example this isn't allowed any more in Edge (for security reasons, they've said), and I would expect most of the other browser vendors to do the same, for the same reasons. I'm not sure what screen readers and other AT will do about that, since the reason they do it is poor authoring.
_mallory
From: Sean Murphy
Date: Wed, Nov 02 2016 6:37PM
Subject: Re: screen reader versions for testing
That is interesting in relation to Edge. I will have to check out the status of Edge support in JAWS. I believe NVDA provides some support now.
Sean
From: Kevin Chao
Date: Wed, Nov 02 2016 9:49PM
Subject: Re: screen reader versions for testing
Edge is supported by NVDA, and JAWS 18 will support it in December.
From: JP Jamous
Date: Thu, Nov 03 2016 2:54AM
Subject: Re: screen reader versions for testing
NVDA has supported it ever since Windows 10 came out; I tested it with NVDA 2015. JAWS is the one lagging behind. Let's hope it is full, robust support and not something that will be hair-pulling.