WebAIM - Web Accessibility In Mind

E-mail List Archives

Thread: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?


Number of posts in this thread: 18 (In chronological order)

From: glen walker
Date: Wed, Jun 03 2020 8:38AM
Subject: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
No previous message | Next message →

Not in functionality or features but in how it interprets the accessibility
tree. For example,

First Name <input>

If a label is not associated with an input element, NVDA will not "guess"
at what the label should be. It won't say anything except "edit". Both
JAWS and VoiceOver (Mac and iOS) will say "First Name" for the label even
though it's not in the accessibility tree.
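For context, here is a minimal sketch of the two patterns (hypothetical
markup; the for/id values are only illustrative):

<!-- Unassociated: the visible text is not linked to the input,
     so the accessible name is empty -->
First Name <input type="text">

<!-- Associated: the label is linked via for/id,
     so "First Name" is in the accessibility tree -->
<label for="first-name">First Name</label>
<input type="text" id="first-name">

With the second pattern the name really is in the accessibility tree, so
all three screen readers announce it without guessing.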

So for testing purposes, NVDA is more "pure" and can help find a11y bugs.
I know JAWS has some built-in heuristics for fixing bad HTML but wasn't
sure what VoiceOver has built in. With respect to input labels, JAWS and
VoiceOver seem to work the same. Are there other heuristics that are
similar between the two?

One of the reasons I'm asking is that a customer wants to do all their
testing on the Mac. I was trying to convince them that bugs might be
missed because VoiceOver is trying to be nice, but I wasn't sure how many
things VO is nice about. Is there a list of heuristics that both JAWS and
VoiceOver use to overcome bad HTML?

From: Steve Green
Date: Wed, Jun 03 2020 8:54AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Since the heuristics are a significant piece of intellectual property for an AT vendor, I would be very surprised if any vendor published theirs. That said, I too would be very interested if there are such lists.

I would also make the point (again) that there is a difference between doing a WCAG audit and an accessibility audit. It is not necessary to use any assistive technologies when doing a WCAG audit, and arguably you should not use them. A WCAG audit should be done by inspection of the code and user interface, using tools where they are helpful. The behaviour of assistive technologies is irrelevant and unhelpful.

By contrast, an accessibility audit can be anything you want it to be, and you may well choose to include testing the user experience with one or more screen readers. The choice of operating system, browser and AT should be determined by factors such as your audience and contractual obligations. I would go so far as to say it's unprofessional to test with a particular platform simply because it's what you've got or what you want to use.

Steve Green
Managing Director
Test Partners Ltd



From: glen walker
Date: Wed, Jun 03 2020 9:50AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Yeah, I figured the heuristics might be IP so was looking for more
anecdotal info. Anything people might have observed when comparing screen
readers. The missing label is the most common one I'm aware of.

I don't totally agree with your WCAG audit definition, but that might be
more about terminology.

> A WCAG audit should be done by inspection of the code and user interface,
> *using tools* where they are helpful.

I consider a screen reader a *tool* used to test for WCAG conformance. It
helps me find bugs. I'm not using the screen reader to emulate a user
experience. I treat a screen reader like I do a color contrast analyzer or
html validator or a page scanning tool or a bookmarklet. It's just one of
many tools in my toolbox.




From: Steve Green
Date: Wed, Jun 03 2020 12:23PM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

By all means use a screen reader to help find bugs, but as with all tools you need to be aware that it will lie to you, so you need to protect yourself against that. As we find better testing tools and techniques (mostly single purpose tools) I find I hardly use a screen reader at all during a WCAG audit. The bugs and heuristics have caused me to lose confidence in anything they tell me - they are probably the most inaccurate tool in our toolbox.

Perhaps I am more sensitive to this than most people because I have seen so many so-called accessibility consultants do really bad WCAG audits because they used screen readers instead of all the other tools that are available to us, which give more accurate results. Worse still, they invariably report the WCAG non-conformances in terms of the screen reader behaviour instead of the coding (probably because they don't understand the code, whereas the screen reader behaviour is easy to describe).

Another heuristic is that JAWS has various ways of deciding if a <table> element is a layout table, in which case it does not announce the presence of the table, the table navigation shortcuts don't work and the contents of the <td> elements are concatenated as if they were <span> elements. One heuristic is if any cell is larger than a certain size. Another is if the table only comprises a single row of <td> elements.
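To illustrate (hypothetical markup, not the vendor's actual rules), a single
row of <td> elements is the kind of table that may be treated as layout,
while an explicit role or header cells make the intent unambiguous:

<!-- May be treated as a layout table: one row, no header cells -->
<table>
  <tr><td>Logo</td><td>Navigation</td><td>Search</td></tr>
</table>

<!-- Explicitly a layout table: role="presentation" removes the table
     semantics for every screen reader, no heuristics needed -->
<table role="presentation">
  <tr><td>Logo</td><td>Navigation</td><td>Search</td></tr>
</table>

<!-- Explicitly a data table: caption and header cells -->
<table>
  <caption>Quarterly sales</caption>
  <tr><th>Quarter</th><th>Sales</th></tr>
  <tr><td>Q1</td><td>100</td></tr>
</table>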

Steve



From: glen walker
Date: Wed, Jun 03 2020 2:52PM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Oh yeah, tables. I forgot about that heuristic. It's been tweaked over
the years. It used to be bad at detecting layout tables but has gotten
better. Thanks for reminding me.


From: Murphy, Sean
Date: Wed, Jun 03 2020 3:51PM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Steve,

What tools are you referring to in the below statement?

"By all means use a screen reader to help find bugs, but as with all tools you need to be aware that it will lie to you, so you need to protect yourself against that. As we find better testing tools and techniques (mostly single purpose tools) I find I hardly use a screen reader at all during a WCAG audit. The bugs and heuristics have caused me to lose confidence in anything they tell me - they are probably the most inaccurate tool in our toolbox."

From your testing, has the heuristic improved over time, or do you feel it is still in the same state? I noticed you singled out JAWS; does this also occur in other screen readers?


Sean





Sean Murphy | Accessibility expert/lead
Digital Accessibility manager
Telstra Digital Channels | Digital Systems
Mobile: 0405 129 739 | Desk: (02) 9866-7917

www.telstra.com

This email may contain confidential information.
If I've sent it to you by accident, please delete it immediately




From: Jonathan C. Cohn
Date: Wed, Jun 03 2020 4:44PM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Another issue with Safari is the flattening of lists with no decorations. For example, a list of links in HTML might show up in VoiceOver as not being a list.


Sent from my iPhone


From: Murphy, Sean
Date: Wed, Jun 03 2020 4:53PM
Subject: Re: Is Voiceover more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Glen,

What is in the browser accessibility tree is what should be the deciding factor. If the name property in the accessibility tree is null, in my mind that is a failure from a WCAG point of view regardless of the screen reader, since aria-label and similar attributes should be populating this property for the assistive technology. This applies to all the important accessibility tree properties, such as role, value, etc. The accessibility tree populates the accessibility API of the OS. For other components like tables, I would still look at the accessibility tree first to make sure it is correctly populated.
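As a minimal sketch of that point (hypothetical markup), either of the
following gives the input a non-null name property in the accessibility
tree; with neither, the name is null and it is a failure regardless of
what any screen reader guesses:

<!-- Accessible name from an associated label element -->
<label for="email">Email address</label>
<input type="text" id="email">

<!-- Accessible name from aria-label when there is no visible label -->
<input type="search" aria-label="Search products">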

Note: The way different screen readers handle the information from the accessibility API / accessibility tree does vary, due to customer demand over the years or their core principles. NVDA follows the standards as closely as possible, while JAWS and VoiceOver are more interested in the user experience. JAWS did include a lot of extra support in IE; I'm not sure if this is still the case for Firefox or Chrome. This is my observation from being a screen reader user for the last 30-plus years and from working for Freedom Scientific. NVDA's standards-based approach was described to me by one of its founders when I spoke with them at an A11y Camp conference in Melbourne.

Sean




Sean Murphy | Digital System specialist (Accessibility)
Telstra Digital Channels | Digital Systems
Mobile: 0405 129 739 | Desk: (02) 9866-7917
Digital Systems Launch Page
Accessibility Single source of Truth


From: glen walker
Date: Wed, Jun 03 2020 5:13PM
Subject: Re: Is Voiceover more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Yes, Sean, of course the accessibility tree wins. My point was that in
using a screen reader as a testing tool, it's much faster for me to tab
through a dozen form elements and hear "blank" for all of them than it is
for me to inspect the accessibility tree for those same dozen elements and
see if an accessible name exists. But if you use a tool that tries to
compensate for a bad accessibility tree, then that tool is not as useful.



From: glen walker
Date: Wed, Jun 03 2020 5:29PM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Jonathan, while the "list is not a list" problem with Safari is certainly a
problem (in my opinion), it's not quite what I was asking about. In fact,
it's kind of the opposite. I was asking about incorrect HTML being
interpreted as correct by the screen reader. In the Safari case, it's the
browser that is not setting the role to list for lists that don't have
decorations. If you bring up the list in Firefox or Chrome on the Mac, it
reads just fine.
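For illustration (hypothetical markup), this is the pattern being described,
along with the commonly used workaround of restating the list role so WebKit
keeps exposing the list semantics:

<!-- With the default bullets removed, Safari may not expose this as a list -->
<ul style="list-style: none;">
  <li><a href="/products">Products</a></li>
  <li><a href="/support">Support</a></li>
</ul>

<!-- Workaround: an explicit role="list" restores the list semantics -->
<ul style="list-style: none;" role="list">
  <li><a href="/products">Products</a></li>
  <li><a href="/support">Support</a></li>
</ul>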


From: Steve Green
Date: Thu, Jun 04 2020 3:31AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Pretty much every week I find ways in which all sorts of tools provide incorrect or misleading results. I disseminate this knowledge among our team and I will soon turn these emails into blogs on our website.

One example is that the search feature in browser developer tools sometimes reports no matches for a search string even though there are matches. We use lots of bookmarklets that help to test a single WCAG success criterion, but sometimes they give the wrong result. We are increasingly using SortSite to do automated testing after a manual WCAG audit, and I find bugs in SortSite every time I use it. To their credit, SortSite acknowledge and fix the bugs, but there always seem to be more.

I can't really comment on whether JAWS' heuristics are improving because our test process is designed such that we avoid them. The typical sequence of events is that we do the WCAG audit by code inspection, then the client fixes all the issues, then we do a screen reader review. At that point the screen reader does not need to rely on any heuristics because everything has been fixed. The issues identified in the screen reader review are therefore mostly screen reader bugs, cognitive issues and intentional behaviours that are a poor user experience. The latter typically includes any feature that uses application mode.

Steve



From: Murphy, Sean
Date: Thu, Jun 04 2020 3:38AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

Steve,

Thanks for this. The blogs you are outlining would be valuable to the community.

Sean




Sean Murphy | Digital System specialist (Accessibility)
Telstra Digital Channels | Digital Systems
Mobile: 0405 129 739 | Desk: (02) 9866-7917
Digital Systems Launch Page
Accessibility Single source of Truth


From: Jonathan Avila
Date: Thu, Jun 04 2020 7:33AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

> One example is that the search feature in browser developer tools sometimes reports no matches for a search string even though there are matches.

I've run into this -- it makes me feel much better to hear that others have run into it, as it's impacted me in time-sensitive situations. I've definitely been able to reproduce this in Chrome in the past.

Jonathan


From: John Hicks
Date: Thu, Jun 04 2020 7:47AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

+1 for Steve's blogs

On a related note: during lockdown I have found it hard, on most video conferencing tools, to get sound-card audio to the audience, so the workaround was to use an iPad on the side to demonstrate those badly labelled radio buttons. Alas, VoiceOver, as you point out, doesn't cooperate!

(PS: has anyone got a fix for getting NVDA audio into the mix for Teams, Google Meet, etc.? I know it might work with Zoom's "Use computer sound" option, but I have not found that setting elsewhere.)

From: Steve Green
Date: Thu, Jun 04 2020 9:07AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

It's not just Chrome - Firefox has similar issues. As I recall, you get no matches if you include "equals" signs or quotation marks in the search term, but there are other reasons too. Sometimes the tool can't find matches in <iframe> elements. Ironically, Internet Explorer performs better insofar as you can include "equals" signs or quotation marks in the search term.

I also recall that you get no matches if the search term includes a no-break space.
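
For anyone who wants to see what this looks like in practice, here is a minimal, hypothetical snippet (not taken from the thread; the file name and title are placeholders) showing markup that can defeat a developer-tools text search even though the text is plainly on the page:

<!-- The &nbsp; entity renders like a space but is a different character (U+00A0). -->
<h2>First&nbsp;Name</h2>
<!-- Searching for "First Name" typed with an ordinary space (U+0020) may report no matches. -->

<!-- Text inside a frame lives in a separate document, so matches there may also be missed. -->
<iframe src="signup-form.html" title="Sign-up form"></iframe>

In both cases the text is present and readable, but the search is comparing a different character or looking in a different document, which is consistent with the behaviour described above.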

Steve


From: Steve Green
Date: Thu, Jun 04 2020 9:09AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

The "Use computer sound" does work with screen readers in Zoom meetings on desktop, but the feature is missing on their mobile application. I don't know any other video conferencing software that has this feature.

Steve



From: Jonathan Avila
Date: Thu, Jun 04 2020 9:12AM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | Next message →

I've been able to use Reflector to mirror my iPhone to my desktop and then share the sound from that into the video conference using Zoom.

Jonathan


From: Murphy, Sean
Date: Thu, Jun 04 2020 3:19PM
Subject: Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?
← Previous message | No next message

All,

Zoom and Teams on Windows do support sending audio output to your meeting. In the screen-share options for Teams there is an audio option, and Zoom has a similar option. WebEx from Cisco, last time I checked, did not have this ability.

I have used Zoom and Teams multiple times to share screen reader audio such as NVDA or JAWS.

TIP: If you want to share an iDevice such as an iPhone, you can use a Mac and QuickTime Player, which will mirror and play your mobile device's screen and audio; you can then share that via Teams or Zoom. Be aware that I have not verified whether Teams on Mac supports audio in the screen share, though I would be surprised if it did not. I have not looked at Android, but if you can get the Android screen and audio showing on your Windows or Mac machine, I don't see any reason why it would not work.

PowerPoint can also do screen recording now.

There is a tool called Virtual Cable on Windows that can route audio from multiple inputs and devices into your audio output. I don't use this tool any more because of the options above.


I hope this helps.

Sean



Sean Murphy | Digital System specialist (Accessibility)
Telstra Digital Channels | Digital Systems
Mobile: 0405 129 739 | Desk: (02) 9866-7917
Digital Systems Launch Page
Accessibility Single source of Truth
