WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?


From: Murphy, Sean
Date: Jun 4, 2020 3:38AM


Steve,

Thanks for this. The blogs you are outlining would be valuable to the community.

Sean




Sean Murphy | Digital System specialist (Accessibility)
Telstra Digital Channels | Digital Systems
Mobile: 0405 129 739 | Desk: (02) 9866-7917
Digital Systems Launch Page
Accessibility Single source of Truth

-----Original Message-----
From: WebAIM-Forum < <EMAIL REMOVED> > On Behalf Of Steve Green
Sent: Thursday, 4 June 2020 7:32 PM
To: WebAIM Discussion List < <EMAIL REMOVED> >
Subject: Re: [WebAIM] Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?

[External Email] This email was sent from outside the organisation – be cautious, particularly with links and attachments.

Pretty much every week I find ways in which all sorts of tools provide incorrect or misleading results. I disseminate this knowledge among our team and I will soon turn these emails into blogs on our website.

One example is that the search feature in browser developer tools sometimes reports no matches for a search string even though there are matches. We use lots of bookmarklets that help to test a single WCAG success criterion, but sometimes they give the wrong result. We are increasingly using SortSite to do automated testing after a manual WCAG audit, and I find bugs in SortSite every time I use it. To their credit, SortSite acknowledge and fix the bugs, but there always seem to be more.
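As an illustration of the kind of single-purpose check described above (Steve's actual bookmarklets are not shown here, so this is only a hedged sketch of the idea), a small Python script can flag form inputs that have no programmatic label. It handles only explicit labelling (`aria-label`, `aria-labelledby`, `title`, or a `<label for>` reference) and deliberately ignores implicit labelling by a wrapping `<label>`, for brevity:

```python
from html.parser import HTMLParser

class UnlabelledInputFinder(HTMLParser):
    """Collect <input> elements and the ids referenced by <label for>.
    Implicit labelling (a <label> wrapping its input) is ignored here."""

    def __init__(self):
        super().__init__()
        self.inputs = []            # (source line, attrs) for each <input>
        self.label_targets = set()  # ids referenced by <label for="...">

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and attrs.get("for"):
            self.label_targets.add(attrs["for"])
        elif tag == "input" and attrs.get("type") != "hidden":
            self.inputs.append((self.getpos()[0], attrs))

def unlabelled_inputs(html):
    """Return the source line numbers of inputs with no programmatic label."""
    finder = UnlabelledInputFinder()
    finder.feed(html)
    problems = []
    for line, attrs in finder.inputs:
        labelled = (attrs.get("aria-label")
                    or attrs.get("aria-labelledby")
                    or attrs.get("title")
                    or attrs.get("id") in finder.label_targets)
        if not labelled:
            problems.append(line)
    return problems
```

Run against 'First Name <input type="text">' it reports line 1, exactly the case discussed later in this thread, whereas a properly associated label produces no report.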

I can't really comment on whether JAWS' heuristics are improving because our test process is designed such that we avoid them. The typical sequence of events is that we do the WCAG audit by code inspection, then the client fixes all the issues, then we do a screen reader review. At that point the screen reader does not need to rely on any heuristics because everything has been fixed. The issues identified in the screen reader review are therefore mostly screen reader bugs, cognitive issues and intentional behaviours that are a poor user experience. The latter typically includes any feature that uses application mode.

Steve


-----Original Message-----
From: WebAIM-Forum < <EMAIL REMOVED> > On Behalf Of Murphy, Sean
Sent: 03 June 2020 22:51
To: WebAIM Discussion List < <EMAIL REMOVED> >
Subject: Re: [WebAIM] Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?

Steve,

What tools are you referring to in the below statement?

"By all means use a screen reader to help find bugs, but as with all tools you need to be aware that it will lie to you, so you need to protect yourself against that. As we find better testing tools and techniques (mostly single purpose tools) I find I hardly use a screen reader at all during a WCAG audit. The bugs and heuristics have caused me to lose confidence in anything they tell me - they are probably the most inaccurate tool in our toolbox."

From your testing, have the heuristics improved over time, or do you feel they are in the same state? I noticed you singled out JAWS; does this also occur in other screen readers?


Sean





Sean Murphy | Accessibility expert/lead
Digital Accessibility manager
Telstra Digital Channels | Digital Systems
Mobile: 0405 129 739 | Desk: (02) 9866-7917

www.telstra.com

This email may contain confidential information.
If I've sent it to you by accident, please delete it immediately



-----Original Message-----
From: WebAIM-Forum < <EMAIL REMOVED> > On Behalf Of Steve Green
Sent: Thursday, 4 June 2020 4:23 AM
To: WebAIM Discussion List < <EMAIL REMOVED> >
Subject: Re: [WebAIM] Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?


By all means use a screen reader to help find bugs, but as with all tools you need to be aware that it will lie to you, so you need to protect yourself against that. As we find better testing tools and techniques (mostly single purpose tools) I find I hardly use a screen reader at all during a WCAG audit. The bugs and heuristics have caused me to lose confidence in anything they tell me - they are probably the most inaccurate tool in our toolbox.

Perhaps I am more sensitive to this than most people because I have seen so many so-called accessibility consultants do really bad WCAG audits because they used screen readers instead of all the other tools that are available to us, which give more accurate results. Worse still, they invariably report the WCAG non-conformances in terms of the screen reader behaviour instead of the coding (probably because they don't understand the code, whereas the screen reader behaviour is easy to describe).

Another heuristic is that JAWS has various ways of deciding if a <table> element is a layout table, in which case it does not announce the presence of the table, the table navigation shortcuts don't work and the contents of the <td> elements are concatenated as if they were <span> elements. One heuristic is if any cell is larger than a certain size. Another is if the table only comprises a single row of <td> elements.
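The exact rules JAWS applies (including the cell-size threshold mentioned above) are not public, but the shape of such a heuristic can be sketched in a few lines of Python. This is purely illustrative of the two rules described in this email, not an implementation of JAWS' actual logic:

```python
def looks_like_layout_table(rows):
    """Guess whether a <table> is a layout table, in the spirit of the
    heuristics described above. `rows` is a list of rows, each a list of
    cell tag names ("td" or "th"). The real JAWS rules, including the
    cell-size threshold, are not public; this only illustrates the idea."""
    # Any <th> cell strongly suggests a data table.
    if any(cell == "th" for row in rows for cell in row):
        return False
    # A table comprising a single row of <td> elements is treated as layout.
    if len(rows) == 1:
        return True
    return False
```

Under this sketch a one-row strip of `<td>` cells is demoted to layout, so its contents would be read as if they were `<span>` elements, while a table with header cells keeps its table semantics.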

Steve


-----Original Message-----
From: WebAIM-Forum < <EMAIL REMOVED> > On Behalf Of glen walker
Sent: 03 June 2020 16:50
To: WebAIM Discussion List < <EMAIL REMOVED> >
Subject: Re: [WebAIM] Is VoiceOver more similar to NVDA or JAWS with respect to the accessibility tree?

Yeah, I figured the heuristics might be IP so was looking for more anecdotal info. Anything people might have observed when comparing screen readers. The missing label is the most common one I'm aware of.

I don't totally agree with your WCAG audit definition, but that might be more about terminology.

> A WCAG audit should be done by inspection of the code and user
> interface, *using tools* where they are helpful.

I consider a screen reader a *tool* used to test for WCAG conformance. It helps me find bugs. I'm not using the screen reader to emulate a user experience. I treat a screen reader like I do a color contrast analyzer or html validator or a page scanning tool or a bookmarklet. It's just one of many tools in my toolbox.
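To make the comparison concrete: a colour contrast analyzer, one of the tools listed above, is essentially computing the contrast ratio from the WCAG 2.x relative-luminance definition. A minimal Python sketch of that calculation:

```python
def _linear(channel_8bit):
    """Convert an sRGB channel (0-255) to linear light, per the WCAG 2.x
    relative-luminance definition."""
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) colour with 0-255 channels."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white comes out at 21:1, the maximum; WCAG 1.4.3 requires at least 4.5:1 for normal text at level AA. The point stands: like the screen reader, this is just one more tool that answers one narrow question.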



On Wed, Jun 3, 2020 at 8:54 AM Steve Green < <EMAIL REMOVED> >
wrote:

> Since the heuristics are a significant piece of intellectual property
> for an AT vendor, I would be very surprised if any vendor published theirs.
> That said, I too would be very interested if there are such lists.
>
> I would also make the point (again) that there is a difference between
> doing a WCAG audit and an accessibility audit. It is not necessary to
> use any assistive technologies when doing a WCAG audit, and arguably
> you should not use them. A WCAG audit should be done by inspection of
> the code and user interface, using tools where they are helpful. The
> behaviour of assistive technologies is irrelevant and unhelpful.
>
> By contrast, an accessibility audit can be anything you want it to be,
> and you may well choose to include testing the user experience with
> one or more screen readers. The choice of operating system, browser
> and AT should be determined by factors such as your audience and
> contractual obligations. I would go so far as to say it's
> unprofessional to test with a particular platform simply because it's what you've got or what you want to use.
>
> Steve Green
> Managing Director
> Test Partners Ltd
>
>
> -----Original Message-----
> From: WebAIM-Forum < <EMAIL REMOVED> > On Behalf Of
> glen walker
> Sent: 03 June 2020 15:39
> To: WebAIM Discussion List < <EMAIL REMOVED> >
> Subject: [WebAIM] Is VoiceOver more similar to NVDA or JAWS with
> respect to the accessibility tree?
>
> Not in functionality or features but in how it interprets the
> accessibility tree. For example,
>
> First Name <input>
>
> If a label is not associated with an input element, NVDA will not "guess"
> at what the label should be. It won't say anything except "edit".
> Both JAWS and VoiceOver (Mac and iOS) will say "First Name" for the
> label even though it's not in the accessibility tree.
>
> So for testing purposes, NVDA is more "pure" and can help find a11y bugs.
> I've known JAWS has some built in heuristics for fixing bad html but
> wasn't sure what VoiceOver had built in. With respect to input
> labels, JAWS and VoiceOver seem to work the same. Are there other
> heuristics that are similar between the two?
>
> One of the reasons I'm asking is because a customer wants to do all
> their testing on the Mac. I was trying to convince them that there
> might be bugs that are missed because VoiceOver is trying to be nice
> but I wasn't sure how many things VO is nice about. Is there a list
> of heuristics that both JAWS and VoiceOver have to overcome bad html?