WebAIM - Web Accessibility In Mind

E-mail List Archives

Thread: Examples of virtual buffer user experience

Number of posts in this thread: 4 (In chronological order)

From: cb
Date: Tue, Jan 12 2021 12:57PM
Subject: Examples of virtual buffer user experience
No previous message | Next message →

Hey all,

I'm looking for concrete examples of how user experience differs between
VoiceOver on Mac and JAWS or NVDA on Windows. (Or more generally, between a
screen reader that uses a virtual buffer and one that doesn't.)

The context is that my colleagues and I do a lot of talking to developers
who use the Mac platform and test their own code for accessibility using
VoiceOver. If we report a bug we've discovered via JAWS or NVDA, we often
get pushback that they can't reproduce it on VoiceOver.

I can send them the WebAIM screen reader survey results that show
demographics and usage statistics, and I can talk generally about the
differences between the tools, but I'd love to have some illustrative
examples of types of things they miss when they rely solely on tests
conducted with VoiceOver. These could be accessibility violations, bugs,
big differences in UX, etc.

Have you run across something that makes a good example that I could
explain to people with varying levels of coding and accessibility
expertise? And for my own education, I'd like to hear more about the
low-level differences between platforms so I can get better at diagnosing
these issues.

Thanks

Caroline

From: Weston Thayer
Date: Tue, Jan 12 2021 1:15PM
Subject: Re: Examples of virtual buffer user experience
← Previous message | Next message →

One real-world example I ran into last year was a custom checkbox component
that correctly changed state when activated with SPACE from the keyboard.
It also worked fine with SPACE while macOS VoiceOver was on. But navigating
to it with NVDA and hitting SPACE had no effect. Further investigation
showed that SPACE worked with NVDA if you forced it into forms mode, but
not in browse (virtual buffer/cursor) mode, which is the default and works
fine with native checkboxes. Interestingly, it also did not work in
VoiceOver if you used VO+SPACE instead of SPACE alone.

This strange behavior arises because macOS VoiceOver doesn't use a virtual
buffer/cursor/browse mode (caveat: it has a semi-browse mode if you turn on
Quick Nav, but that's off by default). When you press SPACE, the keyboard
event goes through to the target application just as it would if VoiceOver
weren't running. NVDA in browse mode, however, does not send the SPACE
keyboard event. Instead, it uses the OS accessibility APIs to call the
"invoke" method on the node in the browser's accessibility tree currently
under the virtual cursor.

The custom checkbox had modified its "click" event handler in a way that
caused the OS accessibility API's "invoke" method to fail (it had a call to
e.stopPropagation(), though I think there were more issues at play). It
appeared to work correctly with macOS VoiceOver and pressing SPACE because
a different code path was taken (the "keydown" event handler).

Sure, you might have discovered this bug using only macOS VoiceOver if you
had tried VO+SPACE instead of just SPACE. But even then, you might not
realize how severe the impact is without understanding what happens in
NVDA, where the control is inoperable by default.

Weston Thayer
https://assistivlabs.com


From: glen walker
Date: Tue, Jan 12 2021 1:27PM
Subject: Re: Examples of virtual buffer user experience
← Previous message | Next message →

If you're talking about web testing rather than native apps, you have
access to the HTML behind the scenes. When I find a problem with one screen
reader (it doesn't matter which one), I look at the HTML for that element
to see what the problem is. If it's an error in the HTML (incorrect syntax,
a missing attribute, a wrong ARIA attribute, etc.), then most likely you
will hear the problem with all screen readers. Each screen reader might
surface the problem differently, and sometimes a screen reader will use
heuristics to "work around" the bad code so that you don't hear the
problem, but it's still a problem.
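To illustrate the kinds of HTML errors that surface in every screen reader,
here are a few hypothetical snippets (not from any real page):

    <!-- Missing alt attribute: every screen reader has to guess or skip. -->
    <img src="chart.png">

    <!-- Label not associated with the input: needs <label for="name">. -->
    <label>Name</label> <input id="name">

    <!-- "aria-role" is not a real attribute; the correct one is
         role="button", and the div would still need tabindex="0" and
         its own key handling. -->
    <div aria-role="button">Save</div>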

There are too many differences between VO, NVDA, and JAWS to list them
all. One might say "pressed" where another says "selected". One might say
"landmark" where another says "region". One might say "clickable" where
another remains silent. If you have a block element inside a link (<div>,
<p>, etc.), then VO will make each block element a separate VO "tab stop"
and it will sound like you have several links when in fact you only have
one. That's just how VO works. You can code around it specifically for VO
by using the undocumented role="text", but unless the link is causing a
serious UX issue, I don't recommend that. I always lean toward writing
spec-compliant HTML, and if a particular screen reader has a problem with
it, it's usually the screen reader's fault. Whether you want to work around
it is an internal decision.
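For example, this is the kind of markup that triggers the behavior, along
with the undocumented workaround (hypothetical product copy, for
illustration only):

    <!-- VoiceOver announces the div and the p as separate stops,
         so one link sounds like several: -->
    <a href="/product">
      <div>Widget 3000</div>
      <p>The best widget money can buy</p>
    </a>

    <!-- Undocumented, VO-specific workaround; use sparingly: -->
    <a href="/product">
      <span role="text">Widget 3000. The best widget money can buy.</span>
    </a>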

Another difference is with lists. If you turn off the list styling in CSS,
then VO no longer treats a <ul>/<ol> as a list. You have to add back
role="list" and role="listitem" explicitly. I haven't tested that recently,
but I think it's still a VO problem.
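A minimal sketch of the fix, assuming the CSS removes the bullets:

    <!-- With list-style: none, VoiceOver drops the list semantics.
         Restoring the roles explicitly brings them back: -->
    <ul role="list" style="list-style: none;">
      <li role="listitem">First item</li>
      <li role="listitem">Second item</li>
    </ul>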




From: Birkir R. Gunnarsson
Date: Fri, Jan 15 2021 9:39AM
Subject: Re: Examples of virtual buffer user experience
← Previous message | No next message

The biggest difference is that screen readers in browse mode send simulated
click events instead of keyboard events. They have their own heuristics,
some of which are user configurable, for when to automatically switch
between browse and forms mode. They don't switch for checkboxes, links, and
buttons, which is why you notice the biggest difference with those
elements. VoiceOver does not use this logic; I believe it always sends the
keyboard event through. For other widgets like menus, tabs, radio buttons,
and dropdowns, Windows screen readers switch modes, for the most part. For
things like grids the behavior is mixed.
There are numerous other minor differences in whether and how screen
readers communicate things like landmarks, live regions, modal dialogs, and
other complex elements and ARIA. Those have to be sorted out with testing
and filing bugs (see, for instance, bugs I have filed on JAWS and NVDA not
supporting token values of the aria-haspopup attribute).
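For reference, the aria-haspopup token values look something like this
(a minimal sketch):

    <button aria-haspopup="menu">Options</button>      <!-- opens a menu -->
    <button aria-haspopup="dialog">Settings</button>   <!-- opens a dialog -->
    <button aria-haspopup="listbox">Country</button>   <!-- opens a listbox -->
    <!-- aria-haspopup="true" is the legacy value, equivalent to "menu";
         the spec also defines "tree" and "grid". -->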
These issues almost all center on custom widgets; if native HTML is used,
the browser handles it, usually correctly. We all must continue to file
bugs and report concerns to improve and homogenize support for custom
widgets, but at the end of the day this is the cost of writing or relying
on custom widgets instead of native HTML.




--
Work hard. Have fun. Make history.