WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: Examples of virtual buffer user experience


From: Birkir R. Gunnarsson
Date: Jan 15, 2021 9:39AM


The biggest difference is that screen readers in browse mode send
simulated click events instead of keyboard events.
They have their own heuristics, some of them user configurable, for
when they automatically switch between browse and forms mode.
They don't switch for checkboxes, links, and buttons, which is why
you notice the biggest difference with those elements. VoiceOver does
not use this logic; I believe it always sends the keyboard event.
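A rough sketch of why that difference shows up in testing (hypothetical function and element names, not from the original message): a custom control wired up for only one of the two event paths will work under VoiceOver but not under a browse-mode screen reader, or vice versa.

```javascript
// Sketch of an activation check for a custom (non-<button>) control.
// Browse-mode screen readers (JAWS/NVDA) typically send a simulated
// click for Enter/Space; VoiceOver passes the raw keyboard event
// through. Handling only one path misses the other.
function shouldActivate(event) {
  if (event.type === "click") return true; // simulated or real click
  return (
    event.type === "keydown" &&
    (event.key === "Enter" || event.key === " ") // VoiceOver path
  );
}

// A robust custom widget registers both:
//   el.addEventListener("click", (e) => { if (shouldActivate(e)) act(); });
//   el.addEventListener("keydown", (e) => { if (shouldActivate(e)) act(); });
```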
For other widgets like menus, tabs, radio buttons, and dropdowns,
Windows screen readers switch modes, for the most part. For things
like grids the behavior is mixed.
There are numerous other minor differences in whether and how screen
readers communicate things like landmarks, live regions, modal
dialogs, and other complex elements and ARIA; those have to be sorted
out with testing and filing bugs (see, for instance, bugs I have
filed on JAWS and NVDA not supporting token values of the
aria-haspopup attribute).
These issues almost all center on custom widgets; if actual HTML is
used, the browser handles it, usually correctly.
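For reference, these are the token values the attribute accepts per the ARIA spec (the specific bug reports mentioned above are not reproduced here):

```html
<!-- aria-haspopup token values; some screen readers have announced
     every value as "menu" instead of distinguishing them. -->
<button aria-haspopup="menu">Open menu</button>
<button aria-haspopup="listbox">Choose an option</button>
<button aria-haspopup="tree">Browse files</button>
<button aria-haspopup="grid">Pick a date</button>
<button aria-haspopup="dialog">Open settings</button>
<!-- "true" is also valid and is treated like "menu" -->
```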
We all must continue to file bugs and report concerns to improve and
homogenize support for custom widgets, but at the end of the day this
is the cost of writing or relying on custom widgets over HTML.
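To make that contrast concrete, a minimal sketch (hypothetical markup): the native element gets its role, accessible name, keyboard support, and mode switching from the browser for free, while the custom version has to recreate all of it and is subject to the heuristics described above.

```html
<!-- Native: the browser and screen reader handle everything. -->
<button type="button" onclick="save()">Save</button>

<!-- Custom: role, focusability, and keyboard handling must all be
     supplied by hand, and mode-switching heuristics now apply. -->
<div role="button" tabindex="0" onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">
  Save
</div>
```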


On 1/12/21, glen walker < <EMAIL REMOVED> > wrote:
> If you're talking about web testing and not native apps, so that you have
> access to the html code behind the scenes, if I find a problem with one
> screen reader (doesn't matter which one), I look at the html code for that
> element to see what the problem is. If it's an error in the html code
> (whether incorrect syntax, missing attribute, wrong ARIA attribute, etc),
> then most likely you will hear the problem with all screen readers. Each
> screen reader might surface the problem differently, and sometimes a screen
> reader will use heuristics to "work around" the bad code such that you
> don't hear the problem, but it's still a problem.
>
> There are too many differences between VO, NVDA, and JAWS to list them
> all. Some might say "pressed" or "selected". Some might say "landmark" or
> "region". Some might say "clickable" or remain silent. If you have a
> block element in a link (<div>, <p>, etc), then VO will make each block
> element a separate VO "tab stop" and it will sound like you have several
> links when in fact you only have one. That's just how VO works. You can
> code around it specifically for VO by using the undocumented role="text" but
> unless the link is causing a serious UX issue, I don't recommend that. I
> always lean towards writing spec-compliant html and if a particular screen
> reader has a problem with it, it's usually the screen reader's fault.
> Whether you want to work around it is an internal decision.
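>
> A sketch of the pattern described above (hypothetical markup): one link whose
> block-level children VoiceOver presents as separate stops.
>
> ```html
> <!-- One link, but each block child becomes a separate VO stop,
>      so it can sound like several links. -->
> <a href="/article">
>   <p>Article title</p>
>   <p>Short teaser text</p>
> </a>
>
> <!-- VO-specific workaround with the undocumented role="text";
>      generally not recommended unless the UX problem is serious. -->
> <a href="/article">
>   <span role="text">Article title. Short teaser text.</span>
> </a>
> ```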
>
> Another difference is with lists. If you turn off the list style, then VO
> does not think a <ul>/<ol> is a list anymore. You have to add back in
> role="list" and role="listitem". I haven't tested that recently but I
> think it's still a VO problem.
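>
> A sketch of that fix (hypothetical markup; behavior may have changed in
> newer VoiceOver versions):
>
> ```html
> <!-- With list-style removed, VoiceOver may drop list semantics;
>      restoring the roles explicitly works around it. -->
> <ul role="list" style="list-style: none">
>   <li role="listitem">First item</li>
>   <li role="listitem">Second item</li>
> </ul>
> ```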
>
>
>
> On Tue, Jan 12, 2021 at 12:58 PM cb < <EMAIL REMOVED> > wrote:
>
>> Hey all,
>>
>> I'm looking for concrete examples of how user experience differs between
>> VoiceOver on Mac and Jaws or NVDA on Windows. (Or more generally, between
>> a
>> screenreader that uses a virtual buffer and one that doesn't.)
>>
>> The context is that my colleagues and I do a lot of talking to developers
>> who use the Mac platform and test their own code for accessibility using
>> VoiceOver. If we report a bug we've discovered via Jaws or NVDA, we often
>> get pushback that they can't reproduce it on VoiceOver.
>>
>> I can send them the WebAIM screenreader survey results that show
>> demographics and usage statistics, and I can talk generally about the
>> differences between the tools, but I'd love to have some illustrative
>> examples of types of things they miss when they rely solely on tests
>> conducted with VoiceOver. These could be accessibility violations, bugs,
>> big differences in UX, etc.
>>
>> Have you run across something that makes a good example that I could
>> explain to people with varying levels of coding and accessibility
>> expertise? And for my own education, I'd like to hear more about the
>> low-level differences between platforms so I can get better at diagnosing
>> these issues.
>>
>> Thanks
>>
>> Caroline


--
Work hard. Have fun. Make history.