Re: Fwd: building accessible javascript accordions?

From: Birkir R. Gunnarsson
Date: Aug 6, 2013 7:19AM


I believe there are more general problems with the labeling and
recognition of widgets and other UI components on webpages, and that
this is something that requires clarity all the way from coding to the
end user experience.
Tabs on websites, as I understand it, look like tabs in desktop apps,
which in turn look like tabs on physical folders.
I had never heard the word "tab" as a JAWS/NVDA user in the desktop
environment, at least not that I noticed; the word "menu" is always
used for your typical desktop menus.
I still do not know the difference between tabs and a menu bar.

Similarly, sliders are visual controls that, I believe, suggest how to
interact with them, probably because they visually remind people of
how physical sliders work. Screen readers do not announce them as
sliders, so how is a blind end user to know that a control is a slider
and get the subsequent idea that the arrow keys are the natural way to
select values on the sliding scale?
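
For what it is worth, this is roughly the markup that is supposed to
give a custom slider its role and value semantics; the label and the
values here are just placeholders:

  <div role="slider" tabindex="0" aria-label="Volume"
       aria-valuemin="0" aria-valuemax="100" aria-valuenow="40">
  </div>

The script behind it still has to listen for the arrow keys and update
aria-valuenow itself; the role only tells the assistive technology
what the widget claims to be. Whether the screen reader actually says
"slider" here, and whether the user then thinks of the arrow keys, is
exactly the problem above.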

It wasn't until Bryan's excellent explanation on the AccDC website
that I fully understood the difference between tabs and accordions.
I have never come across a spin button, so I am not sure how they are
handled by screen readers, but I would be surprised if they are
announced as such.
A screen reader cannot know about carousels, since there is no specific
carousel role, but maybe they can use heuristics to warn the user
that they are entering a section of what appears to be auto-updating
content (when it is auto-updating).
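
To make the tabs/accordion distinction above concrete, here is a
rough sketch of the two patterns as I understand them; all of the ids,
labels and content are placeholders rather than anything from a real
site. Tabs get the tablist/tab/tabpanel roles and arrow-key
navigation, while an accordion is usually just headings containing
buttons that toggle aria-expanded on their panels:

  <!-- Tab pattern: arrow keys move focus between the tabs -->
  <div role="tablist" aria-label="Settings">
    <button role="tab" id="tab-general" aria-selected="true"
            aria-controls="panel-general">General</button>
    <button role="tab" id="tab-privacy" aria-selected="false"
            tabindex="-1" aria-controls="panel-privacy">Privacy</button>
  </div>
  <div role="tabpanel" id="panel-general"
       aria-labelledby="tab-general">General settings ...</div>
  <div role="tabpanel" id="panel-privacy"
       aria-labelledby="tab-privacy" hidden>Privacy settings ...</div>

  <!-- Accordion pattern: Tab and Enter work as for any other button -->
  <h3>
    <button id="acc-shipping" aria-expanded="false"
            aria-controls="sect-shipping">Shipping options</button>
  </h3>
  <div id="sect-shipping" role="region"
       aria-labelledby="acc-shipping" hidden>Shipping details ...</div>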

As for simpler examples of this disconnect between visual appearance
and screen reader presentation:
Often, as accessibility testers, we might let it slide when a button
is implemented and presented to screen readers as a link, even when it
is clearly styled to look like a button. At a glance this may seem
like a non-issue: they functionally work the same way, are keyboard
accessible and are triggered the same way (at least for people who
default to the Enter key to activate all components, which includes
myself).
But imagine a blind customer calling in to web support or customer
service and trying to use the screen reader's shortcut to search for
buttons (say the "b" key); it won't work.
Similarly, the role of a button is slightly different from that of a
link, and there are fewer buttons on a website, which makes it easier
for the user to, say, sign up for the email list, order the flight or
check out of the shopping cart. When these are presented as links they
blend in with the other 984 links on the page.
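
A contrived illustration of that disconnect; the class name and the
checkout() handler are made up for the example:

  <!-- Styled to look like a button, but exposed to screen readers as a link -->
  <a href="#" class="btn" onclick="checkout(); return false;">Check out</a>

  <!-- Exposed as a button, so the "b" quick key and the buttons list find it -->
  <button type="button" onclick="checkout();">Check out</button>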

aria-expanded is announced as "expanded" by JAWS and as "open" by NVDA.
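
In markup terms that is just the state of the attribute on the
accordion header button (the id is a placeholder), flipped by script
when the button is activated:

  <button aria-expanded="false" aria-controls="sect-1">Details</button>
  <!-- after activation -->
  <button aria-expanded="true" aria-controls="sect-1">Details</button>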

All of these individually may be small things, but taken together it
is hard to expect the end user, armed with nothing but a screen reader
(and according to our survey of 350 users in the spring, only 20% of
these people get any formal training on how to browse the web with
their screen reader), to understand and interact with all of these
awesome components, despite the best efforts of the developers to make
them keyboard accessible and ARIA-enabled.

This is my motivation for creating the Cognosco project, something
that will be advertised later on and will hopefully be online within a
couple of months. I am hoping that an online resource can help educate
users about these components: explain the most common ones, their
origin, their visual appearance, and how they are announced, or not,
by screen readers.
It is up to the vendors to interpret ARIA roles consistently and
clearly, such as announcing sliders as sliders (I have not seen that).
For the author, I am often tempted, and I do recommend it as a best
practice for usability, to use the appropriate role and even to place
a div with a region role and a descriptive aria-label (such as
"carousel", "slider", "spin button", etc.) around the widget, to
indicate non-visually what type of UI component is being used.
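
As a sketch of that suggestion, with the label text only as an
example, the wrapper could be as simple as:

  <div role="region" aria-label="Image carousel">
    <!-- the carousel widget and its controls go here -->
  </div>

The idea is that the label gives the user a non-visual hint about what
kind of component they are entering, independent of how the widget's
own roles are announced.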
The web is getting more powerful, but also more complex, and it places
more demands on all of us.
I still believe that the end user and end user training are powerful
components that may have been underrepresented in the business of
making the web accessible. We cannot place the entire accessibility
burden on the mainstream developers, who have a thousand other things
to worry about. The best they can do is code to standards and make
sure the AT software has access to the information.
Cheers
-Birkir

Birkir Gunnarsson
Accessibility Subject Matter Expert | Deque Systems
http://www.deque.com

On 8/6/13, Alastair Campbell < <EMAIL REMOVED> > wrote:
> Léonie Watson wrote:
>> The role of tab causes screen readers (with appropriate ARIA support) to
>> announce "tab". That's the cue to switch to using the left/right cursor
>> keys
>> to navigate.
>
> I've been reading up on the JAWS and NVDA docs to try and find any
> reference to that, and I'm struggling. How would regular users know
> about it?
>
> Left/right generally reads by character, and the only reference to
> "tabs" in the user-docs that I can find is for the tab key.
>
> Admittedly I have a very web-focused outlook, but it's no wonder that
> a regular (non-technical) NVDA user in testing exclaimed "it said tab,
> so I pressed tab!".
>
> After reading the Freedom Scientific ARIA doc, the ARIA tab-pattern
> should definitely not be used for accordions unless you use
> role="document" for any non-form content, as it doesn't expect content
> in that scenario.
>
>
>> It's the same cue that screen readers give when encountering a
>> tabbed interface in a software application.
>
> Perhaps a different cue is needed, as it might be that people are not
> expecting that on a website?
>
> -Alastair