WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: Accessible Superfish-like drop-down menus?


From: Matt King
Date: Jun 16, 2017 12:46PM


> Yes, the Authoring Practices guide has been a life saver for some things, and I refer to it--and refer others to it--often.
> However, as Lucy mentioned, the menu pattern described there is not appropriate for site menus, only application menus.

The ARIA menu pattern can work very well for site menus. It can be used for any menu that follows the style of interaction described in the pattern, regardless of the purpose of the menu items. So, whether it is appropriate has nothing to do with where the menu is or where the menu items take the user. It is all about what kind of interaction model you want.
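
To make that concrete, here is a rough sketch of a site navigation built with the ARIA menu pattern. The structure follows the Authoring Practices, but the labels and URLs are placeholders I made up. The menuitems happen to be links, and that is fine; the roles describe the interaction model, not the destinations:

  <nav aria-label="Site">
    <ul role="menubar" aria-label="Site">
      <li role="none">
        <!-- This menuitem both opens a submenu and points at a page -->
        <a role="menuitem" href="/products" aria-haspopup="true" aria-expanded="false">Products</a>
        <ul role="menu" aria-label="Products">
          <li role="none"><a role="menuitem" href="/products/widgets">Widgets</a></li>
          <li role="none"><a role="menuitem" href="/products/gadgets">Gadgets</a></li>
        </ul>
      </li>
    </ul>
  </nav>

The scripting still has to deliver the keyboard behavior the roles promise: arrow keys move focus between menuitems, Enter activates, Escape closes the open submenu, and only one menuitem sits in the tab order at a time.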

> > Note that if you want to overload a single focusable element to both
> > open a submenu and execute a link, that widget needs to be a menuitem,
> > not a menu button.
>
> I've seen this done before, but I think the idea of a
> click event on a menu item pulling double duty as a trigger for the submenu
> (first click) and a link that takes you places (second click) is really bad.
> I don't think it really fits the menuitem role, either.

Sorry, that is not what I meant, and I agree that is bad. I was talking about the pattern where hovering on a menu name reveals its submenu and clicking the name goes to a page. We can do that with ARIA menus, although I don't think that is great either.

> The issue, as I see it, boils down to the fact that the interaction patterns users have come to expect on the web don't track with the system-level widgets that ARIA is attempting to map to.

I see why you might say that. But, that assumes you have to limit use of ARIA and the system-level accessibility APIs to mirroring the exact interactions you see on the desktop. ARIA isn't really that limited. It is better to think of ARIA as a sort of non-visual CSS, and the accessibility APIs as something like display drivers.

The key here is understanding how those APIs interpret the ARIA, how different roles, states, and properties can or cannot work together, and what is or is not understandable by users. Unfortunately, as you have already mentioned, the tools available to help you do that are non-existent. Of course, the ridiculous complexity of screen reader user interfaces doesn't help.

> The site navigation fly-out menu is not a system menu,

One possible implementation is like that. But, that is not the only way.

> and the expander for a submenu is not really a button menu (I don't think).

Right, it is only a menu button if it opens a menu that works like what you call a "system menu." That is, a menu that manages focus.
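
In markup, that is roughly what a menu button looks like; the names and ids here are only illustrative:

  <button type="button" aria-haspopup="true" aria-expanded="false" aria-controls="about-menu">About</button>
  <ul role="menu" id="about-menu" aria-label="About" hidden>
    <li role="none"><a role="menuitem" href="/team">Team</a></li>
    <li role="none"><a role="menuitem" href="/history">History</a></li>
  </ul>

Activating the button is expected to open the menu, move focus into it, and support arrow key navigation between the menuitems. If the script is not going to do that, the menu roles are promising behavior the widget does not deliver.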

> That is why I tried to use the disclosure pattern,
> as it seemed to be the most agnostic about what it was doing
> and, I thought, wouldn't require me to manage focus or the active descendent.

Spot on.
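
For anyone following along, a minimal sketch of that disclosure approach, with made-up names: the trigger is an ordinary button that toggles aria-expanded, the links stay ordinary links, and there are no menu roles and no focus management to script:

  <button type="button" aria-expanded="false" aria-controls="products-nav">Products</button>
  <ul id="products-nav" hidden>
    <li><a href="/products/widgets">Widgets</a></li>
    <li><a href="/products/gadgets">Gadgets</a></li>
  </ul>

  <script>
    // Toggle the submenu; once revealed, tabbing through the links works as usual.
    const btn = document.querySelector('[aria-controls="products-nav"]');
    const list = document.getElementById('products-nav');
    btn.addEventListener('click', () => {
      const open = btn.getAttribute('aria-expanded') === 'true';
      btn.setAttribute('aria-expanded', String(!open));
      list.hidden = open;
    });
  </script>

Tab and shift+Tab move through the revealed links exactly as they do everywhere else on the page, which is why this pattern does not require managing focus or aria-activedescendant.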

> I think web users expect to be able to tab between links on a page,
> rather than have submenus function as composite widgets that have to be navigated by using arrow keys.

Sticking with 1995-era web keyboard design and its long tab rings is an option, but that approach treats keyboard users like second-class citizens. Helping them take advantage of GUI conventions on the web is like moving people away from the DOS command line to GUIs. It is a paradigm shift, and we need to recognize that as we design.

> - Mouse users expect to hover over a menu link and have it expand,
> and also to be able to click on that link and have it take them to the index page for the submenu items.
> - Keyboard users should be able to get to links in the menu without
> having to tab through every link in the menu,
> though they also expect to be able to tab between links, generally.

Where designs fall down here is that there are no visual conventions for helping users navigate with the keyboard. For instance, how do they know when arrow key navigation is available? This lack of affordance perceivability is an oversight in desktop visual design that persists on the web, and it is something I desperately want to help rectify. Learning to operate exclusively with the keyboard is unnecessarily difficult. Why do mouse users get different shaped pointers? Because they are first-class citizens.

> - Screen reader users should be able to navigate the links using as many
> useful cues and affordances as possible,
> but without being provided cues or affordances that are inaccurate.

Correct. One of the tricky parts of this is that the division of responsibilities between web engineers and screen reader developers remains extremely ambiguous. I am hoping we can make some progress on this front after we have solid reference implementations of web GUI patterns in the ARIA Authoring Practices.

> In other words, they should not map to system-level widgets,
> but then not follow the expected behaviors of those system-level widgets.

Yes, yes, yes! Do not use ARIA in a way that deceives the user.

> - And do so in a way that still works for users of touchscreen devices.

Mobile is a large problem space of its own -- a rat hole I don't have time to crawl into now.

> That's my thinking at the moment.

I am happy you are thinking so clearly and deeply about this. You have obviously invested a lot of time in understanding accessibility. Thank you! Thank you!! As someone who is blind, I am very grateful to you and everyone else on this list who is working so hard to make it possible for all people to benefit from your work.

Matt King