WebAIM - Web Accessibility In Mind

E-mail List Archives

Re: WCAG 2.1 SC 1.3.5 Identify Input Purpose - Testing Methodology

From: John Foliot
Date: Sep 26, 2018 8:04AM


Hi Avik,

Having been quite involved in the evolution of this SC, I can offer you my
(rather informed) opinion. (Apologies in advance for the length of this
email.)

Understanding the purpose of this SC is the first important step. The clue
is in the numbering: it is a 1.x.x SC (perceivable), as opposed to a 2.x.x
SC (operable) or 3.x.x SC (understandable). The goal of this SC is to make
those inputs *capable* of personalization, so that people with different
types of cognitive issues could "transform" (and be careful there, that is a
broad and loosely defined concept) the inputs (and labels) into different
presentation modes or modalities. The easiest possible solution there would
be to transform text labels into icons/symbols for users with reading
issues, or who use symbol-based communication as part of their personal
communication strategy, *so that they can perceive the purpose of the input*.

So the goal is to "tag" the inputs with a common, tightly defined token
value, so that the purpose of the input can be machine determined. To
accomplish that, we need to have a schema of metadata values (the
unambiguous token values) that can be attached at the element level: *in
other words, at the highest level we needed to attach a specific,
previously defined token value to the element, using an attribute.*

Form inputs can take a number of different attributes already: type, name,
id, aria-*, class, etc. However, almost all of the current attributes that
can be attached to an input element either take a free-form string value
(name="segundo nombre", id="1d7rw9") or a boolean value
(aria-required="true").
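
To make that concrete, here is a minimal sketch (the values are only
illustrative) of an input using just those existing attributes:

   <label for="1d7rw9">Middle name</label>
   <input type="text" id="1d7rw9" name="segundo nombre" aria-required="true">

A machine can tell that this field is required and accepts text, but the
free-form name and id strings tell it nothing reliable about what the field
is actually *for*.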

What we needed, however, was an attribute that instead took its value from a
fixed and defined list of tokens. At the same time, the W3C group (AGWG -
the Accessibility Guidelines Working Group) did not have the mandate to
'invent' our own new attribute or taxonomy. (I won't go into the why of
that, but we couldn't.)

So... At one point, there was an exploration of using the microdata syntax (
https://www.w3.org/TR/microdata/) and values found at Schema.org, but we
discovered that schema.org did not yet have all of the taxonomy values we
needed, and the authoring syntax of microdata is somewhat
'heavy'. Similar investigations also looked at using RDFa and other
metadata markup techniques.

But one possible solution, one that already had all the piece-parts we
required, was the somewhat newly minted autocomplete attribute from HTML5.

It was an attribute intended to be added to form inputs. It is only
'allowed' to take one of 40+ token values, and each of those token
values was already unambiguously defined, so that machines "know" without
question the purpose of the input, even if its accessible name - the label
- is more ambiguous, or is written in a non-English language. As an added
bonus, that attribute *ALSO* has some additional machine
'translation' functionality today: because the browser knows beyond any
doubt what the input is for, it can now offer up a proposed value string to
insert into the input field - the "autofilling" part.
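
In markup, that might look like this minimal sketch (the id, name and label
text are placeholders; "given-name" is one of the defined token values):

   <label for="fname">Prénom</label>
   <input type="text" id="fname" name="field_17" autocomplete="given-name">

Even though the visible label here is French and the name attribute is an
arbitrary string, the autocomplete token tells any machine, unambiguously,
that this field collects the user's first (given) name.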

And so while the autofilling functionality appears to be the value outcome
of using the autocomplete attribute, the real value *with regard to this SC*
is actually at a slightly higher, conceptual level: we've now tagged the
inputs with clearly defined taxonomy terms that machines can further act
upon, whether that's providing hints and the ability to autocomplete the
input, or unambiguously converting a text label to an icon.

Do we have the tools today to do all that? Sorta.

Of course, the fact that browsers already do *something* with the metadata
taxonomical term (the token value used with that attribute) was sufficient
evidence of a machine-readable value transforming the element (the input
goes from blank to filled), and was thus enough for this to be published as
an SC. However, there are also a few experimental browser extensions that
perform *other* functions beyond just autofilling the inputs, and it is
anticipated that, as web content starts to adopt the metadata schema that
*is* the autocomplete token values, other tools will emerge that take
advantage of that fact.

Today, then, the principal technique for meeting SC 1.3.5 is to use
autocomplete. However, in the future, the door has been left open that
other similar metadata taxonomies could also stand in. (Conceptually, think
of it this way: the original method of adding a textual alternative to an
image is to use the alt attribute, but today we can also use aria-label
and aria-labelledby to achieve the same functional outcome. For SC 1.3.5,
@autocomplete is the "alt attribute technique" for meeting this SC today,
but in the future we may have alternative "aria-* -like" mechanisms as
well.)
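
To make the analogy concrete, both of these carry the same machine-readable
text alternative, just via different mechanisms (file name and text are
placeholders):

   <img src="logo.png" alt="Company logo">
   <img src="logo.png" aria-label="Company logo">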

You also asked about testing.

While it is true that native support in browsers is not complete against
all of the values, there are also a number of password tools that have very
good support for the "autofilling" bit (see:
https://www.w3.org/WAI/GL/wiki/WCAG_2.1_Implementations/JF/research). But,
more importantly, from a testing perspective what we are looking for is the
*condition*, as opposed to a specific outcome. In other words, the criteria
for pass or fail isn't based on functionality today, but rather via code
inspection: has the form input been tagged with an attribute that takes a
fixed token value that has been previously defined? (Once again, when we
test for SC 1.1.1, yes, we start by looking for alt=, but we cannot stop
there, as today we also test for aria-label or aria-labelledby). And while
it is true today that the preferred technique for SC 1.3.5 is to use
autocomplete, that isn't the end of the testing: the question you need to
ask is: "*Is this input tagged with a previously defined metadata term?*"
That takes code/DOM inspection.

The long term goal of Personalization - the holy grail for people with
cognitive issues - is to start adding additional element-level metadata to
our code.

But that is something of a chicken-and-egg problem: because we don't have
any tools today that could act on the addition of that metadata, no
authors are adding the metadata; and so even if there were tools, there is
no content. But because there is no content, there are no tools (and 'round
and 'round we go). This is no different than when (back in the day, for us
old timers) we struggled with using CSS (due to lack of browser support
back then), or the early days of using ARIA (when, again, there was little
to no support) - somebody has to blink first. And so while this SC is very
limited in scope, it is also preparing authors for the idea that, to meet
COGA requirements, they should get ready to add element-level metadata to
their code - as you currently do with form inputs and @autocomplete.

And so Avik, simply put, to test this SC today, do a code inspection on the
form inputs looking for the autocomplete attribute. If there is no
autocomplete attribute, then more closely examine the input element - does
it have another attribute that is using a fixed token value? If yes (highly
unlikely today, but in the future...), then it will pass. But for the
foreseeable future, I personally am anticipating seeing either autocomplete
(Pass) or nothing (Fail).
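
As a rough sketch of what that inspection looks for (the values are only
illustrative):

   Pass: <input type="text" name="phone" autocomplete="tel">
   Fail: <input type="text" name="phone">

The first input carries a defined token value ("tel"), so its purpose can be
machine determined; the second carries nothing a machine can act upon.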

(As an ancillary thought, with more forms using *all* of the autocomplete
attributes, perhaps browsers will expand and improve their native support
for actually autofilling the forms - the "if we build it they will come"
approach.)

HTH

JF
--
*John Foliot* | Principal Accessibility Strategist
Deque Systems - Accessibility for Good
deque.com