Thread: Automation? (was Label vs ALT tag on form elements)
Number of posts in this thread: 5 (In chronological order)
Patrick H. Lauke wrote:
> Again, it's the AT support that's the clincher here, and the impact this
> choice of markup would have on usability in general.
Not that I'm actively searching for a project for the coming year, but
here goes....
There are quite a few "automatic" code checkers out there. Each one is
slightly (or a lot) different. Mostly, they check our code against one
or more standards. There are also some tools out there that give you an
idea of whether your code will have difficulties being rendered by
particular browsers.
I've been around for several discussions on this list about the relative
usefulness/uselessness of such tools, and I'm not trying to retrace old
steps. Let me just say that I'm coming from a perspective that
acknowledges the existence of such tools and some individuals' tendency
to use them to varying degrees, either as their entire accessibility
strategy or just a starting point.
Would it be helpful to have an automated checking tool that would
analyze a bit of code (i.e., a web page), and give a report about real
or potential difficulties with various assistive technologies?
For instance: the checker scans through the submitted code and finds a
"foo" tag with no attributes. The checker informs the user that the
"foo" tag is in use, that it is not supported by AT tools B, C, and D,
and that it is only supported by A if it has the bar="" attribute
(which is missing in my example).
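The kind of check described above could be sketched roughly as follows. This is a hypothetical illustration only: the "foo" tag, the bar="" attribute, the AT names A through D, and the support data are all invented placeholders, not real test results.

```python
# Hypothetical sketch of a rule-based AT-support checker.
# All tag names, attribute names, and AT support data below are
# invented placeholders for illustration.
from html.parser import HTMLParser

# Invented support table: tag -> which ATs don't support it at all,
# and which attribute some ATs require before they support it.
SUPPORT = {
    "foo": {
        "required_attr": "bar",
        "needs_attr": ["A"],
        "unsupported": ["B", "C", "D"],
    },
}

class ATSupportChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.reports = []

    def handle_starttag(self, tag, attrs):
        rule = SUPPORT.get(tag)
        if rule is None:
            return
        attr_names = {name for name, _ in attrs}
        if rule["unsupported"]:
            self.reports.append(
                f"<{tag}> is not supported by: {', '.join(rule['unsupported'])}")
        if rule["required_attr"] not in attr_names:
            self.reports.append(
                f"<{tag}> is only supported by {', '.join(rule['needs_attr'])} "
                f"when the {rule['required_attr']}=\"\" attribute is present")

checker = ATSupportChecker()
checker.feed('<foo>content</foo>')
for report in checker.reports:
    print(report)
```

The real work, of course, is not the scanner but populating the support table with tested, versioned data.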
I realize this would take a LOT of research and testing, preceded by
coming up with as complete a list as possible/desired of important
tags/elements to look for (labels, alt attributes, table summaries,
noscript tags, whatever), and assessing how various tools use or ignore
those tags/elements. But a tool like that could get us quickly past
the step of saying, "I forget how such-and-such version of so-and-so
deals with this kind of setup."
To anyone who just doesn't like automated checkers in the first place:
I realize you would never use such a tool. I would never suggest it as
a complete solution, but it could be a starting point, giving people
who want to produce not just "standard" code but also usable code a
way to save themselves some scouring and to enter a live testing phase
with fewer errors than they would have without the scan.
On 3/23/07, Michael D. Roush < = EMAIL ADDRESS REMOVED = > wrote:
> Would it be helpful to have an automated checking tool that would
> analyze a bit of code (i.e., a web page), and give a report about real
> or potential difficulties with various assistive technologies?
Here are a few initial thoughts. I certainly think such a tool would
be very valuable. We, in the accessibility field, tend to know much
less about assistive technology than we do about standards. But, as
you mention, such a project would take A LOT of work. Which AT would
you track? Would you only report on the most recent versions? What
about older versions that the majority of users may be using? New
versions come along quite often and keeping up with the
inconsistencies in them would take a lot of work.
Doing this testing would require a test suite of documents that you
could test various AT against. This set of documents could be as
valuable as the AT testing itself. I have yet to find a nice set of
documents for testing AT, or even accessibility reporting tools. The
W3C has the beginnings of these documents, but they really are quite
far from being useful at this point. Does anyone know of anything else?
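A test suite like the one described could start very small: one self-describing page per construct under test, so a tester can report exactly how a given AT handles that one thing. The feature names and markup snippets below are illustrative guesses, not an actual test set.

```python
# Sketch of generating a per-feature suite of AT test documents.
# The feature list and snippets are illustrative placeholders.
TEST_CASES = {
    "label-for": ('<form><label for="q">Search</label>'
                  '<input id="q" type="text"></form>'),
    "table-summary": ('<table summary="Monthly sales by region">'
                      '<tr><td>data</td></tr></table>'),
    "img-alt": '<img src="logo.png" alt="Company logo">',
}

TEMPLATE = """<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html><head><title>AT test: {name}</title></head>
<body><h1>{name}</h1>{snippet}</body></html>"""

def build_suite():
    # One small page per feature, titled after the feature it exercises,
    # so reports can be tied unambiguously to a single construct.
    return {name: TEMPLATE.format(name=name, snippet=snippet)
            for name, snippet in TEST_CASES.items()}

for name, page in sorted(build_suite().items()):
    print(name, len(page))
```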
My final thought is regarding whether we, as developers, should be
developing to AT or to standards. More and more, I think the field is
shifting to a mindset that we should develop to standards and that it
is the AT's responsibility to support those standards. Gone should be
the days when we violate or don't fully implement standards because AT
product X or browser Y doesn't work right when we do. As such, the
guidelines, as convoluted and confusing as they are, are an easier target
to hit than accommodating the many problems that exist in AT. Still,
understanding how AT works with different standards would certainly be
of value to us.
I would suggest that a preliminary step towards such a system would be to
set up a website (perhaps a wiki) that would contain short descriptions of
all existing screen readers/magnifiers/speaking browsers and that would
allow visitors to compare different items. Similar to www.cmsmatrix.org.
To my knowledge (which is limited), such a thing is not available yet.
I think it is crucial to design such a repository so that everybody can
contribute. In my view, this is the only way it can be kept up to date.
A first step would be to collect data on the most common AT and define a
number of features to be used in comparisons.
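The core of such a repository, along the lines of cmsmatrix.org, might be a matrix of AT products and versions against features. The product names, versions, and support values in this sketch are placeholders, not real test results.

```python
# Sketch of a repository's core data: AT product/version vs. feature
# support. All names and values below are invented placeholders.
AT_MATRIX = {
    ("ScreenReaderX", "7.0"): {"label-for": "yes", "table-summary": "partial"},
    ("ScreenReaderY", "4.5"): {"label-for": "yes", "table-summary": "no"},
}

def compare(feature):
    """Return how each known AT/version handles one feature,
    marking combinations nobody has contributed yet as 'untested'."""
    return {product: support.get(feature, "untested")
            for product, support in AT_MATRIX.items()}

print(compare("table-summary"))
```

Because each cell is independently contributable, visitors could fill in one (product, version, feature) result at a time, which is roughly how a wiki-style matrix stays current.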
Once such a repository is up and running, I don't think it would take long
to develop a modular system to crosscheck AT features/capabilities
against the markup being checked.
Jared Smith wrote:
> On 3/23/07, Michael D. Roush < = EMAIL ADDRESS REMOVED = > wrote:
>> Would it be helpful to have an automated checking tool that would
>> analyze a bit of code (i.e., a web page), and give a report about real
>> or potential difficulties with various assistive technologies?
> Doing this testing would require a test suite of documents that you
> could test various AT against. This set of documents could be as
> valuable as the AT testing itself. I have yet to find a nice set of
> documents for testing AT, or even accessibility reporting tools. The
> W3C has the beginnings of these documents, but they really are quite
> far from being useful at this point. Does anyone know of anything else?
Just to give a pointer to this work, W3C/WAI is developing a repository
of such tests for the WCAG 2.0 Techniques. The aim is to provide support
for developers of evaluation tools, authoring tools, user agents, and
assistive technologies, as well as to serve as a collection of best
practices for Web developers.
As you correctly note, this is quite a big project and we are currently
only at the start of it. Please contact me if you are interested in
contributing tests or in helping to review and refine contributions (we
have a couple hundred contributions in queue right now).
Shadi Abou-Zahra Web Accessibility Specialist for Europe |
Chair & Staff Contact for the Evaluation and Repair Tools WG |
World Wide Web Consortium (W3C) http://www.w3.org/ |
Web Accessibility Initiative (WAI), http://www.w3.org/WAI/ |
WAI-TIES Project, http://www.w3.org/WAI/TIES/ |
Evaluation and Repair Tools WG, http://www.w3.org/WAI/ER/ |
2004, Route des Lucioles - 06560, Sophia-Antipolis - France |
Voice: +33(0)4 92 38 50 64 Fax: +33(0)4 92 38 78 22 |
Giorgio Brajnik wrote:
> A first step would be to collect data on the most common AT and define a
> number of features to be used in comparisons.
> Once such a repository is up and running, I don't think it would take long
> to develop a modular system to crosscheck AT features/capabilities
> against the markup being checked.
That is a good wording of what I was trying to come up with in my mind.
Perhaps I could start with a generic list of the types of things that
ought to be tested, develop a series of web pages that use those
things, and then ask people to use various AT to "read" those pages and
report which AT they used and how it rendered various pieces of the page(s).
I say "pages" because I think I would need different pages for each
different doctype I test.
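Crossing each test snippet with each doctype could be automated, so every AT gets checked against every doctype/feature combination. This is a sketch under my own assumptions; the snippet name and markup are placeholders.

```python
# Sketch: wrap the same test snippet in one page per doctype, yielding
# a page per (feature, doctype) pair. Snippets here are placeholders.
DOCTYPES = {
    "html401-strict": ('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" '
                       '"http://www.w3.org/TR/html4/strict.dtd">'),
    "xhtml10-strict": ('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" '
                       '"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">'),
}

def pages_for(name, snippet):
    # One page per doctype, keyed "feature-doctype" so a tester's report
    # identifies exactly which combination was exercised.
    return {f"{name}-{dt}":
            f"{decl}\n<html><head><title>{name}</title></head>"
            f"<body>{snippet}</body></html>"
            for dt, decl in DOCTYPES.items()}

pages = pages_for("label-for", '<label for="q">Search</label><input id="q">')
for key in sorted(pages):
    print(key)
```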