WebAIM - Web Accessibility In Mind

E-mail List Archives

RE: Flash and Checkpoint 1.3


From: Steve Vosloo
Date: Aug 15, 2002 11:22PM


Thanks John.

> *Most* users requiring descriptions of visual files (gifs, jpegs, pngs,
> streaming video, flash animations, etc.) are probably using a "user
> agent" which *IS* reading out loud text (JAWS, IBM HPR, etc.), so
> providing a text file which storyboards the actions (or including the
> action information as part of the overall script file) would probably
> suffice, but the guideline does not address this possibility.

<more opinion>
I totally agree. The guidelines should be divided into "real world" and
"ideal world" suggestions. The site I'm dealing with has thousands of
Flash movies. They're all complementary to existing information -- it's
a school site. So a user (student) of the site reads the HTML text and
then sees a Flash animation of the theory. If she were reading a book,
she'd see a diagram. The web is interactive so we're able to offer an
animation. But the point is that without the animation the student would
still be supported by the other channels -- text and images. In this
spirit, and acknowledging that providing auditory descriptions of
thousands of Flash movies is prohibitively expensive, I think a text
description is a good 2nd option.
</more opinion>
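A text description as a second option could be wired up with something like this minimal HTML sketch (the filenames are hypothetical); the fallback content inside <object> also serves user agents that cannot render Flash at all:

```html
<!-- Sketch only: "theory-animation.swf" and the description page
     are hypothetical names, not from the actual site. -->
<object type="application/x-shockwave-flash"
        data="theory-animation.swf" width="400" height="300">
  <!-- Shown by user agents that do not render the Flash movie -->
  <p>Animation illustrating the theory described above.
     <a href="theory-animation-description.html">Read a text
     description of this animation</a>.</p>
</object>
```

The same link to the text description could also sit visibly beside the movie, so sighted users who simply can't (or don't want to) load Flash get the content too.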


-----Original Message-----
From: John Foliot - bytown internet [mailto: <EMAIL REMOVED> ]
Sent: 15 August 2002 01:52 PM
To: <EMAIL REMOVED>
Subject: RE: Flash and Checkpoint 1.3


Steve,

I'm still trying to get my head around WCAG Priority 1 Checkpoint 1.3:

"1.3. Until user agents can automatically read aloud the text equivalent
of a visual track, provide an auditory description of the important
information of the visual track of a multimedia presentation."

> Am I correct in saying that there is not one single site out there,
> which has a Flash movie on it without an accompanying auditory
> description of the movie, that is compliant with W3C level-A?


If you subscribe to strict adherence to all WCAG checkpoints (Bobby
Fans!), then, yes, you are essentially correct.

The spirit of the checkpoint (by my interpretation) is that when
presenting a multimedia presentation, the essential information must be
conveyed to all users. Thus an audio narration requires a text
transcript for those who cannot access the audio content, and a visual
presentation must be "story-boarded" for those who cannot access the
visual presentation. Like a storyboard, not every single motion or
action need be documented, but the key, essential actions must be
delivered to the end user ("... description of the important
information..."). The checkpoint states that this "storyboard"
treatment must (should?) be made available as an audio track. Taking it
to the MAX, the description should probably be presented in both audio
and text formats; my personal concern is that it may in fact reach a
point where the content developer is delivering too much information
simultaneously, causing "brain-overload" at the user end. I suppose the
ultimate answer is to provide the media in multiple
formats/configurations, allowing the end user to choose the delivery
options which best suit their particular needs. I further suppose that
something of this nature could be developed using Flash, but it would
probably involve a fair bit of development work.

The National Center for Accessible Media (Media Access Group of WGBH /
Public Television) has a showcase page with various streaming media
examples, and may be found at:
http://ncam.wgbh.org/richmedia/showcase.html

<opinion>
Many content developers are faced with the daunting task of using the
often vaguely worded concepts outlined in the WCAG guidelines as
Standards, even though they were never written in the language of
Standards. The spirit of this checkpoint is a good one, but the
practicality of it poses serious developmental considerations. Using
SMIL (a W3C approved technology - guideline Priority 2 - 11.1) allows
the simultaneous inclusion of text and audio/video (a.k.a. captioning)
but the inclusion of a second (optional) audio track which runs
concurrent with the main presentation is problematic at best. *Most*
users requiring descriptions of visual files (gifs, jpegs, pngs,
streaming video, flash animations, etc.) are probably using a "user
agent" which *IS* reading out loud text (JAWS, IBM HPR, etc.), so
providing a text file which storyboards the actions (or including the
action information as part of the overall script file) would probably
suffice, but the guideline does not address this possibility.
</opinion>
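The SMIL captioning approach could be sketched roughly as follows (filenames and region names are hypothetical); the <par> element plays the video and a timed text stream in parallel, which is what gives you captioning:

```xml
<!-- Minimal SMIL 1.0 sketch: all src and region names are
     hypothetical. The <par> block synchronizes video and captions. -->
<smil>
  <head>
    <layout>
      <root-layout width="320" height="280"/>
      <region id="video-area" top="0" left="0" width="320" height="240"/>
      <region id="caption-area" top="240" left="0" width="320" height="40"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="lesson.rm" region="video-area"/>
      <textstream src="lesson-captions.rt" region="caption-area"/>
    </par>
  </body>
</smil>
```

This handles the text track well enough; it's the second, concurrent *audio* description track that, as noted above, remains the hard part.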


> My initial idea of providing an alternative text-only description is
> apparently not sufficient. What's a developer to do? Remove the flash
> movie or find the nearest sound recording studio?


Mic test... testing one, two, three, check.

Good Luck

JF


Thanks
Steve

Steve Vosloo
Division Manager
Usability Jun