
Re: Looking for techniques for accessible maps


From: Dianne V Pawluk
Date: Dec 6, 2013 8:28AM


Hi Whitney and Robert,


I apologize for taking so long to get back to you, especially Robert, but
this term has really been busy for me.


As Robert knows, I'm a researcher who works on developing assistive
technology for tactile graphics. I have had two grants in the past, and
have a current grant, from the National Science Foundation for work on
tactile graphics.


My current work is on automating the process of converting visual diagrams
into tactile diagrams. My previous work was on developing effective
representations for tactile diagrams and affordable devices that could act
like computer peripherals for virtually interacting with diagrams in
real time. The overall goal is to automate the process and provide
effective, easy access to visual diagrams for individuals who are blind or
visually impaired, without the need for intervention by a sighted person.


First, let me talk about representations. The issue of how to represent
information is very important. Currently, the most common method of
presenting information is raised-line diagrams. Unfortunately, studies have
shown that even when these are used to identify common objects,
individuals' ability to use these representations is very poor:
identification rates are only around 30%. Therefore my laboratory group, as
well as another group (Thompson, Chronicle and Collins), have separately
looked at more effective methods of presenting information.


Thompson, Chronicle and Collins (2006). Enhancing 2-D Tactile Picture
Design from Knowledge of 3-D Haptic Object Recognition. European
Psychologist, 11(2), 110-118.


Burch and Pawluk (2011). Using Multiple Contacts with Texture-enhanced
Graphics. World Haptics Conference, 287-292.


We both had the idea of using texture to encode information that is
difficult to interpret through touch. Geometric information is processed
serially (i.e., with one finger following along a line) rather than in
parallel as in vision. This makes it difficult to interpret what the lines
mean (e.g., a line may not be part of the outline of an object but may
instead indicate some detail or perspective). In vision, multiple lines are
seen together, so their constraints can be solved simultaneously; touch,
being serial, relies on solving them through memory, which is much more
difficult.


Both groups use texture to encode information about object parts (i.e., a
different object part gets a different texture) and about 3-D orientation
(i.e., different 3-D orientations are indicated by different 2-D
orientations of the textures). We both found significant improvement with
this new method.
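
To make this encoding concrete, here is a minimal sketch in Python (my own
illustration, not code from either study; the part names, textures and
values are invented): each object part gets its own texture, and a
surface's 3-D orientation is mapped to the 2-D orientation of its texture.

    import math

    # Hypothetical texture palette: each object part gets a distinct pattern.
    PART_TEXTURES = {"body": "dots", "handle": "grid", "spout": "waves"}

    def texture_angle(nx, ny, nz):
        """Map a surface normal to a 2-D hatching angle (degrees).

        This simple sketch uses only the in-plane component of the normal,
        so surfaces tilted in different 3-D directions get differently
        oriented textures.
        """
        return math.degrees(math.atan2(ny, nx))

    def encode_part(part_name, normal):
        pattern = PART_TEXTURES[part_name]   # which texture to emboss
        angle = texture_angle(*normal)       # how the texture is oriented
        return {"pattern": pattern, "angle_deg": angle}

    print(encode_part("handle", (0.5, 0.5, 0.707)))  # grid hatching at 45.0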


However, our group believed that there was another significance to using
texture. Psychologists (particularly Lederman and Klatzky) have shown that
touch is more effective at interpreting material properties than geometric
information. In particular, when searching for objects placed across the
fingers, they found that parallel processing (i.e., information to all
fingers being processed simultaneously) occurred for material properties
(such as texture) but not for geometric properties. We wondered whether
this would hold for interpreting tactile graphics. David's work (above)
showed that not only did our texture-encoded information improve
performance over raised-line drawings, performance improved even more when
multiple fingers were used (this was not true for raised-line drawings,
where it did not matter whether one or more fingers were used)!


This is why I would strongly recommend that fields, etc. be given textures
unique to that item (i.e., different crops may each get a particular
texture, border spacing another texture, etc.). This will allow quicker
processing of information than raised-line drawings alone. We have actually
recently implemented this idea on capsule paper for a botanical garden in
Richmond, and it seems to be very effective. The textures we chose come
from a compilation by Lucia Hasty (www.tactilegraphics.org) of
experimentally assessed textures, so that the textures used are actually
known to be distinguishable from each other. We have had to make some
adjustments, as some textures did not feel as unique as we thought they
would. This could be used in combination with something like a talking pen
or a talking tablet to provide text information at certain points.
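
As a concrete illustration of this recommendation, here is a hypothetical
sketch (all feature names, textures and coordinates are invented for the
example): each map feature class gets a unique texture, and
point-of-interest labels are attached for a talking pen or talking tablet.

    # Hypothetical feature-to-texture assignment for a tactile map.
    # Texture names stand in for distinguishable patterns such as those
    # in Lucia Hasty's compilation.
    FEATURE_TEXTURES = {
        "wheat_field": "fine_dots",
        "corn_field": "diagonal_lines",
        "border_spacing": "smooth",
        "water": "wave_lines",
    }

    # Points of interest: (x, y) in map coordinates -> spoken label.
    POINTS_OF_INTEREST = {
        (120, 45): "North wheat field, 12 hectares",
        (200, 80): "Irrigation channel",
    }

    def texture_for(feature):
        return FEATURE_TEXTURES[feature]

    def label_near(x, y, radius=10):
        """Return the spoken label of any point of interest within radius."""
        for (px, py), text in POINTS_OF_INTEREST.items():
            if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                return text
        return None

    print(texture_for("corn_field"))   # diagonal_lines
    print(label_near(118, 47))         # North wheat field, 12 hectares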


You may also be interested in the affordable devices we have developed that
allow exploration of a diagram, such as a map, to happen interactively with
a computer. David's devices are shown in his paper. These are small devices
that wrap around the finger. They pick up the color from a video display
that presents a color version of the textured graphic. The finger devices
know by the color which texture to present, which they do through a
vibrator. One can use as many devices on as many fingers as one wants, but
two to three devices seem to work best (we have not submitted these results
for publication yet).
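
A minimal sketch of that sensing loop, under my own assumptions (the color
values, vibration parameters and motor interface are invented; the real
devices are described in David's paper): sample the display color under the
fingertip and drive a vibration pattern keyed to that color.

    # Hypothetical mapping from region colors to vibration patterns.
    COLOR_TO_VIBRATION = {
        (255, 0, 0): {"freq_hz": 250, "duty": 0.8},   # red region: strong buzz
        (0, 0, 255): {"freq_hz": 100, "duty": 0.4},   # blue region: soft pulse
        (255, 255, 255): None,                        # background: no vibration
    }

    def nearest_color(rgb):
        """Snap a sampled pixel to the closest known region color."""
        return min(COLOR_TO_VIBRATION,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(c, rgb)))

    class DemoMotor:
        """Stand-in for the vibrator on one finger device."""
        def vibrate(self, freq_hz, duty):
            print(f"vibrate at {freq_hz} Hz, duty {duty}")
        def stop(self):
            print("stop")

    def update_finger(sensed_rgb, motor):
        """One step of the loop: sense the color under the finger, drive the motor."""
        pattern = COLOR_TO_VIBRATION[nearest_color(sensed_rgb)]
        if pattern is None:
            motor.stop()
        else:
            motor.vibrate(pattern["freq_hz"], pattern["duty"])

    update_finger((250, 10, 5), DemoMotor())  # near-red pixel -> strong buzz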


Another affordable device that we made in the laboratory is a mouse-like
tactile display. This consists of a braille cell mounted on a hollow mouse
case and used on top of a graphics tablet (although a touchscreen would
work just as well). Robert, this may sound similar to the VTPlayer by
VirTouch, but there were some fundamental problems with their design that
we corrected. First, they used regular mouse technology, which does not
produce accurate position information; in fact, it can be horribly
inaccurate. To see this, rotate a regular mouse about 45 degrees to the
left and move it straight vertically: you will find the cursor moving
toward the corner of the screen rather than straight upwards. Second, the
pins sat at a very different place than the point whose position was
sensed: if the mouse was rotated, the VTPlayer did not detect this and so
did not compensate for the changed position of the pins.
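
To illustrate the correction (a sketch under my own assumptions, not code
from our device): an absolute sensor such as a graphics tablet can report
both position and orientation, so the true pin locations can be computed by
rotating their fixed offset from the sensed point. A relative mouse sensor
reports neither, which is why this compensation was impossible for the
VTPlayer.

    import math

    # Hypothetical geometry: the braille cell sits at a fixed offset from
    # the point whose position the tablet reports (millimetres).
    PIN_OFFSET = (30.0, 0.0)

    def pin_position(sensed_x, sensed_y, heading_deg):
        """True pin location given sensed position and device heading."""
        theta = math.radians(heading_deg)
        ox, oy = PIN_OFFSET
        # Rotate the fixed offset by the device heading, then translate.
        px = sensed_x + ox * math.cos(theta) - oy * math.sin(theta)
        py = sensed_y + ox * math.sin(theta) + oy * math.cos(theta)
        return px, py

    # Rotating the device 45 degrees moves the pins even though the sensed
    # point stays still, which is exactly what went uncorrected.
    print(pin_position(100.0, 100.0, 0.0))   # (130.0, 100.0)
    print(pin_position(100.0, 100.0, 45.0))  # (~121.2, ~121.2)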


One of our papers on this is the following:

Headley, Hribar and Pawluk (2011). Displaying Braille and Graphics on a
Mouse-like Tactile Display. ASSETS 2011.


As you can see, we found an effective way to present both braille and
graphics with the same display while keeping them easily distinguishable.


For this mouse-like device, we developed a method of controllably
generating distinctly different textures, but we have not evaluated its
performance for picture identification or in comparison to our other
device.


Headley and Pawluk (2011). Roughness Perception of Textures on a Haptic
Matrix Display. World Haptics, 221-226.


We have also, in recent work, combined tactile feedback through a
touchscreen with text feedback for what we describe as points of interest,
for an electronic version of the garden map.


Another alternative, which we are planning to evaluate in comparison to
tactile feedback, is audio feedback called sonification (the use of
non-speech sounds to relay information, just as we do with texture). We
believe the disadvantage of this is that one would not be able to use
multiple fingers, which we showed improves performance significantly.


However, one study I did with one of my students, Ravi Rastogi, which we
are preparing for publication, looked at the use of tactile and
sonification feedback for maps that have more than one set of features. The
maps we used actually showed crops (mangoes, etc.) and rainfall. With
sight, it is relatively easy to overlay this information without being
overwhelmed by it. We looked at using: two very distinct sets of tactile
textures, two very distinct sets of audio feedback, and one set of tactile
feedback with one set of audio feedback. The reason for this comparison is
that cognitive load theory proposes that each sensory channel has a finite
working-memory capacity. We therefore reasoned that spreading the load
across two modalities would improve performance, and in fact that is what
we found.


The main part of Ravi’s work was looking at methods of interactive zooming
and simplification.


Rastogi, Pawluk and Ketchum (2013). Intuitive Tactile Zooming for Graphics
Accessed by Individuals Who Are Blind and Visually Impaired. IEEE
Transactions on Neural Systems and Rehabilitation Engineering.


Rastogi and Pawluk (2013). Dynamic Tactile Diagram Simplification on
Refreshable Displays. Assistive Technology, 25.


Both of these are potentially applicable to agricultural map information.
With zooming, we were looking at the issue that visual zooming methods did
not seem appropriate. With many visual zooming methods, the zoom that
occurs with a button click is often unhelpful: either there is not much of
a change from the previous view, or the zoom inconveniently cuts through
information that should stay together. This is easy to correct visually, as
a person can quickly glance at the result and zoom back out to the level
desired. It is not so easy to do tactually, as exploring a map tactually is
a much slower process. We developed an algorithm that uses the conceptual
organization of the information to scale between levels appropriately, thus
saving time.
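
To make the idea concrete, here is a hypothetical sketch (my own
simplification, not the published algorithm; the hierarchy is invented):
the map is organized as a tree of conceptual groupings, and each zoom step
jumps to the next level of that tree, so every step yields a meaningfully
different, coherent view.

    # Hypothetical conceptual hierarchy for an agricultural map.
    MAP_HIERARCHY = {
        "region": {
            "farm_A": {"wheat_field": {}, "corn_field": {}},
            "farm_B": {"orchard": {}, "irrigation": {}},
        },
    }

    def zoom_in(tree, path):
        """Descend one conceptual level under the current node, if any."""
        node = tree
        for key in path:
            node = node[key]
        children = list(node)
        # Jump to a child grouping rather than scaling by a fixed factor.
        return path + [children[0]] if children else path

    def zoom_out(path):
        """Ascend one conceptual level."""
        return path[:-1] if path else path

    view = ["region"]
    view = zoom_in(MAP_HIERARCHY, view)   # ["region", "farm_A"]
    view = zoom_in(MAP_HIERARCHY, view)   # ["region", "farm_A", "wheat_field"]
    view = zoom_out(view)                 # ["region", "farm_A"]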


With simplification, we looked at the issue of what to do if there are many
different features that it would be desirable to present on a map.
Unfortunately, too much information on a diagram makes it very difficult to
interpret tactually: and remember that the amount of information presented
in a normal visual diagram is already too much, as the visual and tactile
systems differ in their processing abilities. Also, currently it is the
sighted maker of the tactile graphic who decides whether information should
be included or not. That decision is based on the user's intent, but what
if that intent changes? The user would have to go back to the maker and ask
for another map. We looked at a method where different sets of features are
placed on different “layers”, indistinguishable in the final map, which the
user is able to remove, add and combine at will. Thus, all the information
is available to the user, but they can select what they want at that
particular moment to avoid cluttering the diagram. We found that this
helped a lot. We also looked at how borders are presented, as straight-line
borders are easier to track than meandering ones; again, we allowed users a
choice. The idea was that they could use the simplified version to get an
overview and then the more detailed version if needed.
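
A minimal sketch of the layer mechanism (my own illustration; the feature
and layer names are invented): each feature set lives on its own layer, and
only the layers the user currently has enabled are composed into the
rendered map.

    # Hypothetical layered tactile map: the user adds, removes and
    # combines feature layers at will on a refreshable display.
    class LayeredMap:
        def __init__(self, layers):
            self.layers = layers        # layer name -> list of features
            self.enabled = set()

        def add(self, name):
            self.enabled.add(name)

        def remove(self, name):
            self.enabled.discard(name)

        def render(self):
            """Compose only the enabled layers into the displayed view."""
            view = []
            for name in sorted(self.enabled):
                view.extend(self.layers[name])
            return view

    m = LayeredMap({
        "crops": ["wheat_field", "corn_field"],
        "rainfall": ["rain_zone_low", "rain_zone_high"],
        "borders_simplified": ["straight_border"],
        "borders_detailed": ["meandering_border"],
    })
    m.add("crops")
    m.add("borders_simplified")    # simplified overview first
    print(m.render())
    m.remove("borders_simplified")
    m.add("borders_detailed")      # switch to detail when needed
    print(m.render())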


I hope I have not talked too long. Whitney and Robert, if you have any
questions, or Whitney, if your colleague has any questions, I would be
happy to answer them or discuss things over the phone.



Sincerely,

Dianne Pawluk

Associate Professor

Biomedical Engineering

Virginia Commonwealth University


On Sat, Nov 16, 2013 at 3:52 PM, Whitney Quesenbery < <EMAIL REMOVED> > wrote:

> Thanks Dianne. Is there a web site for your project that I can point my
> friend to?
>
>
> On Sat, Nov 16, 2013 at 3:13 PM, Dianne V Pawluk < <EMAIL REMOVED> > wrote:
>
> > Hi Robert,
> >
> > I realize now that I forgot to respond to you about our project - I
> wanted
> > to find the time to give you a complete response and then got burdened
> with
> > work.
> >
> > I will be happy to talk to both Whitney and yourself about her problem,
> and
> > I hope to gather information for you on my tactile graphics projects
> soon.
> >
> > Sincerely,
> > Dianne Pawluk
> > Associate Professor
> > Biomedical Engineering
> > Virginia Commonwealth University
> >
> >
> >
> > On Sat, Nov 16, 2013 at 1:56 PM, Robert Jaquiss < <EMAIL REMOVED>
> > >wrote:
> >
> > > Hello:
> > >
> > > How about downloadable files that can be produced with image
> capable
> > > embossers or capsule paper?
> > >
> > > Regards,
> > >
> > > Robert
> > >
> > >
> > >