
Loss Aversion and Web Accessibility

I think we can all agree that the field of web accessibility needs more good data. Many of the arguments we make for “best practices” and guideline development are not based on substantive data. And much of the data that we do have may be fundamentally flawed due to loss aversion. This is an interesting principle that can affect your own web site accessibility decisions.

What is Loss Aversion?

Loss aversion is an aspect of psychology and decision theory that refers to people’s tendency to strongly prefer avoiding losses to acquiring gains. Studies have shown that a person’s aversion to loss is twice as strong as their interest in gains.

For example, a person will lose more satisfaction by losing a $100 bet than they gain by winning a $100 bet. In the field of accessibility, the prospect of decreased accessibility may weigh more heavily on users than the prospect of an equivalent gain in accessibility. This is loss aversion.

Loss Aversion and User Opinion

Loss aversion can have notable effects upon the accuracy of collected data, particularly when surveying a subject’s opinions. In the field of accessibility, most of our data is based on opinion.

We have seen evidence of this in our own surveys of users with disabilities. If you ask a user if they would prefer more web content or less web content, they predominantly indicate that they want more, regardless of the drawbacks. Subjects generally indicate that they prefer more verbose alternative text, explanatory descriptions, etc. Of course nobody wants to be left out. But sometimes users have difficulty telling you what they actually want. Actual user testing often indicates that people with disabilities really prefer efficiency over verbosity.

In short, there may often be a disconnect in accessibility between what users indicate they want and what they actually prefer.

This is NOT to suggest that we should ignore the opinions of users with disabilities. But we do need to consider the impact of loss aversion when we create recommendations or prescribe “best practices” based on those opinions.

Accessibility Implications

“It is better to give blind users more information than less.”

This is something I’ve often heard, and accessibility guidelines generally prescribe as much. This idea can cause authors to provide verbose descriptions when considering alternative text, for example. Authors often provide alternative text (rather than alt="") for images that do not convey useful content. We sometimes see off-screen, screen reader-only text used to provide instructions and descriptions for page elements that are already fully accessible. Long description pages for images, table summary attribute values, title attribute values, ARIA labels/descriptions, etc. often include additional information just for screen reader users.
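
As a small illustration (the file name and wording are hypothetical), an image that conveys no useful content is arguably better served by an empty alt attribute than by a description:

    <!-- Empty alt: screen readers can skip the purely decorative image -->
    <img src="divider.png" alt="">

    <!-- Verbose alt driven by “more is better”: adds noise without adding content -->
    <img src="divider.png" alt="Decorative blue horizontal line separating sections of the page">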

Are these approaches for providing additional accessibility information really best? Or are they based on loss aversion?

WCAG Techniques for Alternative Text

As an example, WCAG techniques for providing alternative text for images with ARIA labels, title attributes, and <figure>/<figcaption> are currently being considered. Despite providing efficient image alternatives, adequate accessibility, and meeting the normative requirements of success criterion 1.1.1, these techniques have generally been poorly received in favor of traditional methods.
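
A rough sketch of the kinds of techniques being considered (file names and wording are hypothetical); in each case the alternative is provided once, without being repeated in an alt attribute:

    <!-- aria-labelledby: visible text doubles as the image’s alternative -->
    <p id="sales-text">Sales rose 40% between 2012 and 2013.</p>
    <img src="sales-chart.png" aria-labelledby="sales-text">

    <!-- title attribute supplying the alternative -->
    <img src="sales-chart.png" title="Sales rose 40% between 2012 and 2013">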

Alternative techniques have been suggested that would require duplication of alternative text – typically once in a label, caption, or adjacent text, and once in the alt attribute on the image. There have been suggestions that references in the alt attribute to the actual location of the alternative text (e.g., alt="The image content is found below.") are required for conformance (I believe presenting anything other than alternative text in the alt attribute to be misuse).
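
The suggested duplication would look something like this hypothetical example, in which the same sentence appears as both the caption and the alt attribute, so most screen reader users encounter it twice:

    <figure>
      <img src="sales-chart.png" alt="Sales rose 40% between 2012 and 2013.">
      <figcaption>Sales rose 40% between 2012 and 2013.</figcaption>
    </figure>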

The techniques above are based on the notion that users always prefer to be told that content is presented in an image, even at the cost of repetition, over simply being presented the content once as text. And this notion is based on users’ opinions that they want more information, not less. And these opinions might be based on loss aversion.

Do users really prefer duplication of content to support image detection over content being presented efficiently? Is it possible that these new WCAG techniques to support ARIA and HTML5 may be rejected because of deference for opinions that may be based on loss aversion and the notion that more really is better? Are there other areas where loss aversion may be prescribing techniques that are not really optimal for end user accessibility?

Addressing Loss Aversion in Accessibility Data

Consider the following question posed to screen reader users:

Do you prefer that the presence of images be identified even if this results in redundancy?

I suspect that most screen reader users would answer “Yes”. They would not want to lose the information about the image being present. Loss aversion?

What if we instead asked:

Do you prefer that alternative text be repetitively duplicated or do you prefer that it be presented efficiently?

I think users would predominantly indicate that they prefer efficiency.

The problem is that current techniques can’t really support both opinions – users don’t have an option to avoid the repetition when reading the page. It is forced upon them. It is, therefore, important that we prescribe techniques that result in optimal accessibility.

The Need for Better Data

Of course the best approach is not to simply ask users what they think, but to actually test users in real-world environments and situations to determine what best affects their experience. Good user testing and data collection is largely an untapped area in web accessibility. As guidelines and technology evolve, and as users with disabilities become more savvy, it will be vital that guidelines and best practices be based on substantive user data.

Comments

  1. AlastairC

    I’m not going to disagree with the need for data; however, my arguments against relying on figcaption as alt-text were not intended to favour duplication.

    I’ve found authors tend to assume people can see captions, and write things that are not suitable as alt text, at least for people who can’t see the image.

    Take a news story as an example: http://www.bbc.co.uk/news/uk-25973344

    The caption assumes you can see it: “At Muchelney in Somerset, supplies including post have been brought in by boat for several weeks”.

    The alt on the image does not: “A rescue worker wades through water carrying a box full of letters”

    For authors who know how to provide alt text, it is not the same as a caption. For authors who don’t know how to provide alt text, you won’t overcome the problem by using figcaption as the alt.

    On the broader point, changing the question just biases people the other way. You actually want to test this in a way that the user doesn’t know what the test is about and look at behaviour, not opinion. For example, a split-condition test where we present two different pages and ask people to score the quality of content in each, not specific to alt text.

    In one condition you use minimal alt text for the first page, redundant alt text on the second page. In a second condition (with different participants) you use different versions of the same page, with the alt-text difference the other way around. That way you balance for type of content and order effects.

  2. Jared Smith

    Alastair-

    Thank you for the comments. You are correct that figcaption often does not provide good alternative text. But this doesn’t mean that figcaption CAN’T provide good alternative text.

    Consider a photo of me on a web site with my name immediately after it. Figure and figcaption provide a very efficient and accessible way of associating the caption with the image itself (and indeed, this is the ONLY way using standard HTML to create this association). The caption presents the full alternative text of the image.
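
    A minimal sketch of that scenario (the file name is hypothetical), with the caption itself serving as the image’s alternative, as the HTML5 spec allows:

      <figure>
        <img src="jared-smith.jpg">
        <figcaption>Jared Smith</figcaption>
      </figure>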

    Why preclude this technique when it fully meets the normative success criteria for alternative text?

    Because it won’t always be used correctly? If this is the case, then we should not have any alt attribute techniques either, right?

    The fact is that authors are already doing this all the time – it’s part of the HTML5 spec. I think WCAG should provide guidance for techniques that can provide full conformance rather than exclude them because there’s a chance authors might do them wrong. Yes, there are times when caption != alternative text, but without guidance, it’s more likely they’ll continue to mess up.

  3. AlastairC

    Hi Jared,

    I agree with your scenario (using null alt text to prevent repetition), however that isn’t specific to images in figures, that’s the same for any image with duplicate text next to it.

    It is very similar to technique H2: http://www.w3.org/TR/WCAG20-TECHS/H2

    I’m not saying it should be invalid to use figcaption as alt-text, but that the validity comes from how you consider the alt, NOT that figcaption is suitable as a replacement to alt-text.

    The impact of looking at it that way around is that you would intentionally add a null alt, rather than just missing it out (which is what the HTML5 spec says).

    I would rather the test for this be:
    1. Is there suitable alt text? No?
    2. Does the adjacent text (or perhaps programmatically associated text) fulfil the alt text’s purpose, and is there a null alt? Yes – fine.

    But that applies for any markup, not just figcaption.
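
    Expressed as markup (file name hypothetical, building on Jared’s example above), the difference is that the author deliberately sets a null alt to indicate that the adjacent or associated text does the job, rather than simply leaving alt off:

      <figure>
        <!-- Explicit null alt: the caption is intentionally the alternative -->
        <img src="jared-smith.jpg" alt="">
        <figcaption>Jared Smith</figcaption>
      </figure>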

    Having this one exception to alt text just makes it a harder sell (thinking from a training point of view) as you have to go through when that could apply. I would preclude the technique because it very rarely (in practice) meets the success criteria for 1.1.1.

    I’m sceptical that authors are “doing this all the time”, at least due to being informed by the spec. I wouldn’t be at all surprised if people add captions and forget alt text, but that’s just because they would generally forget alt text.

    And that’s the bottom line: If people aren’t thinking of alt text in relation to their captions, it will not be alternative. It won’t meet SC 1.1.1.

  4. Jared Smith

    Alastair-

    I agree with everything you’ve posted. We have seen these situations in the wild, though they are not common (yet?).

    The question is really whether it would be useful for these types of things to be documented in WCAG techniques. As it currently stands, there’s no clear guidance about the usage of figure/figcaption, etc. And while authors will always neglect good alternative text regardless of the implementation, having guidance and documentation about proper and improper usage is more likely to influence proper behavior than utter silence.

  5. AlastairC

    Hi Jared,

    I’ve recently joined the WCAG committee and I’m trying (!) to get some tasks & techniques done. Figcaption for SC 1.1.1 was my first, and one I got stuck on due to not really agreeing with it in principle.

    However, I’ve a few to add under SC 1.3, where I think img/figure/figcaption are very suitable, so hopefully we can get some of those out the door in the not too distant future.

    As a newbie in that committee it is obvious there is a very structured process that is being observed; I’m still feeling my way. There does seem to be a tendency to fit techniques into a particular pot (e.g. HTML or CSS, but not both), which I haven’t got to the bottom of yet.

  6. Olaf Drümmer

    I find the concepts being discussed in this article interesting, but I am struggling to see their applicability outside the topic of providing alternative text. Accessibility is a much broader subject – where else, outside of the alt attribute and friends, would loss aversion kick in?

  7. John Foliot

    Long description pages for images, table summary attribute values, title attribute values, ARIA labels/descriptions, etc. often include additional information just for screen reader users.

    Longdesc? Did somebody say longdesc?

    If you’ve spent more than an hour working in the web accessibility space, you probably know how I feel about longdesc (it is more valuable than many will be prepared to admit).

    Longdesc in particular deals with one of the problems Jared is mentioning: the disconnect that loss aversion creates, which prompts users to say they want more, yet actually prefer less. The brilliance of longdesc, the most interesting functionality it brings to the table, is that it allows the author to actually provide both! You can provide your succinct alternative (text) to the image and avoid the overly verbose run-on by putting that additional information (the verbosity) in a separate document and then *PROVIDE THE END USER THE CHOICE* of whether or not, *today*, they want to hear the more or the less. No other native element, attribute or technique available to us today allows for that choice mechanism (at least not as neatly as longdesc does in screen readers that support the attribute).
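
    A sketch of that pattern (file names hypothetical): a succinct alt for everyone, with the verbose detail kept one optional step away behind longdesc:

      <img src="rainfall-chart.png"
           alt="Chart of 2013 monthly rainfall"
           longdesc="rainfall-chart-description.html">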

    Before anyone starts to re-open the longdesc debate (you aren’t going to change *my* mind), I will admit that yes, longdesc has had a rocky and uneven life. But the failure of AT and the browsers to do something brilliant with longdesc is not reason alone to abandon it; instead, if we continue to use longdesc, it provides the incentive for AT and the browsers to improve, not ignore, this valuable mechanism.

    Jared, I generally agree with what you are suggesting, but I think the problem lies not with the content creators, but rather with the tools: the end user should be able to navigate and choose what they do and don’t get (and when), whether through modifiable but persistent settings – browsers *should* have a setting that allows the end user to override HTML5’s autoplay attribute, and screen readers *should* allow the end user a more finely ‘tunable’ verbosity matrix (always announce parentheses, but keep the general verbosity level low) – or through more readily available contextual help prompts. Perhaps Benetech’s work around communicating accessibility metadata via schema.org will provide some insights and a path forward (http://www.a11ymetadata.org/).

    More is not bad; how we (and the end user) manage more is the problem that needs fixing.

  8. David Sloan

    Jared, this is an excellent and thought-provoking post. I see its real value not just in revisiting fine-grained accessibility issues, but also in the wider challenge of how to provide a better user experience across a page or series of pages, which might be affected by the cumulative effect of an accessibility problem (or a poorly implemented solution) at several points in a user journey. What works in isolation might become irritating over time.

    The concept of loss aversion made me think of the related observed phenomenon of people optimistically reporting on their experience, perhaps in order to avoid upsetting the researcher, who they assume will not want to hear criticism, or because they believe the interface being tested is so close to completion that “it must be OK, and me criticising it would be pointless as they wouldn’t fix it anyway”. Or perhaps participants are just reluctant to admit their own failings in task completion. This is something that HCI studies involving older, less experienced participants have reported, and it emphasises the need for multiple data gathering methods.

    Another argument to force us into new, better ways of thinking about and doing inclusive user research.

  9. Gian Wild

    We often recommend additional information for screen reader users and it is based on solid research, such as that conducted by Russ Weakley and Roger Hudson on hidden structural labels for navigation.
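
    For example (the wording and inline styles are only illustrative), a hidden structural label is typically an off-screen heading placed before the navigation list:

      <h2 style="position: absolute; left: -10000px;">Site navigation</h2>
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/about">About</a></li>
      </ul>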

    And we also require SUMMARY on data tables (as well as CAPTION) because the research done by the Australian Human Rights Commission indicates that it is being used by screen reader users. Even with coded data table headers, data tables are very difficult to navigate for a screen reader user and therefore having information in the CAPTION (ie the title of the data table) and the SUMMARY (ie information summarising the contents) is useful to these users.
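
    Sketched in markup (the headings and figures are hypothetical), that guidance pairs a CAPTION naming the table with a SUMMARY describing how it is organised:

      <table summary="Each row gives a month and its total rainfall in millimetres.">
        <caption>2013 monthly rainfall</caption>
        <tr>
          <th scope="col">Month</th>
          <th scope="col">Rainfall (mm)</th>
        </tr>
        <tr>
          <td>January</td>
          <td>112</td>
        </tr>
      </table>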

    However I have also come across vision impaired users who want to know the colour of the bikini the girl is wearing in a beer ad…

  10. Jared Smith

    Thanks David and Gian. We often design interfaces for a user’s first experience with a web page or application. Nearly all testing is conducted with a user encountering and orienting with an interface for the first time. These initial interactions suggest that more information may be helpful.

    I think, however, that sometimes, particularly with web applications, it may be better to consider the user’s 1000th interaction with the web page. In that situation, efficiency and lack of redundancy become an important part of accessibility and usability.

  11. Dey Alexander

    The field of accessibility really needs ‘usability testing’ (task-based testing with observations of user behaviour). Methodologies need to be rigorous, and fully reported so that the validity and reliability of the data can be determined and we can act with confidence, or know when not to act.

    Almost all of the testing that happens is ‘user testing’ (directed task steps rather than complete tasks, with facilitation that is often closer to instruction, and often focused on whether a user’s AT can access and interact with an interface, rather than uncovering real user behaviours).

    Jared, I agree, we’d have better accessibility guidelines if we had some real data to work with.

  12. Ramón Corominas

    Hi, Jared, Alastair.

    The “alt” vs. “figcaption” thing is far more complex than just considering whether the figcaption is a good alternative or not. Suppressing the image information is not just suppressing its text; it is also eliminating the user’s awareness of the presence of an image and, even worse, eliminating the possibility of interacting with the image. Thus, the screen reader user will not be able to, at least:

    1) Download the image or save it to the camera roll (this is something that I do very often on my iPhone, being a low vision user, and some of my totally blind colleagues also do).
    2) View the image in a separate window/tab to zoom in

    If you consider this as “loss aversion”, maybe it is in certain cases, but not in others. For example, in a photo gallery with captions you are limiting the very real possibility that the user may want to download images for further sharing.

    Even in the case of your pic… Suppose that I am attending an event that you will also attend, and I want to meet you during the coffee break. Maybe the easiest way is to send your pic to my sighted colleague who will accompany me at that event, and not just say “look for Jared Smith”. The loss is not just the text; it is the image data itself.

    In any case, you can test this “loss aversion” very easily. Just suppress the image. Completely. If you consider that the text is a good alternative, suppressing the image will not make any difference for sighted users, will it? Or is that “loss aversion”, too?

  13. Jeremiah Rogers

    I think users’ preference for information that is concise (not repetitive) may be as much because blind users have had such a minimalist web for so long. I’ve been totally blind since birth, and have been a web surfer since the mid 90s. Only recently have I begun to understand just how busy the web is, and for some time has been, for sighted users. Since my web has always been either concise or inaccessible, I’ve only a slowly emerging clue just how much noise I might’ve been without. So while the point about loss aversion is very valid, I can’t help but think that user preference might be based as much in a lack of knowledge about the truly noisy web as in a preference for clean accessibility.

  14. Mike Elledge

    Thanks, Jared, for raising this issue. To me this is a question that we usability folks encounter often, which is, how much information is needed by our users to successfully complete a task. Generally, the objective is to provide the minimum needed so that the page remains uncluttered. The amount of information needed, however, will vary according to the experience the person has with the Internet in general, and the site in particular. Some people will have never visited the site, others may use it everyday. How, then, to accommodate the varying levels of experience? In our sites we provide additional information through “i” links, that can be clicked on or tabbed past. The benefit of this approach is that the user is able to obtain additional information when they need it, and ignore it when they don’t. I wonder if a similar approach was discussed for HTML5 but then discarded…