
Re: [EXTERNAL] Graphical element instead of the role link presence. How should I report it?

From: Mark Magennis
Date: Mar 4, 2021 5:33AM


Thanks for injecting this, Steve. I think it's very useful to discuss approaches to testing, and it does seem relevant to this discussion.

Despite having said the polar opposite, I understand your point and yes, if starting with an automated tool negatively affects the quality of subsequent manual testing, then that is a problem. I haven't observed this in practice, but that may be because I haven't studied it closely enough.

However, for most of those in my company who do accessibility testing, I think starting with aXe has a lot of benefits. All members of the development teams - UX, Engineering, and QA - are expected to cover accessibility, but in practice this is only to the extent that they are able. We use automated testing as the minimum and expect/hope some degree of manual testing will follow on from there, and that's where we run into limitations.

Engineers don't have time to do much testing, but we ask them to at least run aXe and fix Violations before handing over to QA. We also ask them to try the functionality with the keyboard only and check what the screen reader reads, though I know a lot of them don't do that. Even if they do, we run into the problem that they don't really understand how it *should* work with the keyboard and what a screen reader *should* read, so their manual testing is not always effective.

QA are expected to do much more in-depth testing, as is normal in their role, and hopefully develop accessibility knowledge over time. We still recommend they start with aXe, if only to check that there aren't any Violations that Engineers should already have found and fixed. But with manual testing, QA run into the same problems as Engineers - not having sufficient understanding of how something should work for it to be considered accessible. They have a process and a lot of supports (a defined testing process, a11y user acceptance criteria, learning resources, etc.), but they often still have insufficient time to learn and put their learning into practice. If they were able to use all of those resources and do the full manual QA check to a high degree of expertise, there would be little need for them to use an automated tool. But in practice, some QA Engineers are entirely new to accessibility, most have very little time to devote to a11y QA (let's face it, the overhead is enormous), some squads or product owners are just not sufficiently bought in to allocate the time, and some QA staff just don't get it or don't have the level of interest for it to spark their curiosity and a wish to learn more. Very few have ever sat down with real users with disabilities or observed user testing, for example, so their understanding of what we are trying to achieve is often very limited.

We do use offshore QA who have a lot more a11y experience and can follow the process much deeper, but they are still limited in their understanding. The specialists in the Accessibility Office are the ultimate testing and knowledge resource, but we are painfully few and very stretched.
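For concreteness, the "run aXe as the minimum before handover" step can also be wired into an automated test run rather than left to someone remembering to open the browser extension. The sketch below assumes a Playwright setup with Deque's @axe-core/playwright package; the URL and test name are placeholders, not our actual configuration.

import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Scan the page with axe and fail the test if any Violations are reported.
test('page has no axe Violations', async ({ page }) => {
  await page.goto('https://example.com/some-page'); // placeholder URL for the page under test
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]); // an empty array means axe found no Violations
});

The advantage of a check like this is that it runs on every build, so the automated baseline doesn't depend on individual Engineers running the tool by hand.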

Given this, we find that getting the dev teams to at least use the automated tool and think about what it is telling them is a very good start, and often as far as we will get. But it does give a good platform for training and mentoring discussions. The Accessibility Office runs regular a11y workshops for UX, Engineering, and QA, and the teams often come with questions along the lines of "aXe is saying this but I don't fully understand it". I find that's a really good starting point for conversations that help to upskill them.

The Accessibility Office can do a complete manual audit without needing automated tools, but as you say, it's useful as a check to see if anything was missed. However, I personally always start by running aXe and seeing what it says. I don't really know why I start with this. I didn't use to, but I've grown to like it. Maybe it just feels like it gets me going easily. I don't feel it affects the manual testing I do, and yes, it does misdiagnose a fair amount, but I can understand that. So for specialists maybe it doesn't matter, but for people strapped for time or with little knowledge, I feel it can be useful to start with automated tools.

I think I'll pay more attention in future, though, to whether it might hamper subsequent manual testing, because I can see your point.

Best,
Mark