WebAIM - Web Accessibility In Mind

E-mail List Archives

Thread: DHS Trusted Tester_Testing support/input

Number of posts in this thread: 5 (In chronological order)

From: Claire Forbes
Date: Thu, Aug 29 2024 8:46AM
Subject: DHS Trusted Tester_Testing support/input
No previous message | Next message →

Hello everyone, I'm hoping for some input ...

As a DHS Trusted Tester (completed in 2020), I test online courses from the end user's experience - I'm provided a sandbox link for testing content, which is where a course lives until it's reviewed and deployed as live. When a course is live, participants are given a web link to the course, or they are enrolled in it within a learning management system that lives on a website. I'm being criticized for testing these items as web content and told I should be testing them as software since they are developed in Captivate: "Captivate courses are not published as web products and therefore should be tested as 'Software.'"
... how is this a logical stance, or am I completely missing something here? It's quite possible, as I'm not a developer and have no experience with Captivate.

I'm also being called out - with the DHS Section 508 Compliance Test Process document referenced - for failing headings that ANDI does not recognize. There are headings on the page that are 100% "visually apparent," yet ANDI does not detect them and states, "No headings, lists, landmarks, or live regions were detected." To me, this means the headings aren't "programmatically identified," as they should be per section 9.2 Web: Section Headings, item A.
So I'm being told: "The ANDI tool is noting that there are no visually apparent headings on the page; it is a message stating a condition only. Note this message is not identifying a non-compliance issue; this message is not red, yellow, or orange, which it would be if it were a compliance issue. For Section 9.2, the table indicates that this Test ID DOES NOT APPLY (DNA) '... if there are no visually apparent headings on the page...'"
However, this is not at all how I've interpreted the DHS test step and failure conditions for "visually apparent" headings. Any thoughts?

Lastly, I'm being told that ANDI isn't the tool to be used: "It is not always reliable for eLearning content. We recommend you consider using additional tools for a comprehensive assessment."
But this is the tool the DHS Trusted Tester course recommends - so what tools is everyone using for web testing?

Thank you in advance,
Claire

From: Ryan E. Benson
Date: Thu, Aug 29 2024 9:05AM
Subject: Re: DHS Trusted Tester_Testing support/input
← Previous message | Next message →

Hi Claire

I had a version of this debate years ago, though rather than web vs.
software, it was about the applicability of accessibility altogether -
fun, fun times. If this is for a federal government agency, I would
suggest meeting with the Section 508 Program Manager, found at
https://www.section508.gov/tools/program-manager-listing/.

If you are testing the end product - a course - it should be tested as
a website because it uses a browser to work. Now, if the course is
somehow packaged up and installed on the computer, or played from a
CD, and doesn't use a browser, then I can buy the software argument.
Honestly, in terms of standards, WCAG is slightly easier to meet.

>failing headings not being recognized by ANDI
> Lastly, I'm being told that ANDI isn't the tool to be used, "It is not always reliable for eLearning content. We recommend you consider using additional tools for a comprehensive assessment."

In my experience, ANDI has issues with messy code. I'd recommend
reviewing the code and acting accordingly.
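
For example (a contrived illustration, not code from any real
course), markup like this will still render in a browser, but an
automated checker may misread the structure:

    <!-- unclosed heading tag: the browser error-corrects,
         but a tool may not see the heading boundary -->
    <h2>Module Overview
    <p>Welcome to the module.</p>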


--
Ryan E. Benson


From: Hayman, Douglass
Date: Thu, Aug 29 2024 9:07AM
Subject: Re: DHS Trusted Tester_Testing support/input
← Previous message | Next message →

Claire,

I believe that you're on the right path. As someone who also got the Trusted Tester cert in 2020 and tests both web sites and apps, I think that we can use WCAG as a structural reference point for examining the accessibility of a web site or an app.

Can one navigate through either using keyboard only?

When landing on interactive form fields while using a screen reader, is there adequate information for the user to know what the field's label is and what format they are expected to enter?

Does the app or web site provide information with color alone? Or does it fail to meet color contrast standards?
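
On the form-field question, as a minimal sketch (the field name and hint text are my own illustration, not from any particular course), markup that gives a screen reader user both the label and the expected format could look like this:

    <!-- The label is programmatically associated via for/id, and
         the expected format is announced via aria-describedby -->
    <label for="dob">Date of birth (MM/DD/YYYY)</label>
    <input id="dob" name="dob" type="text" aria-describedby="dob-hint">
    <span id="dob-hint">Example: 04/15/1990</span>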

Testing either an app or a web page we try to put ourselves in the position that a screen reader user, low vision user, speech recognition user and so on would be in.

Interacting with a web site, app, or PDF file using a screen reader like NVDA follows a similar process to the one we used in the Trusted Tester process. Some of that used the ANDI tool; some called for manual steps.

A colleague who is a full-time screen reader user on multiple platforms and operating systems reminded me that some apps are created with Electron. As I understand it (as a non-developer), you build it as a Chromium-based site/app and then export it for use in browsers or as a stand-alone app for iOS or Android.

A vendor I'm currently testing provided me access to a desktop version and an iOS app, and the product appears to have been built that way: it has similar accessibility failures in each, while some elements are only accessible in the desktop version.

Doug Hayman
IT Accessibility Coordinator
Information Technology
Olympic College
= EMAIL ADDRESS REMOVED =
(360) 475-7632

From: Jano Llorca Lis
Date: Thu, Aug 29 2024 9:18AM
Subject: Re: DHS Trusted Tester_Testing support/input
← Previous message | Next message →

Hi Claire,

Thank you for sharing your experience and concerns. I understand the doubts
that have arisen, and I’d like to offer some perspectives that might help
you address these points.

1. Testing Captivate Courses as Web vs. Software:

The confusion about whether to test these courses as "web products" or
"software" is quite common and depends on the environment in which
end-users access the content. Adobe Captivate courses, even if they are
developed as SCORM modules and integrated into an LMS, are typically
accessible through a web browser. This means that, in terms of user
experience and accessibility, they should be evaluated under web
accessibility guidelines (WCAG).

Some people might suggest testing them as "software" because of the
interactive and multimedia elements that can behave differently, but the
logical stance is that if the content is accessed through a browser, it
should be considered web content. You should clearly document this approach
and how web accessibility standards apply in this case.
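
As a purely illustrative sketch (the file and container names here
are hypothetical, not actual Captivate output), a published HTML5
course is ultimately just a web page that the browser loads:

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Course Title</title>
      <!-- hypothetical course runtime script -->
      <script src="assets/js/course-runtime.js"></script>
    </head>
    <body>
      <!-- the runtime renders slides into this container -->
      <div id="course-container"></div>
    </body>
    </html>

If it is delivered over the web and rendered by a browser, web
accessibility standards apply to what the browser produces.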

2. Headings Not Detected by ANDI:

When ANDI reports "No headings, lists, landmarks, or live regions were
detected," it generally means that while there may be visually styled text
that appears as a heading, it has not been correctly coded with the
appropriate HTML tags. Your interpretation is correct: Section 9.2 of the
DHS test process states that headings must be "programmatically
identified." If not, there is a compliance issue.

The ANDI message stating "no visually apparent headings" merely indicates a
condition, but if there are visible headings that are not properly coded,
this is an accessibility issue. It is important for all involved to
understand that headings need to be programmatically identified to be
accessible to screen readers and other assistive technologies.
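
As a minimal sketch (the class name and heading text are invented
for illustration), the difference looks like this in markup:

    <!-- Styled to look like a heading, but invisible to ANDI and
         to screen reader heading navigation -->
    <div class="heading-style">Module 2: Reporting Procedures</div>

    <!-- Programmatically identified heading -->
    <h2>Module 2: Reporting Procedures</h2>

    <!-- If the tag cannot be changed, ARIA can convey the role -->
    <div role="heading" aria-level="2">Module 2: Reporting Procedures</div>

Only the second and third versions are detected by ANDI or announced
as headings by a screen reader.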

3. Using ANDI for eLearning Content:

While it is true that ANDI is the recommended tool in the DHS Trusted
Tester course, it is best practice to use a set of tools for a more
comprehensive assessment, especially for interactive content like
eLearning. Tools like JAWS and NVDA (screen readers), AXE by Deque (a
browser extension providing accessibility checks), and WAVE by WebAIM are
excellent complements to ANDI for a more thorough analysis.

I recommend using a combination of these tools and manual testing methods,
such as Tab key navigation, to ensure a comprehensive evaluation of
eLearning content.
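
As one hedged sketch of combining an automated check with manual
review (the CDN URL and version are assumptions; please verify them
against Deque's documentation), a test page can load axe-core and
log any violations to the console:

    <script src="https://cdn.jsdelivr.net/npm/axe-core@4/axe.min.js"></script>
    <script>
      // Run axe against the whole document and log each violation's
      // rule id, impact level, and number of affected nodes
      axe.run(document).then(results => {
        results.violations.forEach(v =>
          console.log(v.id, v.impact, v.nodes.length + ' node(s)'));
      });
    </script>

Results like these still need the manual confirmation (keyboard and
screen reader testing) described above.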

I hope these ideas are helpful and assist you in clarifying and supporting
your testing practices.

Best regards,
*Jano Llorca*
SEO - SEM - Social Ads Consultant
UX - UI - Web Design and Accessibility

Tel: 673 346 726
= EMAIL ADDRESS REMOVED =

<https://ilumina-agencia-consultora-seo.business.site/>



From: Steve Green
Date: Thu, Aug 29 2024 9:26AM
Subject: Re: DHS Trusted Tester_Testing support/input
← Previous message | No next message

You have raised a lot of different issues, so here are my thoughts on some of them:



* As far as I can tell, Captivate courses are published as HTML5. I cannot see any justification for “testing them as software”, whatever that even means. Most software accessibility standards require pretty much the same as WCAG, except where the success criteria are not relevant to an application not written in HTML. Just putting a SCORM wrapper around a web-based learning module doesn’t magically turn it into something else.

* You are correct with regard to headings. Tools identify programmatic headings, not visual ones. It’s the auditor’s job to identify visually apparent headings and verify that they are also conveyed programmatically. I would do that by inspecting the source code rather than using ANDI, but that’s a different topic.

* I really, really don’t like ANDI and never use it. We use a huge array of other tools, predominantly single-purpose bookmarklets, but also the Deque Axe and ARC Toolkit browser extensions. For testing a whole website, we use SortSite.
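
As an illustration of what a single-purpose bookmarklet can look like (this one is a hypothetical example, not one of our actual tools), a link like the following, dragged to the bookmarks bar, outlines every programmatic heading on the page:

    <a href="javascript:document.querySelectorAll('h1,h2,h3,h4,h5,h6,[role=heading]').forEach(h=>h.style.outline='3px solid red');void 0;">Outline headings</a>

Anything that looks like a heading but gets no outline is not programmatically identified.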

However, we only use the tools to support our manual testing. The pass / fail decision must always be based on inspection of the source code and user interface. You never trust what a tool tells you.

* This last point feeds into the wider observation that the Trusted Tester methodology is fatally flawed. It was devised to avoid the inconsistent results that invariably occur when different people test the same website. The strict methodology, which mandates using ANDI, ensures high repeatability, but at the cost of accuracy. This is a poor trade-off in my opinion.

We can train skilled testers so their results become more consistent, but the Trusted Tester methodology will always be inaccurate. In fact, it will probably become more inaccurate as coding techniques change and the methodology is not adapted to them. FWIW, I would refuse to ever use the Trusted Tester methodology because it forces you to do bad work.

Steve Green
Managing Director
Test Partners Ltd