WebAIM - Web Accessibility In Mind

The WebAIM Strategic Accessibility Framework
Indicator 4: Assessment and Continuous Improvement

Introduction to Indicator 4

Ongoing assessment is necessary to ensure that the organization's accessibility implementation plan is effective and on target. Processes must be in place to measure elements of the plan, such as communication or training. Assessment must also be in place to evaluate overall progress, stakeholder satisfaction, and outcomes. Assessment information can be used to determine the sustainability of current efforts and identify areas for improvement in the overall program.

Benchmark 1: Evaluation of Implementation

Organizations should take measures to ensure that the plan is implemented as intended. This includes evaluating the plan, its scope, communications, training, support of staff, and timelines. Progress is monitored and evaluated to determine if implementation is on track. Then, any necessary adjustments to the plan are identified, made, and communicated.

Note on each Benchmark's statements of evidence:

These statements of evidence are examples; not all are required to reach a benchmark. There are many ways to align these benchmarks with an organization's culture. Understanding the general idea of each benchmark will help determine whether it is present.

1. Collection and analysis of data or information about an organization's progress with implementation

Ways to determine if this is present
  • The organization collects information about progress on implementing the accessibility plan.
  • Data collection and analysis are conducted on different components of the plan.
    • Scope
    • Benchmarking
    • Communication
    • Budget
    • Personnel
    • Training and Support
    • Timelines and Metrics
    • Outcomes
  • There is documentation that the organization evaluates progress to determine if implementation is occurring at predicted levels.
    • Is this evaluation used to identify problems in implementation?
  • There is evidence that the organization uses issues found in evaluation to adjust and improve the plan.
  • There is evidence that mechanisms are in place to consistently communicate findings and changes to the affected stakeholders.

2. Formal reports on the progress of the intended implementation plan

Ways to determine if this is present
  • The organization creates formal reports on implementation progress.
  • Reports review different components of the plan.
    • Scope
    • Benchmarking
    • Communication
    • Budget
    • Personnel
    • Training and Support
    • Timelines and Metrics
    • Outcomes
  • Reports include information from diverse sources representing a range of different viewpoints.
  • Reports provide insight into the organization-wide process that may not be apparent when components are reviewed in isolation.
  • Reports communicate a useful picture of current progress.
  • Reports provide information on any implementation issues found and actions taken.
  • Reports are used to make adjustments to the accessibility implementation plan.
  • Reports discuss changes or edits made to the implementation plan.
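
As a loose illustration (not part of the framework itself), a formal progress report might aggregate per-component status into a summary that flags lagging areas. The components, figures, and "needs attention" threshold below are all hypothetical:

```python
from datetime import date

# Hypothetical component status collected from plan owners:
# each entry is (component, percent_complete, open_issues).
status = [
    ("Scope", 100, 0),
    ("Communication", 80, 2),
    ("Training and Support", 55, 5),
    ("Timelines and Metrics", 70, 1),
]

def progress_report(status, as_of=None):
    """Render a plain-text summary that flags components which lag
    behind (an arbitrary threshold chosen for illustration)."""
    as_of = as_of or date.today().isoformat()
    lines = [f"Accessibility implementation progress ({as_of})"]
    for component, pct, issues in status:
        flag = " <- needs attention" if pct < 60 or issues > 3 else ""
        lines.append(f"  {component}: {pct}% complete, {issues} open issue(s){flag}")
    return "\n".join(lines)

print(progress_report(status, as_of="2024-06-30"))
```

The real value of such a report is in the discussion around it; the rollup simply makes it easy to see, across components, where the plan is off track.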

3. Informal summaries or communications on the progress of the implementation plan

Ways to determine if this is present
  • There is evidence that mechanisms are in place to collect and track informal information on plan progress.
    • Emails
    • Updates
    • Unofficial reviews
    • Feedback appraisals
    • Accessibility checks
  • There is documentation that informal information is used to identify potential issues or problems.
    • There is evidence that actions are taken to alleviate or resolve problems before they become critical.

Benchmark 2: Evaluation of Functional Web and Digital Accessibility Outcomes

A plan or policy is only useful if it achieves the intended outcome. Those responsible for improving functional web and digital accessibility should regularly evaluate its status to ensure that products meet the organization's technical standard. Automated accessibility evaluation tools are valuable, but must not be the only instrument used to evaluate accessibility. Manual evaluation in conjunction with automated tools will provide a far more complete picture. As technology and standards evolve over time, the organization should assess whether the original outcome is still sufficient or if updates are needed to align with current practices and standards.
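
To illustrate the limits of automation, the sketch below (using only the Python standard library, not any particular evaluation product) flags `<img>` elements with no `alt` attribute, the kind of mechanical check automated tools handle well. Whether existing alt text is accurate or meaningful still requires manual review, which is why automated results alone give an incomplete picture:

```python
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    """Collects img elements that have no alt attribute at all.
    An automated check like this cannot judge whether the alt text
    that *is* present actually describes the image."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing_alt.append(attrs.get("src", "(no src)"))

checker = AltTextCheck()
checker.feed('<img src="logo.png" alt="WebAIM logo">'
             '<img src="chart.png">')
print(checker.missing_alt)  # prints ['chart.png']
```

Note that the first image passes the automated check even if its alt text were wrong or unhelpful; a human evaluator is needed to catch that.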

1. Collection and analysis of organizational web accessibility data

Ways to determine if this is present
  • There is documentation that evaluations are scheduled to ensure that accessibility outcomes meet expected levels for:
    • The organization's policy
    • The technical standard
    • Plan milestones
  • There is evidence of a reasonable cycle and timeline for assessments.
    • Is formative data collected?
    • Is summative data collected?
    • Is data collection ongoing?
  • There is documentation that personnel are specifically assigned to oversee or conduct these assessments.
  • There is comprehensive documentation on how web and digital accessibility is evaluated for the organization's digital content. This may include:
    • Automated tools used
    • Manual testing techniques applied
    • Assistive technologies used in evaluation
    • Evaluation checklists or process documentation
    • Who is responsible for performing evaluations
      • Do they have appropriate qualifications or certifications?
  • A reasonable sample is evaluated. (Note that the sample may contain web pages as well as content such as Microsoft Word, Microsoft PowerPoint, and Adobe PDF documents.)
    • What percentage of pages on the organization's website are evaluated?
    • Are enough pages sampled to provide an accurate picture of the organization's website?
    • Are the sampled pages representative?
      • Are different parts of the organization's website included in the sample?
      • Are all page types, including formats like PDF or Microsoft Office and third-party generated pages, specified in the policy and plan included in the evaluation schedule?
      • Does the sample include examples of each template used in the site?
      • Does the sample include examples of different interactions available to site visitors, such as event calendars or date pickers?
    • How is sample content chosen?
      • Is the sample randomly selected?
      • Is the sample selection based on other criteria?
        • Page visits?
        • Importance to the intended audience?
  • Content that is found to be accessible continues to be checked over time to ensure that it remains accessible.
  • There is evidence that evaluation strategies are reviewed as technology and standards change over time.
    • Are changes made to the strategy to ensure that outcome collection is in line with current standards and practices?
  • There is documentation that the results of evaluations are disseminated.
    • Are the results disseminated to important stakeholders? (e.g., leadership, the organization's web and digital accessibility committee, those who must make content accessible, and those with disabilities)
    • How widely are the results disseminated?
  • There is evidence that the results are used in meaningful and productive ways.
    • To make adjustments to the plan.
    • To identify areas/personnel requiring additional assistance or who may serve as support for others.
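
The sampling questions above can be operationalized in many ways. As one hypothetical sketch (all page data and parameters invented for illustration), the function below guarantees at least one page per template, so every template is represented, then adds further pages weighted by traffic:

```python
import random
from collections import defaultdict

# Hypothetical page inventory: (url, template, monthly_visits).
pages = [
    ("/", "home", 50000),
    ("/news/a", "article", 1200),
    ("/news/b", "article", 900),
    ("/events", "calendar", 3000),
    ("/report.pdf", "pdf", 400),
]

def representative_sample(pages, per_template=1, extra_by_visits=2, seed=0):
    """Pick at least one page per template, then add remaining pages
    chosen randomly with probability proportional to visits."""
    rng = random.Random(seed)
    by_template = defaultdict(list)
    for page in pages:
        by_template[page[1]].append(page)
    sample = set()
    for group in by_template.values():
        sample.update(rng.sample(group, min(per_template, len(group))))
    remaining = [p for p in pages if p not in sample]
    target = len(by_template) * per_template + extra_by_visits
    while remaining and len(sample) < target:
        pick = rng.choices(remaining, weights=[p[2] for p in remaining])[0]
        sample.add(pick)
        remaining.remove(pick)
    return sorted(sample)

for url, template, visits in representative_sample(pages):
    print(url, template)
```

Stratifying by template before weighting by visits reflects the questions above: purely random selection can miss rarely used templates, while purely traffic-based selection over-represents the home page and other high-traffic areas.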

2. The organization develops reports containing web and digital accessibility data or summaries

Ways to determine if this is present
  • The organization creates reports or summaries on evaluation data.
  • Reports include information about evaluations across all affected parts of the organization's digital properties.
  • Reports provide insight into the organization-wide process that may not be apparent when components are reviewed in isolation.
  • Reports communicate a useful and understandable picture of current progress.
  • Reports include relevant third-party or external web pages not under the organization's domain.
  • Reports are disseminated to relevant stakeholders.
  • Reports provide information on any implementation issues found and actions taken.
  • Reports discuss changes or edits made to the current plan.
  • Results are used in meaningful and productive ways.
    • To make adjustments to the plan
    • To identify specific topics to emphasize and prioritize in training and support resources
    • To identify areas/personnel requiring additional assistance or who may serve as support for others

3. If external evaluation is conducted, the results are utilized

Ways to determine if this is present
  • There is documentation of accessibility audits performed by external reviewers.
  • There is documentation about who conducted these outside evaluations.
    • Peer organizations
    • Digital accessibility groups
    • Web standards specialists
  • There is evidence that the results of the external reviews are in line with internal data collection.
    • If not, are internal collection strategies reviewed and modified as necessary?
  • Results are included in organizational reports.
  • Reports are disseminated to relevant stakeholders.
  • There is evidence that the results are used in meaningful and productive ways.
    • To make adjustments to the plan
    • To identify specific topics to emphasize and prioritize in training and support resources
    • To identify areas/personnel requiring additional assistance or who may serve as support for others

4. Correspondence describing accessibility outcomes is collected and used

Ways to determine if this is present
  • There is documentation of mechanisms used to track correspondence between administrators, key personnel, and stakeholders regarding accessibility.
  • There is evidence that this correspondence is used:
    • To help monitor progress of the accessibility implementation plan
    • To identify potential issues or problems
      • Are actions taken to alleviate or resolve problems before they become critical?
    • To identify areas/personnel requiring additional assistance or who may serve as support for others

Benchmark 3: Evaluation Results Are Used To Improve Accessibility

Evaluation data on web accessibility are valuable only if they improve accessibility in digital products. Those tasked by the organization with improving accessibility should use ongoing oversight and continually review data sources to revise procedures, ensuring the organization can create and maintain functional web and digital accessibility. These same data can inform future policy changes.

1. The development and use of reports that reflect data-based recommendations for change

Ways to determine if this is present
  • Documents are available that recommend changes or actions based on evaluations and data collected.
    • Note: These documents can be recorded in a range of formats including reports, meeting minutes, or correspondence.
  • There is documentation that recommendations come from a variety of sources.
    • Formal Reports
    • Informal Reports
    • Accessibility Audits
    • Outside Evaluations
    • Communications from key personnel
  • There is evidence that a range of key personnel are involved in making recommendations.
  • There are indications that recommended changes or actions target the appropriate areas.
    • Policy
    • Plan Components
      • Scope
      • Benchmarking
      • Communications
      • Budget
      • Personnel
      • Training and Support
      • Timelines and Metrics
      • Outcomes
      • Assessments
    • Processes
  • Recommended changes or actions are prioritized or provided with suggested timelines.
  • There is evidence that data are reviewed on an ongoing schedule and new recommendation reports are developed as necessary.

2. Documentation that describes how data sources inform the organization's accessibility efforts

Ways to determine if this is present
  • If the organization is in a phase before data collection has begun or is between collection cycles, there is documentation on how data sources will inform efforts once data is collected.
    • Are priority issues specified?
  • Targets for improvement are included for data collection. Examples may include:
    • After a year of training and supporting staff to produce accessible Word documents, Word documents are included as a focus of data collection and review.
    • After an awareness campaign runs for six months, a survey of staff is conducted to determine if their level of awareness about their responsibility for accessibility has grown.
    • After targeting placement of accessibility skills into relevant job roles and role descriptions, a sample of job posts and roles is reviewed to determine if accessibility is included.
    • After working to get accessibility work into relevant employee performance evaluations, staff who should have this included in evaluations are randomly selected to determine if it was present.
  • Contingencies for severe issues are considered. Examples may include:
    • Replacing a tool or platform with poor accessibility
    • Contacting a third-party vendor to raise critical issues
    • Proactive accommodation planning
  • There is evidence that mechanisms are in place that can help serve as early warning indicators for critical aspects of the plan in advance of an assessment cycle.
    • Is there documentation that information from these mechanisms is being tracked and used to prevent or mitigate potential problems?
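
One possible early-warning mechanism (a hypothetical sketch; the counts and thresholds are invented) is to track new accessibility issues arriving through informal channels and flag weeks that spike well above the recent baseline, so problems surface before the next formal assessment cycle:

```python
# Hypothetical weekly counts of new accessibility issues reported
# through informal channels (helpdesk tickets, feedback forms).
weekly_new_issues = [2, 3, 2, 9, 11]

def early_warning(counts, window=3, factor=2.0):
    """Return indices of weeks where new issues exceed `factor`
    times the average of the preceding `window` weeks."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if counts[i] > factor * baseline:
            flagged.append(i)
    return flagged

print(early_warning(weekly_new_issues))  # prints [3, 4]
```

A flagged week is only a signal to investigate; the benchmark's point is that such signals are tracked and acted on, not the specific statistic used.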