Thread Subject: Re: Sharing Market Research / Testing Results
This archival content is maintained by WebAIM and NCDAE on behalf of TEITAC and the U.S. Access Board. Additional details on the updates to Section 508 and Section 255 can be found at the Access Board web site.
From: Jim Tobias
Date: Fri, Jun 01 2007 6:40 AM
- Next message in thread: None
- Previous message in thread: Baker, Robert C.: "Sharing Market Research / Testing Results"
Thanks, Robert, for your response. I think your point about improving
testability by progressively standardizing "testing methodologies and
reporting formats" is a wise one. I see this as a process of developing
a "community of practice". These communities are rarely mandated into
existence; they are the result of years of sharing approaches and results.
> -----Original Message-----
> From: Baker, Robert C. [mailto: = EMAIL ADDRESS REMOVED = ]
> Sent: Friday, June 01, 2007 8:01 AM
> To: = EMAIL ADDRESS REMOVED =
> Subject: [teitac-subparta] Sharing Market Research / Testing Results
> Jim Tobias wrote:
> 2. Regarding the economic impact of such activities, I would
> argue that they would result in a net savings to the government.
> Can't we assume that some market research and especially
> testing is duplicated across agencies, of identical products
> and services?
> Publishing these results should cost almost nothing; I'm not
> talking about establishing some Central Depository somewhere,
> but creating an online resource -- an extension to
> BuyAccessible, perhaps -- where suitably authorized
> procurement and 508 personnel could post and examine these reports.
> Robert Baker's Response:
> In order for this suggestion to be properly implemented, each
> agency would need to adopt a consistent testing methodology
> and reporting
> format. At SSA, we have a very extensive Section 508 compliance
> testing methodology, perhaps one of the most extensive in the
> federal government. For those agencies that do perform
> independent compliance validation testing (and there are not
> many that do), the depth and
> breadth of testing varies extensively. To a third party
> reviewing test
> results, this could be very confusing. A consistent testing
> methodology and reporting format would support
> apples-to-apples comparisons, and assist the third party with
> determining how reliable those conclusions are.
> I would also like to add that the way an agency determines
> how well a product meets the standards, and how well the
> product overall meets the standards, can be very subjective.
> Based on a review of the standards that were proposed last
> week, some improvement has been made towards testability, but
> in other cases we have moved away from testability (for
> example, the proposed change to 1194.21 (d) - the API
> standard - provides needed guidance to application developers
> but cannot be easily tested by a third party who does not
> have access to the code). In addition, the Functional
> Performance Criteria remain fully subjective, not to mention
> that we have no agreement on what "comparable access"
> means and how to evaluate it consistently. This level of
> subjectivity would need to be properly described in testing
> reports for third-party users to understand how an agency
> arrived at its testing conclusions.