Cross-posted from Accessites for comments:
Accessibility has as much to do with usability as with being purely technically correct. The site needs to have clear navigation, the ability to skip content areas, offer alternative layouts and be written in a style easily understood by the anticipated audience. Can an expensive evaluation tool be justified, and are site-wide checks using such a tool actually required post-production (rather than just for a “feel good” factor of control over the situation)?
Scope
To assess and discuss the benefits and limitations of using an automated evaluation tool to assess the technical accessibility of a standards-compliant website.
I’ve broken this research into several areas:
The Usefulness of Automated Tools
Perhaps one of the quickest ways to get a feel for the accessibility of a website is to run it through an automated evaluation tool. There are many such products available, all with their strengths and weaknesses. Some are free, such as TAW 3, and some are expensive. Typically, once I’ve completed a web document I will use Chris Pederick’s Web Developer Toolbar for Firefox, selecting the tools option and firing off the “Validate CSS,” “Validate HTML” and “Validate WAI” options. I also do this when checking submissions to Accessites against the base Criteria. Any problems and I stop. If the tools report okay, then I carry on checking the integrity of the website without CSS and/or images and go through the source code making sure, for example, that the `for` attributes on your labels match up to the correct `input` `id`.
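That manual source-code check is itself partly mechanisable. Below is a minimal sketch, using only Python’s standard-library `html.parser`, of matching `label for` attributes against `input id` values; it is an illustration of the idea, not any real evaluation tool, and the sample markup is invented.

```python
# Sketch: find <label for="..."> values with no matching <input id="...">.
# Illustrative only -- not how any particular evaluation tool works.
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.label_fors = []    # values of for= on <label>
        self.input_ids = set()  # values of id= on <input>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.label_fors.append(attrs["for"])
        elif tag == "input" and "id" in attrs:
            self.input_ids.add(attrs["id"])

def orphan_labels(markup):
    """Return for= values that point at a non-existent input id."""
    checker = LabelChecker()
    checker.feed(markup)
    return [f for f in checker.label_fors if f not in checker.input_ids]

markup = """
<label for="email">Email</label><input type="text" id="email">
<label for="phone">Phone</label><input type="text" id="tel">
"""
print(orphan_labels(markup))  # → ['phone']
```

Note that even this catches only broken references; whether a matching label is the *right* label for its control still needs a human eye.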
Some tools evaluate a single page (such as the “Validate WAI” option above) whilst others, like TAW 3, can crawl through an entire site. I really like TAW 3 and recommend it to content authors. The test configuration can be saved, so, for example, I can set this up during user training and all the user then needs to do is press a button to start the assessment. Where this product wins for me, though, is that it helps to educate the user by highlighting which checkpoints require manual checking. Due diligence is essential.
With all that said, though, these tools can only test in ones and zeroes: black and white, yes or no. Many of the guidelines need to be reviewed in the context of their use, and that can only be done by a trained human.
Limitations of Automated Tools
There are 65 checkpoints in WCAG 1.0 (16 priority 1 checkpoints, 30 priority 2s and 19 priority 3s).
Automated tools can wholly test the following checkpoints
- Priority 2
- 3.2 – Create documents that validate to published formal grammars.
- 11.2 – Avoid deprecated features of W3C technologies.
- Priority 3
- 4.3 – Identify the primary natural language of a document.
A number of the online parsers tend to stop checking for checkpoint 11.2 as soon as they hit the first failure. So if you have, for example, `align="right"` on a `div` high up in the markup and a `border` attribute on an `img` element lower down, only the `align` will be highlighted as a failure. The document will require a second pass through the parser, once the first issue has been corrected, before the second failure is identified. If you’re using a transitional `DOCTYPE` it is possible to pass validation yet still fail 11.2 by using deprecated markup: yet another reason to use a strict `DOCTYPE`.
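A single-pass scan, by contrast, can report every deprecated feature at once. The sketch below illustrates this with a deliberately tiny subset of HTML 4.01’s deprecated elements and attributes; a real checkpoint 11.2 test would need the full deprecation list and DOCTYPE awareness.

```python
# Sketch: one-pass scan for a few deprecated presentational features,
# reporting every hit rather than stopping at the first failure.
# The DEPRECATED set is a small illustrative subset, not the full
# HTML 4.01 deprecation list.
from html.parser import HTMLParser

DEPRECATED = {"align", "border", "bgcolor", "font"}

class DeprecationScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag in DEPRECATED:            # deprecated element, e.g. <font>
            self.hits.append(tag)
        for name, _ in attrs:
            if name in DEPRECATED:       # deprecated attribute, e.g. align=
                self.hits.append(f"{tag}@{name}")

def scan(markup):
    s = DeprecationScanner()
    s.feed(markup)
    return s.hits

print(scan('<div align="right"><img src="x.gif" border="0" alt=""></div>'))
# → ['div@align', 'img@border']
```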
Automated tools can partially test the following checkpoints
- Priority 1
- 1.1 – Provide a text equivalent for every non-text element (e.g., via `alt`, `longdesc`, or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ASCII art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video.
- 6.3 – Ensure that pages are usable when scripts, applets, or other programmatic objects are turned off or not supported. If this is not possible, provide equivalent information on an alternative accessible page.
- 9.1 – Provide client-side image maps instead of server-side image maps except where the regions cannot be defined with an available geometric shape.
- Priority 2
- 3.4 – Use relative rather than absolute units in markup language attribute values and style sheet property values.
- 6.4 – For scripts and applets, ensure that event handlers are input device-independent.
- 7.4 – Until user agents provide the ability to stop the refresh, do not create periodically auto-refreshing pages.
- 7.5 – Until user agents provide the ability to stop auto-redirect, do not use markup to redirect pages automatically. Instead, configure the server to perform redirects.
- 9.3 – For scripts, specify logical event handlers rather than device-dependent event handlers.
- 12.4 – Associate labels explicitly with their controls. Sure, they can detect the presence of the `for` and `id` attributes in the `label` and `input` tags, but it will take a human to check you’ve associated the right labels correctly.
- 13.1 – Clearly identify the target of each link.
Of these programmatic tests, the following checkpoints fall into a web content author’s space: 1.1, 3.2 and 11.2. The remaining checkpoints apply to web developers only and fall into three main categories:
- The templates.
- The cascading style sheets.
- The functionality and interaction of the website (JavaScript, PHP, ASP, image maps, etc.).
Quality control at the bench
Checkpoints 1.1 and 3.2 can be guarded against with user training and a properly configured, standards-compliant text editor. Additionally, the editor’s configuration can disallow the use of deprecated elements (`font`, `u`, `marquee`, etc.) and so satisfy checkpoint 11.2. The final check before publishing the page to the live server should then be a quick trip to the W3C markup validator, which neatly sidesteps all but one of the checkpoints an accessibility tool can wholly test for… someone remind me why our non-specialist managers insist on buying these tools!
For checkpoint 13.1, automated tools can check whether link text is repeated for links to different pages (e.g. “click here”), or whether the same page is linked to by different text. Again, compliance with this checkpoint can be achieved through user training.
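That heuristic is simple enough to sketch. The snippet below, an illustration rather than any shipping tool, flags link text that points at more than one destination; deciding whether the remaining link text is actually meaningful still falls to a human.

```python
# Sketch of the 13.1 heuristic: flag link text reused for different
# destinations (e.g. several "click here" links). Illustrative only.
from html.parser import HTMLParser
from collections import defaultdict

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = defaultdict(set)  # link text -> set of hrefs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links["".join(self._text).strip()].add(self._href)
            self._href = None

def ambiguous_links(markup):
    """Return link texts that point at more than one destination."""
    c = LinkCollector()
    c.feed(markup)
    return {text for text, hrefs in c.links.items() if len(hrefs) > 1}

markup = '<a href="/a">click here</a> and <a href="/b">click here</a>'
print(ambiguous_links(markup))  # → {'click here'}
```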
Before a page is promoted from pre-production, the reviewing editor must ensure that the markup is valid using the free W3C online validator. Additionally, the reviewing editor should check for appropriate structure (semantic HTML and Priority 2 checkpoints 3.5, 3.6 and 3.7).
Templates and CSS files should be validated in pre-production after each iteration. This is easily done using the free W3C online validators. Once the website is live, testing against checkpoint 3.2 is required after any change to a template or CSS file.
The processes above need to be backed up by user training and an enforceable accessibility policy that lays out requirements and responsibilities. If you believe that automated assessment tools will bring peace of mind, remember their limitations and plan for them. Personally, I don’t feel the need to pay for them.
Excellent article, Karl.
Me neither, and for 99% of developers the same should be true. If you’ve got money to spend on testing, spend it on pan-disability user testing and download TAW3 instead. You’ll get far more value for your spondulicks.
Thanks for the article – will add it to my ‘problems with tools’ list. The one that gripes me lately is people who pass “Use relative rather than absolute units” but set their size to 70% or even 60%, which is totally unreadable for many, and hard for most of us to read!
Good article. I am also a big fan of using TAW3 as a first port of call.
Andrew: Re your comments about text size. You are right about people making text difficult to read. There is a recognised technique of first levelling absolute font sizes across browsers by setting the BODY font size to 76%. That should then be followed by declaring the font size of each required element relatively, in ems (i.e. 1.2em, 1.5em, etc.), rendering them reasonable if appropriate values are set.
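In a stylesheet, the technique described above might look something like this (the element selectors and em values are illustrative, not a prescription):

```css
/* Level out browser default sizes first. */
body { font-size: 76%; }

/* Then size everything else relatively, in ems. */
h1 { font-size: 1.5em; }
h2 { font-size: 1.2em; }
p  { font-size: 1em; }
```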
Pingback: 456 Berea Street
Besides the Web Developer Toolbar for Firefox, there is also the Web Accessibility Toolbar for Opera, and it is due for Firefox at an early date too.
BTW, you should specify the expansion of the abbreviation W3C in this article.
Thanks for recommending TAW. It looks interesting, and the price is difficult to argue with.
I think there’s a gap in the market for a really good tool that makes accessibility testing easier, quicker and less error-prone. As you state, most accessibility guidelines can’t be verified by machines, so “validation” isn’t possible (hence I think the option in the web developer toolbar needs renaming). But a tool that made it easier and quicker to test accessibility and help reduce human errors in the process, would reduce cost and improve quality.
I think there’s a lot of scope for such a tool, and I think it would be a shame is people saw accessibility testing tools as a lost cause. Simply because humans have to evaluate a guideline doesn’t mean a tool couldn’t make evaluation easier, quicker and more reliable.
@Wojciech Bednarski: Any mark-up errors of that sort would be my fault, so you can blame me, but in my defense I will add that, as a practice, I commonly mark up and expand only the first instance of an abbreviation per page, section or article. This is also common in the print world. My reasoning is that to do otherwise is unnecessarily verbose and really not needed. While it’s true that web content can be taken out of context, so care must be taken, typically an article such as this would be delivered to all users in its full form.
–Mike
Good article. For me, standards can only take you so far. It is good quality control to go through validators and follow WCAG guidelines even though they may be flawed. WCAG 2.0 is more subjective than WCAG 1.0, so there is no substitute for watching real users use the site, both able-bodied and disabled. Standards have got responsible developers to a good position, but we mustn’t become obsessed with minutiae. Testing sites with real people is, in my opinion, the answer to how usable and accessible a site is.
You’re right about TAW, it is an excellent tool, and due to be updated soon according to the developers, although when I have contacted them previously the deadline for an update has always been ‘shortly’, so it may not be… I guess we can’t complain too much, since it is free and far more useful than any other paid-for or free tool I have found.
I think that automatic accessibility evaluation tools are very useful. If you know the WCAG, they can detect errors that you missed in a manual review. And if you don’t know the WCAG, they are great for learning.
But I believe that the best evaluation tools are those that mix manual and automatic review, for example HERA. Try it!
I have used TAW on a number of occasions, but really only to give an indication of the scale of problems that a web page/site may have. I generate free summary reports on request using this tool, but add in a technical tip and give caveats and recommendations to use manual testing. I certainly wouldn’t recommend paying for a tool at the moment. No doubt tools can become more intelligent and be able to make better judgments about things like whether the alt text for an image is appropriate or not, but I still wouldn’t trust them for the next 100 years, by which time it will hopefully not be possible to create an inaccessible website!
http://www.accessibleweb.eu/