The Washington Post published an interesting article yesterday about teaching computers to recognize images. Computers cannot currently determine whether an image of a cat is actually a cat, or whether it is a dog, a human, or a telephone booth. The new technology described in the article teaches computers to recognize images more accurately. It learns over time by having humans describe images in an online matching-style game – millions of images have been identified this way, and more continue to be, so the system can gradually learn to recognize elements of images and better determine what an image contains. Google is also getting on board with such technologies.
Because everyday computers cannot currently do this type of processing, developers instead provide alternative text for all non-text elements. These upcoming technologies could describe images without the need for developers to provide the alternative in text form. However, a computer will likely never be able to determine the content an image is meant to convey. Yes, it might be able to determine that a picture of a cat is a cat, but maybe "cat" is not what the page author is trying to convey. I can imagine lots of descriptions of "blue right arrow" when "next" is the real content.
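The "blue right arrow" problem can be illustrated with a minimal HTML sketch (the file names here are hypothetical):

```html
<!-- Describes the image's appearance; a machine-generated description
     might plausibly produce something like this -->
<a href="page2.html"><img src="arrow.png" alt="Blue right arrow"></a>

<!-- Conveys the content the image represents in this context,
     which only the page author can know -->
<a href="page2.html"><img src="arrow.png" alt="Next"></a>
```

Both versions are valid markup; the difference is entirely in whether the `alt` value reflects the image's function on the page rather than its appearance.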
As I’ve noted before, alternative text is about the content being conveyed, and it should rarely be a description of the image. Unfortunately, the web is full of images whose alternative text describes the image rather than conveying its content. While this exciting technology may provide great advancements for images that have no alternative text defined at all, I hope it does not somehow become an excuse for developers not to provide equivalent alternative text for all images.