About the author

Cameron Cundiff is a software engineer and leader in accessibility and inclusive design. He builds AccessLint and is the co-founder and organizer of the New …


Automated testing is an important part of any software project, and that includes testing for accessibility. There are already tools for unit and integration testing of accessibility, but what about end-to-end testing with real assistive technology? Since I hadn’t seen one before, I set out to build Auto VO, a driver for the VoiceOver screen reader.

If you’re an accessibility nerd like me, or just curious about assistive technology, you’ll dig Auto VO. Auto VO is a Node module and CLI for automated testing of web content with the VoiceOver screen reader on macOS.

I created Auto VO to make it easier for developers, PMs, and QAs to perform fast, repeatable, automated tests with real assistive technology, without the intimidation factor of learning how to use a screen reader.

Let’s go!

Let’s see it in action first, and then I’ll go into more detail about how it works. Here’s the auto-vo CLI running against smashingmagazine.com, capturing all the VoiceOver output as text.

$ auto-vo --url https://smashingmagazine.com --limit 200 > output.txt
$ cat output.txt
link Jump to all topics
link Jump to list of all articles
link image Smashing Magazine
list 6 items
link Articles
link Guides 2 of 6
link Books 3 of 6
link Workshops 4 of 6
link Membership 5 of 6
More menu pop up collapsed button 6 of 6
end of list
end of navigation
...(truncated)

That looks like a pretty sound page structure: we have navigation links, well-structured lists, and semantic navigation. Great work! Let’s dig a little deeper, though. How about the heading structure?

$ cat output.txt | grep heading
heading level 2 link A Complete Guide To Accessibility Tooling
heading level 2 link Spinning Up Multiple WordPress Sites Locally With DevKinsta
heading level 2 link Smashing Podcast Episode 39 With Addy Osmani: Image Optimization
heading level 2 2 items A SMASHING GUIDE TO Accessible Front-End Components
heading level 2 2 items A SMASHING GUIDE TO CSS Generators & Tools
heading level 2 2 items A SMASHING GUIDE TO Front-End Performance 2021
heading level 4 LATEST POSTS
heading level 1 link When CSS Isn’t Enough: JavaScript Requirements For Accessible Components
heading level 1 link Web Design Done Well: Making Use Of Audio
heading level 1 link Useful Front-End Boilerplates And Starter Kits
heading level 1 link Three Front-End Auditing Tools I Discovered Recently
heading level 1 link Meet :has, A Native CSS Parent Selector (And More)
heading level 1 link From AVIF to WebP: A New Smashing Book By Addy Osmani

Hmm! Something’s a little funky with our heading hierarchy. We expect to see an outline: one level-one heading and an ordered hierarchy after it. Instead, we see a mishmash of level 1, level 2, and a stray level 4. This needs attention, since it affects the experience of screen reader users navigating by heading.

It’s great to have the screen reader output as text, because this kind of analysis becomes much easier.
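Because the announcements are plain text, even a few lines of Node can flag an outline problem like the one above. Here’s a minimal sketch; the helper names and sample lines are my own illustration, not part of the auto-vo API:

```javascript
// Sketch only: sanity-check heading hierarchy from auto-vo's text output.

// Pull numeric levels out of "heading level N ..." announcements.
function headingLevels(announcements) {
  return announcements
    .map((line) => line.match(/^heading level (\d+)/))
    .filter(Boolean)
    .map((match) => Number(match[1]));
}

// A sound outline never jumps more than one level deeper at a time.
function skipsLevels(levels) {
  return levels.some((level, i) => i > 0 && level > levels[i - 1] + 1);
}

const sample = [
  'heading level 1 link When CSS Isn’t Enough',
  'heading level 2 link A Complete Guide To Accessibility Tooling',
  'heading level 4 LATEST POSTS',
];

console.log(headingLevels(sample)); // [ 1, 2, 4 ]
console.log(skipsLevels(headingLevels(sample))); // true — level 3 was skipped
```

You could point the same kind of check at a saved output.txt and run it in CI.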

Some background

VoiceOver is the screen reader on macOS. Screen readers read application content aloud and let people interact with it. That means people with low vision or blindness can, in theory, access content, including web content. In practice, though, 98% of landing pages on the web have obvious errors that could be caught with automated testing and review.

There are plenty of automated testing and review tools out there, including AccessLint.com for automated code review (disclosure: I built AccessLint) and axe, a de facto standard for accessibility automation. These tools are important and useful, but they are only part of the picture. Manual testing is equally, or perhaps more, important, but it’s also more time-consuming and can be intimidating.

Perhaps you’ve heard the guidance to “just turn on your screen reader and listen” to get a sense of a blind person’s experience. I think this is misguided. Screen readers are sophisticated applications that can take months or years to master, and they are overwhelming at first. Using one haphazardly to simulate the blind experience may lead you to feel pity for blind people, or worse, to try to “fix” the experience in the wrong ways.

I’ve seen people panic after turning on VoiceOver without knowing how to turn it off. Auto VO manages the lifecycle of the screen reader for you. It automates launching, controlling, and closing VoiceOver, so you don’t have to. Instead of trying to listen and keep up, the output is returned as text, which you can then read, evaluate, and capture for reference in a bug report or for automated regression testing.

Usage

Warning

Because AppleScript must be enabled for VoiceOver, running Auto VO may require custom configuration of CI build machines.

Scenario 1: QA acceptance testing

Suppose I (the developer) am building a feature from a design with blueline annotations, where the designer has added descriptions of things like accessible name and role. After building the feature and spot-checking the markup in the Chrome or Firefox dev tools, I want to output the results to a text file so that my PM can compare the screen reader output with the design specifications before I mark the feature as complete. I can do that with the auto-vo CLI, piping the results to a file or to the terminal. We saw an example of this earlier in the article:

$ auto-vo --url https://smashingmagazine.com --limit 100

Scenario 2: Test-driven development

Here I am again as the developer, building out my feature from the blueline-annotated design. I want to test-drive the content, so that I don’t have to refactor the markup later to match the design. I can do that with the auto-vo Node module plugged into my preferred test runner, e.g. Mocha.

$ npm install --save-dev auto-vo
import { run } from 'auto-vo';
import { expect } from 'chai';

describe('loading example.com', async () => {
  it('returns announcements', async () => {
    const options = { url: 'https://www.example.com', limit: 10, until: 'Example' };

    const announcements = await run(options);

    expect(announcements).to.include.members(['Example Domain web content']);
  }).timeout(5000);
});

Under the hood

Auto VO uses a combination of shell scripting and AppleScript to control VoiceOver. While digging through the VoiceOver application, I came across a CLI that supports starting VoiceOver from the command line.

/System/Library/CoreServices/VoiceOver.app/Contents/MacOS/VoiceOverStarter

From there, a series of JavaScript executables manages AppleScript directives to navigate VoiceOver and capture its announcements. For example, this script gets the last phrase from the screen reader announcements:

function run() {
  const voiceOver = Application('VoiceOver');
  return voiceOver.lastPhrase.content();
}

In Closing

I’d love to hear about your experiences with auto-vo, and I welcome contributions on GitHub. Try it out and let me know how it goes!
