How to contribute tests

Note: These instructions are incomplete and may change a bit in the near future.

Please provide questions/feedback for these instructions in issue 18.

What kinds of tests are we writing?

The ARIA-AT test suite aims to give web authors the ability to confidently create fully accessible websites by summarizing which ARIA attributes and roles are supported across all assistive technologies. To meet this goal, test authors must keep in mind both assistive technology users, as it is their experience we are aiming to summarize, and web authors, as web authors will be consuming the results of these tests and adjusting their code accordingly. The tests will also be written against the ARIA Practices Guideline's example widgets in order to be able to display relevant support information side-by-side with best-practice examples of interactive web widgets.

Each ARIA-AT test describes a realistic interaction that a user might perform on an example widget. As a result, these tests do not test a single attribute or role in isolation, but instead test a common combination of attributes and roles.

ARIA-AT tests must be generic. Tests are written in terms that can be used to describe the interaction of a user who uses any screen reader technology. For this iteration of the project, we are writing tests specifically for the screen readers JAWS, NVDA, and VoiceOver. In the future, we intend to further generalize these tests to apply to other assistive technologies.

How to contribute tests

If you are writing tests for a new or untested example widget, you should first write a "test plan". A test plan is a comprehensive list of tests that should be written. For this test suite, a test plan is a list of all realistic interactions a user might have with the example widget. After you have this list, you can write tests for each interaction in the appropriate format.

Select a test plan to work on

See the Test Plan Workflow project board for which test plans are in progress and which are in the backlog.

If you want to pick up something in the backlog, file a new issue for it and edit the card in the project board to link to the new issue.

Review the process in the working mode document -- the columns in the project board correspond to steps in the process.

Outline all interactions to test in a test plan

When writing the test plan, answer the following questions:

1. What are the ways a user will interact with this widget?

A widget can be composed of one or more interactive parts (such as a menubar full of menuitems that control submenus with more menuitems). For each part of the widget, list all possible interactions. There are categories of interactions that apply to all widgets, such as:

  1. User navigates to the widget (or a part of the widget)
  2. User changes the state of the widget (or a part of the widget)
  3. User navigates through a set of items or past the boundary of a group of items.

2. What information should be communicated to the users by the assistive technology as a result of each interaction?

Here are some examples of information that should be communicated for the previous interaction categories:

a. User navigates to the widget (or a part of the widget)

Consider: What should the assistive technology communicate after the user navigates to the widget? List all of the information that the screen reader should provide to the user, such as:

  • Roles
  • Accessible Name
  • Attributes
  • Instructions for using the widget

b. User changes the state of the widget (or a part of the widget)

Consider: What should the assistive technology communicate after the user operates or changes the state of the widget, such as checking a checkbox or opening a submenu? List all of the information that should be provided, such as:

  • Changes of state
  • Further instructions

c. User navigates through a set of items or past the boundary of a group of items.

Consider: What should the assistive technology do after the user navigates through items of the widget, such as the items of a tree, a menu, or a group of form elements? List all of the information that should be provided, such as:

  • The boundaries of groups should be expressed when navigating between items in the widget
  • The location of an item within a set of items should be communicated as a user navigates through the items in a widget.

3. When exactly should that information be communicated?

Each interaction should be thought of as a single key command performed by a user of an assistive technology. Do not think of the test as a series of steps, but as a single step. If the interaction requires action from the user to get the widget into the proper state before the interaction (for example, opening a submenu in order to navigate to a submenu's item), then these steps are part of the "set up" for the test. For this test plan, think only of the exact key command that should trigger the AT behavior you are aiming to observe. The answer is typically that multiple different commands can be used to perform this interaction. For example, if the user interaction is "navigating to a checkbox", there are multiple ways that navigation can be performed. The user can read the next line with the down arrow in reading mode, or tab through focusable items in interaction mode.

For this step, test writing requires deep familiarity with all the screen readers the test applies to. If you are only familiar with one screen reader, seek out users of the other screen readers to help write the list of commands for their screen readers.
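
For example, for the interaction "navigate to a checkbox", the command list might include the following (each entry should be verified with users of that screen reader):

  • JAWS and NVDA: Down Arrow to read the next line in reading (browse) mode, or Tab to move to the next focusable element in interaction (focus) mode.
  • VoiceOver: VO+Right Arrow to move to the next item, or Tab to move to the next focusable element.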

4. Which roles and attributes help supply information conveyed by the AT during this interaction?

After writing your list of tests, list every role and attribute (either implicit or explicit) that is used by the AT to describe the widget to the user. By the end of your test writing for a widget, there should be at least one test for each attribute used within the widget.

Encode each interaction into a test

A test plan is composed of information in several CSV and MJS files. To execute the procedures below, you will need to follow the requirements defined in the Test Format Definition.
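
For orientation, following the steps below for a new pattern results in a layout roughly like this (the folder name "design-pattern" is a placeholder; the authoritative file list is in the Test Format Definition):

    tests/design-pattern/
      data/
        *.csv          CSV files copied from tests/checkbox/data and edited for this pattern
        js/
          *.js         setupScript JavaScript files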

Contribute tests for a new design pattern:

  1. Create a new folder for the test plan (replace "design-pattern" with the name of your pattern): tests/design-pattern/
  2. Copy tests/checkbox/data to tests/design-pattern/data
  3. Edit the CSV files. The information in these files is documented in the Test Format Definition.
  4. Write a setupScript JavaScript function in each JavaScript file located at tests/design-pattern/data/js (a sketch follows this list).
  5. Run node scripts/create-tests.js tests/design-pattern
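
As a rough sketch only (the exact contents of these files and the way the example document is passed in are defined by the Test Format Definition, so the function and parameter names here are assumptions), a setup script is a small piece of JavaScript that puts the example widget into the state a test expects before the tester issues the command under test:

    // Hypothetical setup script: move focus to the first checkbox in the
    // example page before the test begins. `testPageDocument` is assumed to
    // be a reference to the document of the example widget page.
    function moveFocusToFirstCheckbox(testPageDocument) {
      const checkbox = testPageDocument.querySelector('[role="checkbox"]');
      if (checkbox) {
        checkbox.focus();
      }
    }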

Contribute tests to an existing design pattern, or edit existing tests:

  1. Clone the repo, open the CSV test files (for example, in a spreadsheet editor), and edit or add tests.
  2. Edit or add setupScript JavaScript files.
  3. Re-run the create-tests script.
  4. Check in the files.
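
For example, after editing the CSV or setup script files for the checkbox plan, re-run node scripts/create-tests.js tests/checkbox to regenerate that plan's tests before checking in your changes.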