
Design V2 of test format to enable variable AT setting, command, and assertion mappings for a test #974

Closed
mcking65 opened this issue Aug 6, 2023 · 25 comments
Labels: documentation (Related to documentation about the ARIA-AT project or its deliverables), enhancement (New feature or request)

mcking65 commented Aug 6, 2023

Problem

Consider tasks that are tested by the slider test plan, e.g., navigate forwards to a slider, read information about a slider, and increment a slider. The current test format has the following problems:

  1. Because the test format requires all commands in a test to be executed using only one mode of operation, e.g., reading or interaction, we have to have:
    • Separate JAWS and NVDA tests for navigating to the slider in reading and interaction modes.
    • Separate navigation and operation tests for macOS that do not specify a mode because VoiceOver does not have reading and interaction modes.
  2. Because the test format does not support specifying that a VoiceOver command should be performed with quick nav on, there is no way to write a test that includes quick nav commands.
  3. Because every assertion must be evaluated for every command, it is not possible to include assertions in a test that do not apply to all commands. Consequently:
    • To test that JAWS and NVDA switch to interaction mode when pressing Tab in reading mode, the current format would require a completely separate test with a different "task".
    • JAWS and NVDA have multiple commands that provide information about the current element, e.g., insert+up and insert+tab. By design, insert+up may provide less information than insert+tab. The current test format cannot include all relevant commands in the same test unless they all support the same assertions.

As a result, even though all the slider tests cover only 9 unique tasks, there are 21 tests in the slider plan! Further, even with 21 tests, they still do not cover important behaviors, e.g., mode switching for JAWS and NVDA, nor do they cover some important ways of using VoiceOver. Together, these constraints are having significant negative impacts on the readability, understandability, and completeness of ARIA-AT reports.

In addition to the above constraints on what can be tested and reported, there are multiple characteristics of the current test plan composition format that make writing and maintaining the tests unnecessarily difficult.

Solution

Make changes to the test format that:

  1. Remove AT mode from the definition of a test.
  2. Enable a setting to be specified for each command, e.g., the JAWS virtual cursor is active or VoiceOver quick nav is on.
  3. For each command, specify which assertions apply and which do not.
  4. Remove the ability to scope a test to only specific AT. All tests apply to any AT covered by the plan.
  5. Define test-to-command mappings in separate files for each covered AT to enable addition of a new AT without touching any existing files, improving change history clarity in GitHub.
  6. Simplify developing new test plans and make the CSV files more readable.

New Test Format Definition

This wiki page provides a draft of a V2 test format that includes the following changes.

  1. Makes the following changes to tests.csv:
    • Removes the following columns: task, appliesTo, mode, refs, and setupScriptDescription.
    • Replaces the set of multiple numbered assertion columns with a single assertions column that specifies IDs of assertions defined in a separate assertions.csv file.
    • Adds a presentationNumber column for controlling the order of test presentation.
    • Supports 3 assertion priorities.
  2. Adds an assertions.csv file that enables:
    • Specifying a default priority for an assertion.
    • Specifying multiple wording options for an assertion (assertionStatement and assertionPhrase).
    • Specifying tokens in assertions that allow AT-specific language.
    • Specifying the refs for an assertion.
  3. Replaces commands.csv with multiple AT-specific command files, e.g., jaws-commands.csv. The new command mapping files have a new, simpler format. Each row represents a single test/command pair. This format allows:
    • Specifying that some assertions do not apply to a specific command or that they have a different priority for that command; the default assumption is that all assertions apply with the priority specified in assertions.csv.
    • Specifying that specific settings must be active when a command is executed. Thus, one row could specify performing a test using Tab with the virtual cursor active and another row could specify performing the same test using Tab with the PC cursor active.
    • Writing commands more simply by replacing references to keys.mjs with references to a new commands.json.
    • Specifying the presentation order for commands within a test.
  4. Adds a scripts.csv that specifies the setupScriptDescription for each setup script.
  5. Replaces keys.mjs with a commands.json file that simplifies how commands are specified.
  6. Consolidates AT-specific rendering information in support.json.
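
For illustration, here is a rough sketch of how these pieces could relate for one slider test. The IDs and values are hypothetical, only columns named in this issue are shown, and the settings and assertionExceptions syntax is the one the thread below converges on:

```
tests.csv (fragment)
testId,presentationNumber,assertions
navForwardsToSlider,1,ROLE NAME VALUE_128 INTERACTION_ON

jaws-commands.csv (fragment)
testId,command,settings,assertionExceptions,presentationNumber
navForwardsToSlider,TAB,VIRTUAL_CURSOR,,1
navForwardsToSlider,TAB,PC_CURSOR,0:INTERACTION_ON,2
```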

Using this format, the slider test plan, which has 21 tests using the current format, can be reduced to 9 tests. In addition, it is very simple to specify mode switching tests as well as VoiceOver quick nav tests. PR #975 contains a draft of the slider test plan using this format.

Build script modifications

Changes to the build scripts will be specified and tracked by #977.

App changes

To-do: Define required app changes, e.g., addition of commandSettings to command display.


mcking65 commented Aug 7, 2023

@jscholes, we will discuss this in our Tuesday meeting. Please study it before then if at all possible. This could take a couple of hours to fully digest. After drafting #975, I'm super excited about the simplicity and flexibility these changes bring. And it is amazingly simple to do the refactoring. As soon as I can, I will generate a mockup of a test preview for the refactored slider test plan.


mcking65 commented Aug 9, 2023

@jscholes

Per our discussion this morning, I'm changing the command settings to json. However, instead of making another json file, it appears to me that it would be very beneficial to use the existing support.json where keys for the ATs are already defined.

I would like your recommendation for the syntax we should use in the CSV file for referring to a specific setting for a specific screen reader. For example, what should the syntax be for telling the build script to look up browse mode for NVDA? Here is a relevant snippet from the support.json that I just pushed to the slider-refactor branch used by PR #975.

"ats": [
    {
      "name": "JAWS",
      "key": "jaws",
      "settings": [
        {"name": "VIRTUAL_CURSOR", "text": "virtual cursor active", "instructions": "Verify the Virtual Cursor is active by pressing Alt+Delete. If it is not, exit Forms Mode to activate the Virtual Cursor by pressing Escape."},
        {"name": "PC_CURSOR", "text": "PC cursor active", "instructions": "Verify the PC Cursor is active by pressing Alt+Delete. If it is not, turn off the Virtual Cursor by pressing Insert+Z."}
      ]
    },
    {
      "name": "NVDA",
      "key": "nvda",
      "settings": [
        {"name": "BROWSE_MODE", "text": "browse mode on", "instructions": "If NVDA made the focus mode sound when the test page loaded, press Insert+Space to turn browse mode on."},
        {"name": "FOCUS_MODE", "text": "focus mode on", "instructions": "If NVDA did not make the focus mode sound when the test page loaded, press Insert+Space to turn focus mode on."}
      ]
    },
    {
      "name": "VoiceOver for macOS",
      "key": "voiceover_macos",
      "settings": [
        {"name": "QUICK_NAV_ON", "text": "quick nav on", "instructions": "Simultaneously press left and right arrow keys. If VoiceOver says 'quick nav off', press left and right arrows again."},
        {"name": "QUICK_NAV_OFF", "text": "quick nav off", "instructions": "Simultaneously press left and right arrow keys. If VoiceOver says 'quick nav on', press left and right arrows again."}
      ]
    }
  ],

Perhaps no complex syntax is necessary. It may be adequate to simply specify only the value of the name property of an object in the settings array for the relevant AT. The build script can:

  • Assume that it needs to use the "ats" array for all the lookups.
  • Get the AT key from the commands.csv file name.
  • Assume that it needs to use the "settings" array to look up settings.
  • Assume that the value provided in the settings column of the commands.csv is a value of the name property of an object in the settings array.
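
In code, that lookup might look something like this (a sketch, assuming the support.json shape above and the AT_KEY-commands.csv file-naming convention; nothing here is from the build script itself):

```js
import fs from "node:fs";
import path from "node:path";

const support = JSON.parse(fs.readFileSync("support.json", "utf8"));

function lookupSetting(commandsCsvPath, settingName) {
  // e.g. "nvda-commands.csv" -> AT key "nvda"
  const atKey = path.basename(commandsCsvPath).split("-")[0];
  const at = support.ats.find((at) => at.key === atKey);
  if (!at) throw new Error(`Unknown AT key: ${atKey}`);
  const setting = at.settings.find((s) => s.name === settingName);
  if (!setting) throw new Error(`Unknown setting ${settingName} for ${at.name}`);
  return setting; // { name, text, instructions }
}

// Example: look up browse mode for NVDA
// lookupSetting("nvda-commands.csv", "BROWSE_MODE").text === "browse mode on"
```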

Any thoughts on this?


mcking65 commented Aug 9, 2023

@jscholes, I have finished updating the specs to reflect the suggestions you made today:

  • Replace multiple assertion columns in tests.csv with a single column that specifies assertion IDs.
  • Added specification for an assertions.csv file.
  • Specified JSON instead of MJS for command settings.
  • In the AT_KEY-commands.csv files, removed the title column.
  • In the AT_KEY-commands.csv files, changed the invalidAssertions column to an assertionExceptions column, changed the syntax to support overriding the priority of any assertion, and specified that setting a priority to 0 removes the assertion from the test for that command.

In addition, inspired by your suggestion to change settings to JSON, I revised the specification for assertion tokens to greatly simplify the syntax used to write assertions with tokens. This change should also simplify the build script. I added an assertionTokens array to the objects in the ats array in support.json.
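
To make the token idea concrete, here is a rough sketch of the substitution. The shape of assertionTokens is assumed to mirror the settings array; the real structure is defined on the wiki page, not here:

```js
// Assumed shape in support.json, mirroring the settings array:
//   "assertionTokens": [{ "name": "READING_MODE", "text": "browse mode" }, ...]
function renderAssertion(template, at) {
  return template.replace(/\{([A-Z_]+)\}/g, (match, name) => {
    if (name === "AT") return at.name;
    const token = at.assertionTokens.find((t) => t.name === name);
    return token ? token.text : match; // leave unknown tokens untouched
  });
}

// renderAssertion("Switch from {READING_MODE} to {INTERACTION_MODE}", nvda)
// -> "Switch from browse mode to focus mode"
```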


mcking65 commented Aug 9, 2023

@jscholes

I ran into one more issue. There are problems with using a colon character to separate assertion priority and assertion number in the assertionExceptions column of the AT_KEY-commands.csv file. If you specify a value of something like 0:7, when you open the file in Excel, it will interpret that value as a timestamp. So, if you save in Excel, the values are messed up. The only way to avoid that is to include quote characters in the column, but then those have to be escaped when saving as CSV.

It seems to me that the simplest solution is to use a different character that will ensure any spreadsheet program will leave the value as a string. I decided to use the vertical bar or pipe character since we were already using it in the instructions field as a separator.

I've now updated all the documentation and PR #975 to incorporate all the changes we have discussed plus this one.

I also added more documentation regarding the precedence of assertion priorities since there are three places they can be specified: assertions.csv, tests.csv, and AT_KEY-commands.csv. In general, I think it will be rare for priorities to be specified outside assertions.csv, except when removing an assertion from a specific command by setting its priority to 0. Nevertheless, I specified an order of precedence in the event a value appears in all three places.
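
As a sketch of that precedence (this comment does not spell the order out, so the assumption here is that the most specific source wins: AT_KEY-commands.csv over tests.csv over assertions.csv):

```js
// Returns the effective priority for one assertion on one command.
// A resolved priority of 0 removes the assertion from the test for that
// command. Note ?? only skips null/undefined, so an override of 0 still wins.
function effectivePriority(assertionId, commandExceptions, testOverrides, defaults) {
  return (
    commandExceptions[assertionId] ?? // from AT_KEY-commands.csv
    testOverrides[assertionId] ??     // from tests.csv
    defaults[assertionId]             // from assertions.csv
  );
}
```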

You might also notice in support.json that I included an "instructions" property for each setting in the settings array. The value of instructions is an array where each entry is a separate step in the instructions. My expectation is that each entry in the array will become a list item when displaying those instructions for each command. I have also made editorial changes to the instructions; they differ from what we currently have in the test runner.

I will next be working on the mockup and specs for the test plan preview and test runner.


mcking65 commented Aug 9, 2023

@jscholes

As I'm working on the mockup of the preview, I am looking at a couple more changes:

  1. Getting all the instructions shown in the plan preview and test runner into support.json, so the builder is pulling all its information from either the test plan files or support.json. For example, the instructions for configuring to default settings. This way, we could eventually have the URLs for help point to a page that is specific to each AT.
  2. We need to make the plan preview and runner get their info from the same place; they have odd differences now. This is another reason for putting more info in support.json.
  3. Moving the description of the setup script into a separate scripts.csv. This eliminates one more string that gets unnecessarily repeated. We can use the script file name as the correlation key (a tiny sketch follows).

Any thoughts?
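
For illustration, such a scripts.csv could be as small as this (the column headers and script name here are hypothetical):

```csv
setupScript,setupScriptDescription
moveFocusBeforeSlider,"sets focus on a link before the slider"
```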


mcking65 commented Aug 9, 2023

@jscholes

Considering the other changes we are making, this might actually be the least disruptive time to adopt your preferred solution for specifying the strings that represent keyboard commands. Could you please describe what you would like instead of keys.mjs? Assuming you want JSON, how would you like it structured?

mcking65 commented:

@jscholes

I have added a presentationNumber column to the spec for tests.csv to control the sequence of tests in the test runner and reports.

I also specified that assertions should be presented in the order they are specified in the assertions column of tests.csv.

Do you think we should have a similar sequencing number for the order of commands in a test?

mcking65 commented:

@jscholes

I just made several more changes:

  • Moved refs to assertions.csv. This already proved valuable because I noticed that some tests were missing refs. Now, we are normally going to have one ref per assertion, so this removes a big opportunity for errors.
  • Changed testId and assertionId values to strings.
  • Moved setupScriptDescriptions to a separate scripts.csv file.
  • Went back to colon characters for specifying assertion priorities. Since assertionIds are no longer numbers, the colon is no longer a problem.
  • Removed the option to separate instructions into multiple list items. The way we write tests now, this is no longer necessary, and it is a real problem to render the instructions in the preview and runner if they can contain multiple list items. I think it is fine for them to be multiple sentences if absolutely required; that will work fine in the runner presentation.
  • Swapped the order of the command and settings columns in the AT_KEY-commands.csv files.

The files are getting more and more readable and easier and easier to create. Have a look at the CSV files in PR #975.


mcking65 commented Aug 11, 2023

@jscholes @IsaDC

Here are outstanding questions to answer ASAP, e.g., in the next couple of days ... or sooner:

  1. How should JSON for key commands be structured?
  2. Should we add a presentationNumber to command CSV files that can be used when a single test has multiple commands?
  3. Do you strongly prefer comma- or space-separated tokens in the columns where we support multiple tokens? These include: assertions in tests.csv; command, settings, and assertionExceptions in commands.csv; and refs in assertions.csv. I am personally starting to hate the comma separation. It is definitely harder to edit and read, and I don't see any value in the commas. In the V1 format, there was a mix of space- and comma-separated values. I have made them all comma separated, but am now leaning strongly toward going to space separation. It would help in situations where humans read and edit CSV files (see the sketch after this list).
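
If space separation wins, the parsing side stays trivial; a sketch:

```js
// Split a multi-token CSV cell on runs of whitespace; tolerate empty cells.
const tokens = (cell) => (cell.trim() === "" ? [] : cell.trim().split(/\s+/));

// tokens("ROLE NAME VALUE_128") -> ["ROLE", "NAME", "VALUE_128"]
// tokens("") -> []
```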

jscholes commented:

@mcking65 Planning to direct some serious attention at this today, but I think I can answer #3, about token separation, straight away: let's go with spaces. CSV within CSV is a pain.

mcking65 commented:

@jscholes wrote:

I think I can answer #3, about token separation, straight away: let's go with spaces. CSV within CSV is a pain.

OK, thank you! That's a good point; space separation it is. I will update the wiki and PR #975 later today.

mcking65 commented:

@jscholes @IsaDC

I just thought of one more awesome benefit of separating assertions into their own file.

The assertions are currently written in the form:

The role button is conveyed.

This is pretty understandable in most contexts, but not as clear as it could be. Here's a proposal for how we could make the runner and reports easier to understand by creating a separate form of the assertion wording for each.

In the reports, when w3c/aria-at-app#733 is done, we will have tables with rows that have columns for priority, assertion, and verdict. There is one table for each command. So, a row could contain:

| Priority | Assertion | Verdict |
| --- | --- | --- |
| MUST | The role button is conveyed | Passed |

This table would read much more nicely if we had another form of the assertion wording like this:

| Priority | Assertion | Verdict |
| --- | --- | --- |
| MUST | Convey the role button | Passed |

And, it could be even easier to read if it were constructed like this:

| Priority and Assertion | Verdict |
| --- | --- |
| MUST convey the role button | Passed |

With very little work, in assertions.csv, we could have two columns for the assertion in each row: one for assertionQuestion and one for assertionPhrase. We could have a question form for the runner:

assertionQuestion = Was the role button conveyed?

assertionPhrase = convey the role button

Now that we are moving to a Yes/No radio group in #961, the question form of the assertion could make the runner even easier to understand. The phrase form of the assertion would reduce the number of words in the reports and make them even easier to consume.
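
A sketch of how the two forms could be consumed (the priority-to-word mapping is an assumption based on the MUST examples above):

```js
// Runner label: use the question form directly.
const runnerLabel = (a) => a.assertionQuestion; // "Was the role button conveyed?"

// Report cell: prefix the phrase form with the priority word.
const PRIORITY_WORDS = { 1: "MUST", 2: "SHOULD", 3: "MAY" }; // assumed mapping
const reportCell = (a) => `${PRIORITY_WORDS[a.priority]} ${a.assertionPhrase}`;
// -> "MUST convey the role button"
```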

jscholes commented:

@mcking65

How should JSON for key commands be structured?

I've written up a proposal for this in #976; let me know if you have questions, if it doesn't make sense, if you hate it, etc.

mcking65 commented:

@jscholes @IsaDC

This is what the assertions.csv file would look like with my above suggestion. I separated out priority; I thought it would be a bad idea to combine priority with the assertion wording when the wording has multiple representations. This is a straightforward conversion from the current wording with some regular expressions.

The following table is what I currently have in the #975 branch. I have not changed the format specification on the wiki. I think this could really pay off. I just want to be sure you don't have any objections.

Slider assertions.csv:

| assertionId | priority | assertionPhrase | assertionQuestion | refs |
| --- | --- | --- | --- | --- |
| ROLE | 1 | Convey role 'slider' | Was role 'slider' conveyed? | slider |
| NAME | 1 | Convey name 'Red' | Was name 'Red' conveyed? | aria-labelledby |
| VALUE_128 | 1 | Convey value '128' | Was value '128' conveyed? | aria-valuenow |
| ORIENTATION | 1 | Convey orientation 'horizontal' | Was orientation 'horizontal' conveyed? | aria-orientation |
| MIN_VALUE | 2 | Convey minimum value '0' | Was minimum value '0' conveyed? | aria-valuemin |
| MAX_VALUE | 2 | Convey maximum value '255' | Was maximum value '255' conveyed? | aria-valuemax |
| INTERACTION_ON | 2 | Switch from reading mode to interaction mode\|Switch from {READING_MODE} to {INTERACTION_MODE} | Did AT switch from reading mode to interaction mode?\|Did {AT} switch from {READING_MODE} to {INTERACTION_MODE}? | |
| VALUE_129 | 1 | Convey value '129' | Was value '129' conveyed? | aria-valuenow |
| VALUE_127 | 1 | Convey value '127' | Was value '127' conveyed? | aria-valuenow |
| VALUE_138 | 1 | Convey value '138' | Was value '138' conveyed? | aria-valuenow |
| VALUE_118 | 1 | Convey value '118' | Was value '118' conveyed? | aria-valuenow |
| VALUE_0 | 1 | Convey value '0' | Was value '0' conveyed? | aria-valuenow |
| VALUE_255 | 1 | Convey value '255' | Was value '255' conveyed? | aria-valuenow |

jscholes commented:

@mcking65 I love the phrased form of assertions that can be easily combined with priority strings. However, you wrote:

Now that we are moving to a Yes/No radio group in #961, the question form of the assertion could make the runner even easier to understand.

I'm sorry to only be picking up on this now, but I was under the impression that we would be switching to checkboxes rather than binary radio groups. When the user only has two choices of yes/no, checkboxes seem the more appropriate form control by far, particularly in the context of human testing on this project where there are many assertions on one page, and where every small ounce of inefficiency can scale.

Granted, a radio group takes up one tab-stop just like a checkbox. But when navigating with a screen reader cursor, moving past a single checkbox is easier and quicker than moving through two radio buttons. If we multiply this by, say, five assertions across five commands, that would be 25 checkboxes vs 50 radios.

In #969, the rationale for radios vs checkboxes is:

Use radios instead of checkboxes for pass/fail so that the default state is that none of the questions are answered. This supports more robust form validation.

I don't understand what this means, i.e. what is the difference between having a collection of unchecked checkboxes vs a collection of radio groups with no default choice? Meanwhile, I was also hoping that switching to checkboxes would allow us to eradicate the assertions tables for testers, which add a lot of verbosity and confusion. We could just have a <fieldset> with the checkboxes in it.

I feel very strongly that radios are not an optimal direction, so let's discuss this more on Tuesday. However, assuming for the rest of this comment that checkboxes are adopted:

  • Checkboxes are declarative form controls, and hence their labels should also be worded as declarations. I.e., a checkbox labelled:

    Role 'Slider' is conveyed

    ... is easier to understand than one labelled:

    Was the role 'slider' conveyed?

  • The declarative form of assertions is easier to scan, because there is no repeated prefix.

jscholes commented:

@mcking65 Having read the wording around checkboxes vs radios again, I understand now what you're driving at. With checkboxes, having the box unchecked is the same as indicating a failure. But having two unchecked radios does not indicate anything, and is simply an invalid state.

Despite that, I think the advantages of checkboxes for human testers and admins outweigh the data validation concern by a significant margin. Perhaps we can build some failsafes into the validation, e.g.:

  • If a tester tries to submit the form having not checked a single box, we flag that to them and make sure it is what they intended to do.
  • If a tester leaves all assertions unchecked for a particular command, but indicated that the screen reader did respond to the command and provided an output string, we ask them if they're sure.

I don't know if these are good or feasible ideas. But I do know that I have an image in my head of a hugely simplified form for testers that will accelerate their efforts, and make the editing of results far less error-prone for test admins.

jscholes commented:

@mcking65 Also, just to say: catching unintentional errors in tester input is what the conflict system is designed for. I think we should let it do its job in the name of presenting a nicer UI to testers.


mcking65 commented Aug 13, 2023

@jscholes wrote:

Having read the wording around checkboxes vs radios again, I understand now what you're driving at. With checkboxes, having the box unchecked is the same as indicating a failure. But having two unchecked radios does not indicate anything, and is simply an invalid state.

Exactly. I was considering the value of more robust form validation. The goal of such validation would be ensuring that the tester did not accidentally overlook any commands or assertions. Since there will be many more commands per test, the likelihood of completely overlooking one is higher.

Despite that, I think the advantages of checkboxes for human testers and admins outweigh the data validation concern by a significant margin. Perhaps we can build some failsafes into the validation, e.g.:

  • If a tester tries to submit the form having not checked a single box, we flag that to them and make sure it is what they intended to do.
  • If a tester leaves all assertions unchecked for a particular command, but indicated that the screen reader did respond to the command and provided an output string, we ask them if they're sure.

I see two mitigating factors that help support your case for checkboxes:

  1. Not checking a box is a failure. Failures get more scrutiny.
  2. We can help ensure a tester does not overlook a command by implementing #973 (support reporting that an AT did not respond to a command). If the AT response input is empty, the checkbox that confirms the AT did not respond would have to be checked. If it is checked, all the assertion checkboxes would be disabled. If the AT response field is not empty and 0 assertions are checked, then we know that the tester did not skip the command, and it is extremely likely that the tester intends to state that all assertions failed.

Given it is rare that all assertions fail for a command (not sure if it has happened even once yet), your second suggestion, that the form confirm the tester intends to state that all assertions failed for command X, would not add any noticeable friction and thus could be a reasonable measure to further ensure that the form is complete.
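
A sketch of the failsafe checks discussed above (the field names are hypothetical; this is only the shape of the logic, not app code):

```js
// Returns confirmation prompts to show before accepting a command's results.
function warningsFor(commandResult) {
  const warnings = [];
  const { output, atDidNotRespond, checkedAssertionIds } = commandResult;
  if (!output && !atDidNotRespond) {
    warnings.push("No output recorded: confirm the AT did not respond (#973).");
  }
  if (output && checkedAssertionIds.length === 0) {
    warnings.push("Output recorded but no assertions checked: confirm all assertions failed.");
  }
  return warnings;
}
```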

I don't know if these are good or feasible ideas. But I do know that I have an image in my head of a hugely simplified form for testers that will accelerate their efforts, and make the editing of results far less error-prone for test admins.

OK. I made myself a simple mockup of some questions comparing radio and checkbox experiences, and I understand why you believe so strongly in the efficiency proposition of a checkbox-based experience.

I will change the assertions.csv to have a column for "assertionStatement" to use in the runner and "assertionPhrase" to use in reports. I will also update related issues, including #969, #973, and w3c/aria-at-app#738.

mcking65 changed the title from "Change test format/build to enable variable AT setting, command, and assertion mappings for a test" to "Design V2 of test format to enable variable AT setting, command, and assertion mappings for a test" on Aug 14, 2023.
mcking65 added the enhancement (New feature or request) and documentation (Related to documentation about the ARIA-AT project or its deliverables) labels on Aug 14, 2023.
mcking65 self-assigned this on Aug 14, 2023.
jscholes commented:

@mcking65 This is all looking good. Regarding the presentation number column for tests and commands, is it intended to assist in PR reviews to explicitly demonstrate how command and/or test ordering has changed? Otherwise, it seems like we would be served equally well by just moving rows around.

mcking65 commented:

@jscholes wrote:

Regarding the presentation number column for tests and commands, is it intended to assist in PR reviews to explicitly demonstrate how command and/or test ordering has changed? Otherwise, it seems like we would be served equally well by just moving rows around.

I was wondering whether the order of rows is sufficient. I am leaning toward the explicit declaration because it:

  • Reduces ambiguity in the data regardless of the method of consumption.
  • Allows for better change tracking, as you noted.
  • Is a simple addition to make now but would be very disruptive to add later if other reasons for it arise.

mcking65 commented:

@jscholes, I've updated the V2 wiki page with the following changes, most of which we discussed above.

In assertions.csv:

  • Separated assertion priority into its own column.
  • Replaced assertion column with assertionStatement and assertionPhrase columns.
  • Changed the name of the refs column to refIds.

In AT_KEY-commands.csv:

  • Added presentationNumber column.

In references.csv:

  • Added a linkText column.
  • Added a developmentDocumentation refId for the GitHub issue where development of the test plan is tracked.

With the exception of replacing keys.mjs with commands.json, I believe the V2 format spec is complete. I will give more attention to the commands.json spec you wrote before our meeting tomorrow. Ideally, we can finalize that tomorrow and integrate it.

I have updated PR #975 that refactors the slider test plan with all the above changes.

I have also finished a mockup of builder output and specified how the builder needs to change in #977. Please have a look at that mockup and spec. The mockup is in an attached zip.


jscholes commented Aug 15, 2023

@mcking65

I am leaning toward the explicit declaration ...

Fair enough, I'm on board with this idea. Can we therefore consider, as far as the build scripts are concerned, the ordering of non-header rows to be irrelevant? E.g., the following is a valid, if odd, commands file that I'm only using to demonstrate the point:

```csv
testId,command,settings,assertionExceptions,presentationNumber
NAV_FORWARDS,DOWN DOWN,VIRTUAL_CURSOR,0:INTERACTION_ON,1
NAV_BACKWARDS,SHIFT_TAB,VIRTUAL_CURSOR,,1
NAV_FORWARDS,TAB,VIRTUAL_CURSOR,,2
```

This way, we should be able to shuffle rows around so that their ordering matches the presentation order for readability, but not have the position of a row be used for sorting or any other purpose (including in hashing).
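
A sketch of how a build script could honor that, comparing presentationNumber as a Number (per the spec revisions later in this thread) so on-disk row order never matters:

```js
// Present commands in presentationNumber order regardless of row order on disk.
const byPresentation = (rows) =>
  [...rows].sort((a, b) => Number(a.presentationNumber) - Number(b.presentationNumber));
```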

mcking65 commented:

@jscholes wrote:

Can we therefore consider, as far as the build scripts are concerned, the ordering of non-header rows to be irrelevant?

Definitely.

This way, we should be able to shuffle rows around so that their ordering matches the presentation order for readability, but not have the position of a row be used for sorting or any other purpose (including in hashing).

Right, except that I don't know how or when the hashing is done. Ideally, it would be done on an object built from the files rather than on the files themselves.
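
A sketch of order-independent hashing under that ideal (using Node's built-in crypto module; the canonical sort key is an assumption):

```js
import { createHash } from "node:crypto";

// Hash an object built from the parsed rows, not the raw file bytes,
// so shuffling rows for readability cannot change the hash.
function hashCommandRows(rows) {
  const canonical = [...rows].sort(
    (a, b) =>
      a.testId.localeCompare(b.testId) ||
      Number(a.presentationNumber) - Number(b.presentationNumber)
  );
  return createHash("sha256").update(JSON.stringify(canonical)).digest("hex");
}
```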


mcking65 commented Sep 5, 2023

@jscholes

I have updated the V2 format specification with the following changes.

Revised testId and assertionId descriptions:

  • Only include characters a-z, 0-9, and dash ("-").
  • Be generated from the test title by removing extraneous words, shortening some common words (e.g., navigate to nav), converting to lowercase, capitalizing the first letter of the second and subsequent words, and removing spaces and punctuation.
  • Be generated from the assertionStatement by removing extraneous words (e.g., 'conveyed'), shortening some common words, converting to lowercase, capitalizing the first letter of the second and subsequent words, and removing spaces and punctuation.
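
One possible reading of those rules as code (the stop-word and abbreviation lists are illustrative only, and the a-z restriction is taken as case-insensitive since subsequent words are capitalized):

```js
const ABBREVIATIONS = { navigate: "nav" };                        // illustrative
const EXTRANEOUS = new Set(["a", "an", "the", "to", "conveyed"]); // illustrative

function toId(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation
    .split(/\s+/)
    .filter((word) => word && !EXTRANEOUS.has(word))
    .map((word) => ABBREVIATIONS[word] ?? word)
    .map((word, i) => (i === 0 ? word : word[0].toUpperCase() + word.slice(1)))
    .join("");
}

// toId("Navigate forwards to a slider") -> "navForwardsSlider"
```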

Revised presentationNumber specifications for tests and commands:

  • A positive number that controls the order of presentation of tests in the test runner and reports. By default, tests will be presented in ascending numerical order using the values in this column, i.e., the values are Number, not String, primitives when sorted.
  • A positive number that controls the order of presentation of commands in the test runner and reports. By default, commands within a test will be presented in ascending numerical order using the values in this column, i.e., the values are Number, not String, primitives when sorted.

Revised requirements related to refId values as described in the assertion and references sections:

  • The refId values designate references to the ARIA or HTML-AAM specification covered by the assertion. Typically, an assertion should cover only one ARIA or HTML feature. Some assertions will cover a behavior that is not described by any specification, so this cell will be blank in those rows.
  • The convention for ARIA attributes is that the refId is equivalent to the ARIA attribute, e.g., slider or aria-orientation. For HTML-AAM mappings, it is a good idea to avoid ambiguity by using a refId that includes the string html, e.g., htmlButton or htmlLink.

Defined new values for reference types:

  • metadata: Indicates the refId is for information that applies to all tests in the plan.
  • aria: Indicates that the refId is for an ARIA attribute specification.
  • htmlAam: Indicates the refId is for an HTML element mapping specification.

Revised reference value specification for aria and htmlAam reference types.

Added requirements for linkText.

Added a section describing how link text and href values are calculated for reference links.
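
A sketch of how those calculated hrefs might look by reference type (the base URLs here are assumptions about the spec's conventions, not quoted from it):

```js
function refHref(ref) {
  switch (ref.type) {
    case "aria":    // e.g. refId "slider" or "aria-orientation"
      return `https://w3c.github.io/aria/#${ref.value}`;
    case "htmlAam": // e.g. refId "htmlButton"
      return `https://w3c.github.io/html-aam/#${ref.value}`;
    default:        // metadata references carry their own value/URL
      return ref.value;
  }
}
```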

Updated validation rules.


mcking65 commented Feb 7, 2024

Test Format Definition V2 is finalized and implemented in the aria-at app.

mcking65 closed this as completed Feb 7, 2024.