Mapping abstract test instructions to concrete, screen-reader-specific instructions #14
I've taken a stab at creating a document that maps abstract test instructions to screen-reader-specific concrete instructions, starting with the Checkbox example. Here is the document in Excel format:

What the document does

The document gives a list of concrete, screen-reader-specific commands for testers to execute for each test in the Checkbox example. For example, for "Read checkbox", it shows which concrete commands to press in JAWS, NVDA and VoiceOver in the different interaction modes.

Structure of the document

The document has three tabs:
Each tab contains one table that gives specific testing commands for that particular screen reader, for each screen reader mode and each Checkbox test.

Where this data comes from

I've put together these commands from:
To avoid mistakes, I've also tested these commands in JAWS, NVDA and VoiceOver today. Note: @Yohta89 's spreadsheets contain some specific test instructions for menubar, so they will come in handy when we do the same for menubar.

Next steps

I've noticed that putting these instructions together does indeed take time! I think it'd be good to tackle this as a group, and I also want to make sure that the commands are accurate. Before I and/or anyone else extends this work to menubar, I think it'll be useful to review and give feedback on the format and content of this document. On top of that review, here are questions that have already come up:

Questions unrelated to any screen reader
JAWS-related questions
VoiceOver-related questions
Thank you so much for this work and detailed notes! Great work, and great questions. Let's discuss on the next call.
JF, thank you for this comprehensive work! Two quick responses to the questions regarding JAWS. 2. In Matt's 'Test Simplification Exploration' spreadsheet from early July,
Questions unrelated to any screen reader
JAWS-related questions
VoiceOver-related questions
Thanks @Yohta89 and @mfairchild365. Reading your comments I realised that I had confused VoiceOver's QuickNav with 'Control Option Lock'. The link you shared, Michael, was useful. I do support the idea of testing with a smaller set of commands, at least to start with.

Heads up re: my limited availability for our Wednesday calls at the moment

Apologies for missing last week's call. I've just started work with a new client and a meeting ran over. I'm only about 60% confident I'll manage to join tomorrow, and I won't manage next week.
Does the format of the document allow for programmatic parsing of the commands? (see Excel file attached in the second comment) I'm imagining that the way that commands are listed might make them hard for a script to parse:
@spectranaut and @mfairchild365: is that an issue? I'm not quite sure how we could store the commands in a way that is easier for a script to parse, while also making it easy to review for all of us. I initially thought of putting the commands together in JSON format directly, but Matt rightly pointed out that that would make it harder for us to review the data.
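To make the trade-off concrete, here is a minimal sketch of how a script might parse a human-readable command cell into JSON-friendly data. The cell format assumed here ("+" joining keys within a shortcut, "or" separating alternative shortcuts) is a hypothetical illustration, not the actual format used in the spreadsheet:

```python
# Hypothetical sketch: parse a spreadsheet command cell into structured data.
# Assumes cells look like "Insert+Up Arrow or NumPad Plus" -- this format is
# an illustration, not the real spreadsheet's convention.

def parse_command_cell(cell: str) -> list[list[str]]:
    """Split a cell into alternative shortcuts, each a list of keys."""
    alternatives = [alt.strip() for alt in cell.split(" or ")]
    return [[key.strip() for key in alt.split("+")] for alt in alternatives]

print(parse_command_cell("Insert+Up Arrow or NumPad Plus"))
# [['Insert', 'Up Arrow'], ['NumPad Plus']]
```

If the cells followed a consistent convention like this, the human-readable spreadsheet could stay the source of truth and the JSON could be generated from it, rather than maintained by hand.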
Thanks all for your feedback last week.

Updates since last week

Here's what I've done since last week:
Link to the testing command files

Testing commands for checkbox
Testing commands for menubar

Next steps
Note: I'm not sure I'll be able to join our meeting tomorrow Nov 13.
I've now double-checked the testing commands with NVDA and JAWS and made some small updates to the documents that the links above point to.
I've been thinking about the "read checkbox" and "checkbox grouping" cases. I think the test harness needs to be redesigned given what JF has found: there are two categories of commands in each of these cases, namely "reading the checkbox by navigating to it" and "reading the checkbox after the cursor is already on it". Right now it is only possible to have one user instruction that describes both cases. I want to suggest something like this:
This would result in the same UI as in #15, but the test instructions would be broken out a bit more, maybe something like this:
I'm worried that this makes the test writing complicated to explain or learn. Maybe these two categories should have two different test files instead, or maybe we should just leave the test file as it is, with slightly confusing instructions (specifically, it currently says …).
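One way to picture the two-category idea is a test file that lists commands under separate categories for the same assertion. The sketch below is purely illustrative: the field names and the example key commands are hypothetical, not the harness's real schema or anyone's actual proposal.

```python
# Hypothetical sketch of a test file separating the two command categories
# JF identified. All field names and commands here are illustrative only.
reading_checkbox_test = {
    "task": "Read the checkbox",
    "assertion": "The checkbox's role, name and state are conveyed",
    "command_categories": {
        # Commands that move the cursor onto the checkbox, reading it on arrival
        "navigate_to_checkbox": ["Tab", "X (next checkbox quick key)"],
        # Commands that read the checkbox when the cursor is already on it
        "read_checkbox_at_cursor": ["Insert+Tab", "Insert+Up Arrow"],
    },
}

# Each category could then be rendered as its own instruction block in the UI.
for category, commands in reading_checkbox_test["command_categories"].items():
    print(category, "->", ", ".join(commands))
```

Whether this lives in one test file with two categories, or in two separate test files, is exactly the open question above.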
This work is complete: the menubar and checkbox tests have been written! :)
This first comment serves as a summary of the thread.
Latest testing command files
Testing commands for checkbox
Testing commands for menubar
Outstanding decisions
Next steps