Share code with a11y-outline #303
Good to virtually meet you, and thanks for your suggestion. a11y-outline is the inspiration for Landmarks’ issue #156 :-). I had a quick look at aria-api; abstracting over the source of an element’s semantics is very neat.

I’ve been thinking of factoring out the main tests for Landmarks so that they could be used as a general test suite for any code that finds landmarks (and, later, headings). This could be extensions like ours, or other assistive technologies such as screen-readers. We could also incorporate some of Landmarks’ benchmarking code. (I hadn’t yet factored out the tests because I’m mindful of the need to change the test case data structure to a tree, from the current list format, in order to accommodate headings.) Perhaps it would be a good idea for me to separate out that test suite, then we could use it to help us combine the core code?

There is another architectural matter I’ve known about for some time, but not had the time to explore. Since Landmarks started using mutation observers to track changes to dynamic pages, I have wondered if it would be more efficient to scan parts of pages for landmarks, rather than the whole page, on each mutation. This would require completely reworking the data structure for found landmarks (and, in future, headings), but it seems like it could bring significant performance benefits. I’m not completely certain of this, because it would require matching the tree of found landmarks/headings to the part of the DOM that changed, which would involve potentially many containment tests, as well as slightly increased memory requirements. Scans actually tend to be very quick already (and are avoided where possible), but for very complex pages, and ones that change a lot, this might be more appropriate. Seems like something for the future, though :-).
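Purely as an illustration of that idea (not how Landmarks works today), a partial rescan might look roughly like the sketch below, where scanForLandmarks and mergeSubtreeResults are hypothetical stand-ins for the real scanning and tree-maintenance code:

```js
// Hypothetical stand-ins for the real scanning and tree-maintenance code
function scanForLandmarks(root) { /* find landmarks within 'root' */ return [] }
function mergeSubtreeResults(root, results) { /* splice 'results' into the stored tree */ }

// Sketch: rescan only the subtree affected by each mutation, rather than the
// whole page, then merge the results into the existing tree of found landmarks.
const observer = new MutationObserver(mutations => {
  for (const mutation of mutations) {
    const subtreeResults = scanForLandmarks(mutation.target)  // scan just the changed part
    mergeSubtreeResults(mutation.target, subtreeResults)      // containment tests would happen here
  }
})

observer.observe(document.body, {
  childList: true,
  subtree: true,
  attributes: true,
  attributeFilter: ['role', 'aria-label', 'aria-labelledby']  // attributes that can affect landmarks
})
```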
Nice to hear you know of a11y-outline. I get very little feedback, so it's great to hear you even took inspiration from it! So far I have mostly focused on tests for the accessible name calculation, so getting more tests for the landmark part would be great. Anyway, that sounds like a reasonable first step. Let me know how I can help.

Concerning the mutation observers, I don't have too much experience with that. Performance has not been a big concern for me yet, but it sounds like an interesting idea. I would be interested in helping with that, too.
I understand what you mean regarding feedback; sometimes I am curious about how people are getting on with Landmarks. However, I have been lucky and got some helpful feedback, digitally and in person - some reviews, which is great, and several constructive issue reports, which is ace. I hope you get more feedback. If you’ve not seen it, you might be interested in reading an article that covers both a11y-outline and Landmarks.

I’ve been exploring some ideas relating to testing, and I think I’ve got a reasonable starting point for a test suite that would cover aria-api and Landmarks, based on Landmarks’ current tests; I’ll try to get an initial version published within a week. Then we could perhaps refine it, and later I would love to extend it to other assistive technologies. In the short term, I think the test suite will help us check that both our codebases are spec-compliant (assuming I’ve got the tests right; having your input on that would be great too), and whether there are any performance improvements we could make.

aria-api seems really nice (especially now I’ve actually been using it) and sharing code is a good idea, so I’d like to use it. There are some upcoming design decisions for Landmarks that we might need to discuss first, but working together sounds like a great idea. I’ll post some further thoughts on the design questions after I’ve got an initial release of the test suite to show you :-).
I’ve not met my intended deadline of a week to factor out the test suite, but it’s not far off :-). I’m writing the more general test harness in a test-first manner and, whilst it will speed up future development a lot, I had to do a bit of research into test libraries and into whether/how I might have some tests that read the file system (I managed to avoid that for all but, I think, a couple of tests). I’m AFK this weekend, but hope to have something to show you earlyish next week.
Is there a branch or something I could look at? I could try to include those tests in aria-api. That way I might be able to give feedback.
This is a basic version of the test suite from Landmarks that has been designed to work with other tools, such as a11y-outline/aria-api. The fixtures and expectations come from the Landmarks browser extension, but the code was written from scratch using TDD. There are lots more features to come, though this initial version is used as a drop-in replacement for the Landmarks extension test suite. For more information, consult matatk/landmarks#303
I have an update :-). I've factored out the Landmarks extension's test suite into a separate project, and made it so that the Landmarks extension's code now uses that project to run the landmarks-finding tests. If you'd like to run the test suite, try this:
Internally, the new test suite works by creating a node-tap test function for each test fixture. The fixtures are turned into DOMs using JSDOM. The test function runs the landmarks-scanning code against that fixture and checks whether the result matches the expectation. It uses node-tap to output the results, which includes a diff if a test is unsuccessful. (There's a rough sketch of this pattern at the end of this comment.)

I'm working on making the new project either depend on, or somehow include, Landmarks, a11y-outline and any other tools so that they can be tested "out-of-the-box", but for now the tests must be run from a checked-out Landmarks repo. (I think that using git submodules to ensure that the tests are being run on a known version of Landmarks or a11y-outline might be a nice idea.)

The test fixtures for the new project aim to be comprehensive and use a reasonably general data structure (i.e. a tree). Currently they only cover landmarks, but I plan to add headings and articles soon. Landmarks doesn't use a tree internally (it uses an array, and each found landmark is given a "depth"), whereas a11y-outline uses a tree, but each object that represents a landmark has slightly different property names, and some properties are different (it doesn't need to compute a selector, but it does have the "href" field). Therefore I created several "converter" functions that change the "standard" test expectation data into the formats recognised by Landmarks and (if I've done it correctly) a11y-outline.

In order to test a11y-outline, I had to restructure the code a bit so that the tree-creating code is in a separate module, and the tree can be returned either to the main a11y-outline extension code or to a new test script that simply runs a11y-outline's landmarks-scanning code against the test fixtures in the new project. Unfortunately I haven't fully completed this work, but I wanted to send you what I have so far as I know you'd like to try it. I have got quite far, but I'm a bit stuck at the moment, because both a11y-outline's and aria-api's code assumes the existence of certain globals. There seemed to be two possible solutions to this, and then I found another one...
There are some other things that the new project can do, or will be able to shortly...
The documentation needs a lot of work and I'd like to set up a GitHub Pages site on which the HTML results could be stored for different tools.
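To make the per-fixture mechanics described at the start of this comment a bit more concrete, here is a rough sketch (not the actual implementation; fixture loading and the converter step are elided, and findLandmarks is a placeholder for whichever scanner is under test):

```js
const tap = require('tap')
const { JSDOM } = require('jsdom')

// Placeholder data and scanner: in the real suite the fixtures/expectations
// come from the project's data files, and findLandmarks would be Landmarks'
// or a11y-outline's scanning code, wrapped in the appropriate converter.
const fixtures = [
  { name: 'one main landmark', html: '<main><h1>Hi</h1></main>', expected: [] }
]
const findLandmarks = doc => []

// One node-tap test per fixture: build a DOM with JSDOM, run the scanner,
// and compare against the expectation (node-tap prints a diff on failure).
for (const fixture of fixtures) {
  tap.test(fixture.name, test => {
    const doc = new JSDOM(fixture.html).window.document
    test.same(findLandmarks(doc), fixture.expected)
    test.end()
  })
}
```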
Wow, that sounds like a lot of work. Thanks a lot already! I did not have time to look into it properly, but it sounds like the main issue is with providing a single test runner that works for both implementations. Wouldn't it be much simpler to just share the fixtures and expectations and leave the execution to each project (see the sketch at the end of this comment)? That would also have the benefit that each project would just need to execute one test script instead of two (its own and page-structural-semantics-scanner-tests). That's the approach I took with accname-test.

Can you give a bit of context on why you chose to use JSDOM? So far I use mocha-headless-chrome. I always shied away from re-implementations of the DOM because I had a vague feeling that something could be missing, so I have no real experience with it. What are the pros and cons from your perspective?
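To sketch what I mean by sharing only the data: each project keeps its own runner, for example mocha in a headless browser. The require paths and fixture shape below are invented for illustration, and findLandmarks stands in for the scanner under test.

```js
// Runs under mocha (describe/it are mocha globals), e.g. via mocha-headless-chrome,
// so 'document' is a real browser document. Paths and data shapes are illustrative only.
const assert = require('assert')
const cases = require('page-structural-semantics-scanner-tests/fixtures.json')  // hypothetical path
const findLandmarks = require('./find-landmarks')  // hypothetical module under test

describe('landmarks', () => {
  for (const { name, html, expected } of cases) {
    it(name, () => {
      document.body.innerHTML = html
      assert.deepStrictEqual(findLandmarks(document.body), expected)
    })
  }
})
```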
Thanks for your very quick reply and constructive feedback; it's given me some good stuff to think about.

The main reason for providing a test runner was that I thought (read: assumed :-)) that, given the code that scans for landmarks (or headings, or articles) is going to be relatively simple, and similar across projects, it would be possible to have a simple test-running API, and that would bring some benefits: the test code could be pooled into one project; projects using it would automatically get reporting of test results, with diffs when the returned results were not as expected; and the generation of HTML results could be automated (so it'd be easy to compare scanners).

I had seen accname-test but not how it worked - oops :-). It's an elegant approach. You've helped me realise that an alternative might be to provide some smaller utilities that projects could optionally use, whilst still having their choice of test/assertion library and test environment.

I thought there'd be a couple of advantages to running the code outside of a browser, at least initially: it'd take less time, with fewer dependencies, to get to the stage of getting test output from both Landmarks and a11y-outline. As I'm hoping to experiment with testing screen-readers (somehow :-)) in future, browser support would be needed at some point. I didn't spot the difference in globals, though funnily enough I had to abstract away from a global myself.

With respect to JSDOM specifically: I know what you mean about it not being a mainstream browser, though I've been using it as part of Landmarks' test suite for a couple of years and it is solid, as well as quick and light on dependencies. Incidentally, I noticed we both use some of the same tooling. I've also used Karma and Puppeteer on other projects, and imagine they may be good for browser testing (I'd like to make it work on Firefox as well as Chrome).

What's next? I would like you to be able to use the tests as easily as possible, so I am wondering:
P.S. No rush to reply to this :-).
Today I finally found time to look at this again. I tried to integrate the tests with my existing setup and found some issues.

I run the tests in a browser, so Node APIs are not available. Instead I use browserify with brfs to bundle everything into a single file. This is not possible if...

Then I ran into another issue: the tests are executed inside of a host document. My existing tests only provide an HTML fragment that is injected into that document. Your test fixtures, on the other hand, are complete documents themselves. This is especially important for the "landmark-role-on-body" test. I am not sure what to make of that.

I was already able to find some bugs, both in my code and in the tests (I opened separate issues for those), so the shared tests are definitely useful. However, I am not sure yet how to integrate the test cases into my setup. I will think about it some more.
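For context on the brfs point: roughly, brfs inlines fs.readFileSync calls at bundle time, so it only works when the path can be worked out statically. A sketch, with example file names only:

```js
const fs = require('fs')
const path = require('path')

// Fine for brfs: the path is a static expression, so the file contents can be
// inlined into the browser bundle at build time. (File name is just an example.)
const fixture = fs.readFileSync(path.join(__dirname, 'fixtures/example.html'), 'utf8')

// Not fine for a static transform: this path is only known at run time,
// so there is nothing brfs can inline.
const name = process.argv[2]
const dynamic = fs.readFileSync(path.join('fixtures', name + '.html'), 'utf8')
```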
Thanks very much for your continued feedback and trying this out. I’ll respond to your bug reports as soon as I’m able. I’ve been working on several things (they’re in the page-structural-semantics-scanner-tests repo but not yet released):
I hadn’t considered the tests running the other way around - with the test suite running inside a single page in a browser. This makes a lot of things make sense now :-). The brfs documentation doesn’t mention...

Apart from the two fixtures that have... I've written such a script and put the results in the "try-combining-all-fixtures-expectations" branch. You can find the combined files in the "combined/" directory in the repo, and run...

The output isn't so neat at the moment, but (apart from the two tests it can't support) I'm wondering if this approach may be of help?
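Not the actual script, but the combining step is roughly this shape (the directory layout and output file name here are assumptions):

```js
const fs = require('fs')
const path = require('path')

// Gather every fixture (HTML) and its expectation (JSON) into one file, so a
// browser bundle can load a single asset rather than reading the file system.
const fixtureDir = 'fixtures'          // assumed layout: fixtures/<name>.html
const expectationDir = 'expectations'  // assumed layout: expectations/<name>.json

const combined = {}
for (const file of fs.readdirSync(fixtureDir).filter(f => f.endsWith('.html'))) {
  const name = path.basename(file, '.html')
  combined[name] = {
    fixture: fs.readFileSync(path.join(fixtureDir, file), 'utf8'),
    expected: JSON.parse(fs.readFileSync(path.join(expectationDir, name + '.json'), 'utf8'))
  }
}

fs.mkdirSync('combined', { recursive: true })
fs.writeFileSync(path.join('combined', 'fixtures-and-expectations.json'), JSON.stringify(combined, null, 2))
```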
That is exactly what I need. So now the question is: should the combination be in the page-structural-semantics-scanner-tests repository or in mine? I lean towards having a separate repository with only the fixtures and expectations, and separate ones that provide the test code. But I will leave that to you. Thanks for all the work!
Glad this helps :-). I think you’re right; I can foresee some code moving back to the Landmarks repo—though I am working on some smaller-scale code to help with collecting and collating results into JSON and then HTML. Maybe later I’ll try doing some benchmarking/profiling, but that really depends on where the scanner is run, so will see about that. It is a good idea to keep dependencies as light as possible. I have a few questions for you about the combined fixtures and expectations files:
I’ve been thinking about the issues you reported and will get back to you as soon as I’m able.
I think a fragment is fine.
I would leave that to the authors.
In my specific case that would make things easier. However, it would also mean that the expectations could only be used from JavaScript. If, for example, someone wanted to implement the same in C or Java or whatever, they would not be able to reuse the test cases. So I would recommend against that.
You may've noticed that my speed on this has slowed a bit (though we have closed some issues in the tests repo: no. 3, no. 1 and no. 2). I've got a lot on at the moment (and will for the next month or two), but am very keen to continue working on this and applying the tests to other tools.

I still agree that the tests repo should be simplified as much as possible, though I also like that it can be used as an out-of-the-box testing solution if that's what people would like. When I've implemented what I feel is the "minimum viable product" it should become apparent what can be moved back out and into the tools that make use of it.

The tests repo already seems useful for landmark tests, and thanks to your help it's fixed a bug in Landmarks that I'd missed :-). I'd like to add headings soon too (and maybe articles, as they are semantically separate). I'm working on some simple code that should allow results from different tools to be collated and compared, if I can figure out how to interface with those other tools :-).

This thread started due to your suggestion of us sharing code, and I am still looking forward to us doing that, too :-).
Hi,
I maintain a very similar extension called a11y-outline. I split the underlying code into a separate library called aria-api. As these projects are very similar in scope, I think we could share some code or at least learn from each other. If you think this is a good idea, my proposal would be to use aria-api in both extensions and work on that together.