
Integration tests refactoring #2925

Open

IgnatBeresnev opened this issue Mar 16, 2023 · 4 comments
Labels: epic (A large body of work that is broken down into smaller issues), tech-debt (A technical issue that is not observable by the users, but improves maintainers' quality of life)


IgnatBeresnev commented Mar 16, 2023

Dokka's integration tests run Dokka on real projects using various versions of Gradle, Kotlin and Android, and make sure there are no abnormalities: all links are present, everything resolves, the projects build, etc. This is extremely useful, and the same behaviour should be preserved.

Additionally, the integration tests are used for generating and publishing live demos of documentation to S3, which is very useful for reviewing UI/UX changes. This feature should be preserved as well.


However, the way the tests are currently set up is not optimal, and it causes problems in various places:

  1. The tests cannot be cached by Gradle because they declare incorrect inputs/outputs (see the sketch below). More information here.
  2. The tests are lumped together under the same task, so it's more difficult to have unique configuration per project (more information here).
  3. External test projects, such as kotlinx.coroutines, are imported as git submodules. This is annoying to maintain and update, and not obvious to someone encountering it for the first time. Maybe these projects could be added as dependencies; more information here.
  4. (to be continued)

The outlined problems should be researched and addressed if possible.
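
For problem 1, here's a minimal sketch of what correctly declared inputs/outputs for an integration test task could look like. The task name, project path, and property names are illustrative assumptions, not Dokka's actual build code:

```kotlin
// build.gradle.kts (a sketch only; task name and paths are assumptions)
val integrationTest by tasks.registering(Test::class) {
    // The test project sources must be an input so that editing them
    // invalidates the cache entry.
    inputs.dir(layout.projectDirectory.dir("projects/it-basic"))
        .withPropertyName("testProjectSources")
        .withPathSensitivity(PathSensitivity.RELATIVE)

    // The Dokka version under test must also be an input, otherwise a cached
    // run could "pass" without ever exercising the latest Dokka artifacts.
    inputs.property("dokkaVersion", project.version.toString())

    // Declare where the test writes its output so Gradle can cache/relocate it.
    outputs.dir(layout.buildDirectory.dir("integration-test-output"))
        .withPropertyName("testOutput")

    outputs.cacheIf { true }
}
```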


aSemy commented Mar 28, 2023

Improving the integration tests is also connected to improving Dokka's Gradle build files #2703, since the integration tests break project isolation in order to publish Dokka to Maven Local:

```kotlin
fun Task.dependsOnMavenLocalPublication() {
    project.rootProject.allprojects.forEach { otherProject ->
        otherProject.invokeWhenEvaluated { evaluatedProject ->
            evaluatedProject.tasks.findByName("publishToMavenLocal")?.let { publishingTask ->
                this.dependsOn(publishingTask)
            }
        }
    }
}
```

I'm working on a Gradle Plugin to solve this problem (though it's still awaiting approval on the Plugin Portal). I've actually tested it manually using Dokka! It required a significant number of changes though, so I'll make a PR as a demonstration, but I want to break it up into smaller pieces.
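
Independent of that plugin's actual API, the general idea of publishing to a build-local file repository instead of Maven Local could look something like this sketch (the repository name and directory are assumptions):

```kotlin
// Root build.gradle.kts (a sketch; all names here are illustrative)
subprojects {
    plugins.withId("maven-publish") {
        configure<PublishingExtension> {
            repositories {
                maven {
                    name = "integrationTest"
                    // A shared directory under the root build folder instead of ~/.m2
                    url = uri(rootProject.layout.buildDirectory.dir("it-maven-repo").get())
                }
            }
        }
    }
}
```

Test projects could then consume Dokka from that directory via an ordinary maven { url = ... } repository, avoiding both the Maven Local pollution and the cross-project task dependencies shown above.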


aSemy commented Apr 5, 2023

There was a discussion on the Gradle Slack this week, and https://github.com/gradle/exemplar was recommended for running integration tests. Although I find it pretty tough to understand how to use it, I think it would be worth investigating.


IgnatBeresnev commented Dec 5, 2023

Grooming notes

Introduction

We have some integration tests already, but they've proven to be somewhat difficult to maintain and extend, and they have some core problems that get in the way of other related work. These tests were a good start, but since their inception we've discovered additional use cases, figured out what works and what doesn't, and learned what's useful or would be nice to have in general.

The hypothesis is that it'll be much easier and faster to start completely from scratch and re-implement integration (e2e) tests based on the specific desires / requirements and our experience, than to try to salvage the existing tests. Moreover, starting from scratch should also be safer, as it'll allow us to have a seamless transition from the old tests to the new, without worrying that the old (primary) tests accidentally break after refactoring.

Types of tests

There are two types of integration / end-to-end tests:

  • Artificial local projects that we write ourselves to reproduce specific bugs and various common, basic setups.
  • Arbitrary user projects, such as ktor / coroutines / sqldelight.

They could share the same setup (as they do now), but because they are useful in different ways, it might be easier to separate them so that each type has its own requirements.

Artificial project tests

  • Useful for running parameterized tests to make sure there are no high-level compatibility problems with KGP's / Gradle's API.
  • Allows us to narrow down the scope of the test project, so that we can reproduce a specific setup or a specific problem.
  • Allows us to test compatibility with all supported versions (all the way from Gradle 6.9 to Gradle 8.4, and Kotlin from 1.5 to 1.9 and so on).
  • Helps with asserting known corner cases, but it does not help with finding new corner cases.

Examples of issues it helps find:

Notes:

  • Must be parameterizable (a sketch follows below).
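
As an illustration, a parameterized test over a version matrix could look like the following sketch, using Gradle TestKit and JUnit 5. The versions, the project path, and the kotlin_version property are assumptions:

```kotlin
import org.gradle.testkit.runner.GradleRunner
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.CsvSource
import java.io.File

class BasicJvmProjectTest {

    @ParameterizedTest(name = "Gradle {0}, Kotlin {1}")
    @CsvSource(
        "6.9,   1.5.31",
        "7.6.3, 1.8.22",
        "8.4,   1.9.20",
    )
    fun `dokkaHtml builds successfully`(gradleVersion: String, kotlinVersion: String) {
        val result = GradleRunner.create()
            .withProjectDir(File("projects/it-basic")) // hypothetical test project
            .withGradleVersion(gradleVersion)
            // Assumes the test project reads the KGP version from a Gradle property.
            .withArguments("dokkaHtml", "-Pkotlin_version=$kotlinVersion", "--stacktrace")
            .forwardOutput()
            .build() // throws if the build fails

        check("BUILD SUCCESSFUL" in result.output)
    }
}
```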

Arbitrary user projects

  • Useful for testing Dokka in real user projects that have custom build logic, additional buildscript dependencies, specific KGP compatibility flags, and a lot of custom and sometimes non-trivial code.
  • Helps find new breaking corner cases and new convoluted compatibility issues early on, as opposed to verifying known ones.
  • Helps not only with checking Gradle / KGP compatibility, but also with checking the analysis changes (i.e. the compiler analysis API and how we use it) and Dokka's engine logic. In a way, these tests cover the lower levels.
  • Provides a much bigger testing surface - it would be impossible to figure out and cover all of these cases manually from the get-go.
  • We use the issues uncovered by these tests to add new cases to our unit and artificial integration tests.

Examples of issues it helps find:

Notes:

  • These tests are also used to preview new frontend (HTML) features / changes in PRs: the integration tests run Dokka, Dokka produces HTML pages, and those get uploaded to S3 so you can open the result in your browser (see this workflow). While it's not a requirement, it's something that is nice to have; we can figure out a different solution if it can't be addressed with this refactoring.
  • These tests don't need to be parameterizable; they should use whatever KGP / Gradle versions are used by the project itself.
  • Sometimes we want to additionally configure Dokka in these projects, so not only apply it, but also change some of Dokka's settings; a sketch follows below.
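
For that last point, the test harness could append a configuration snippet to the user project's build script, so Dokka is not just applied but also reconfigured. A sketch, with illustrative settings (DokkaTask and the properties shown are from the current Gradle plugin):

```kotlin
// Appended to the user project's build.gradle.kts by the test setup (a sketch).
tasks.withType<org.jetbrains.dokka.gradle.DokkaTask>().configureEach {
    failOnWarning.set(false)
    dokkaSourceSets.configureEach {
        reportUndocumented.set(true)
    }
}
```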

Pain points

Some pain points of the existing tests:

  • User projects are initialized via git submodules, which are very annoying to maintain: they are not obvious, take a while to update to the latest versions, and adding new projects is not a straightforward process. Ideally, the new tests should not use git submodules.
  • Git patches that are applied at runtime on top of the user project submodules are also somewhat difficult to maintain, and they constrain how often we update the user projects: merge conflicts are not detected automatically, so you have to check out the project, apply the patch manually, and verify it's all good or re-generate it, which takes time. If we have 10+ user test projects (which is not unrealistic), it'll take quite a bit of time to update each one. Ideally, the new tests should not use git patches, or should at least simplify the update process.
    • Just an idea: maybe maintaining a separate fork of each user project for testing would be easier.
  • The existing tests work by publishing Dokka to Maven Local, which causes problems on CI (not enough space).
  • There are problems with caching, which is currently disabled. The bug to avoid here is tests being green even though they didn't run the latest Dokka and instead used a cached version. Changes in dokka-subprojects should mark the tests as not up to date.

General notes / requirements

Things to keep in mind while designing the solution:

  • It should be as easy as possible to add and update test projects, as we want and need to do it often.
  • We have different tests for the Gradle, Maven and CLI runners, but the primary goal here is to refactor the Gradle tests. If the Gradle tests cover all areas, including the analysis, the dependencies and various corner cases, then for Maven and CLI we only need basic tests that cover the runner part, which can be done separately and later.
  • There are two types of asserts:
    • Common asserts that either all tests or a subset of tests need. For instance, making sure there are no Error class substrings in signatures, or that private API is not documented. Right now these asserts are somewhat copy-pasted; it would be nice to have a single place for them, or at least to minimize the copy-paste.
    • Specific asserts that only cover specific test projects. For instance, verifying that cinterop types are resolved properly in a Kotlin/Native test project.
  • We need to be able to test different analysis implementations; for instance, the K2 analysis is enabled via a flag.
  • We need to be able to test both the new and the old Gradle plugins, as they will coexist for some time.
  • The versions of KGP / Gradle used for the tests are now hardcoded, which is fine, but there should be an ability to test the latest -dev versions of KGP and/or RC versions of Gradle from CI builds (i.e. support for dynamic versions).
  • The current integration tests take 1h+ to run, which is painful in PRs and hogs CI agents. There should be a differentiation between "full tests" that run everything, and a subset in the form of "smoke tests" that only test specific versions (maybe only the latest) or projects (maybe only 2 out of 10 user projects). Smoke tests are intended to be run for commits in PRs, and full tests overnight or triggered manually (a sketch of one way to split them follows below).
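
For the smoke/full split, one lightweight option is JUnit 5 tags combined with a Gradle property. A sketch; the smokeOnly property and the smoke tag are made-up names:

```kotlin
// build.gradle.kts (a sketch): `gradle test -PsmokeOnly` runs only tests tagged
// "smoke" (e.g. latest versions, a couple of user projects); a plain
// `gradle test` runs the full matrix.
tasks.test {
    useJUnitPlatform {
        if (providers.gradleProperty("smokeOnly").isPresent) {
            includeTags("smoke")
        }
    }
}
```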

Nice to have

  • Ideally, the tests should be runnable and debuggable locally, at least for the artificial projects.
  • Ideally, we need to be able to run different Dokka tasks, so not only dokkaHtml but also other/additional ones, so that we can verify other formats or intermediate results.
  • It would be nice to be able to save the HTML output of the integration tests somewhere, for example publish it as a TeamCity artifact or upload it to S3 / GitHub Pages. Some sort of post-action with a configurable (or hardcoded) output directory is needed.
  • In the future (Develop integration tests that compare the output of different Dokka versions #3142), we want to have integration tests that compare the produced HTML output with some other HTML output (either produced by a different version or hardcoded), so it would be nice if these tests could be re-used in some way to achieve that; a sketch follows after this list. It should not be enabled by default though, as some changes are expected and it needs to be controllable / not annoying, and used more for regression testing when updating dependencies / doing refactorings.
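
For the comparison mode, the core of it could be as simple as a recursive directory diff. A sketch, with an assumed function name and without the normalization (version strings, timestamps) that real output would likely need:

```kotlin
import java.io.File

// A sketch: fail if two Dokka HTML output directories differ in any way.
fun assertHtmlOutputsEqual(expected: File, actual: File) {
    val expectedFiles = expected.walkTopDown().filter { it.isFile }
        .associateBy { it.relativeTo(expected).path }
    val actualFiles = actual.walkTopDown().filter { it.isFile }
        .associateBy { it.relativeTo(actual).path }

    check(expectedFiles.keys == actualFiles.keys) {
        "Page sets differ: missing=${expectedFiles.keys - actualFiles.keys}, " +
            "unexpected=${actualFiles.keys - expectedFiles.keys}"
    }
    for ((path, file) in expectedFiles) {
        check(file.readText() == actualFiles.getValue(path).readText()) {
            "Content differs for $path"
        }
    }
}
```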

IgnatBeresnev commented

First iteration

It makes sense to start with something small and approachable, and then build on top of it.

The first iteration can focus on:

  • Initial research / design of the solution.
    • Gradle's exemplar, internal solutions (KGP's tests, community plugin for K2) can be compared and/or used for inspiration.
  • Assess the scope and the complexity of changes. Maybe some preparation work is needed (like something related to publishing or re-doing some build scripts).
  • Parameterized artificial (local) projects only, as they should be easier and will allow us to start compatibility testing against the latest KGP versions.
  • A restricted set of test projects (the rest we'll add later on):
    • A basic Kotlin/JVM project.
    • A basic KMP project (the exact targets are not that important).
    • A basic multi-module project.
  • A restricted set of asserts:
    • The task is executed successfully; fail otherwise.
    • Some general asserts shared by all test projects (no unresolved links, no private API documented by default); a sketch follows below.
    • An example of a project-specific assert, just to make sure it's possible to write one.
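
As an example of the general asserts, a sketch that scans the produced HTML for the "Error class" marker that unresolved types leave in signatures (the function name and directory handling are illustrative):

```kotlin
import java.io.File

// A sketch of a shared assert: no unresolved types in any generated page.
fun verifyNoErrorClasses(dokkaOutputDir: File) {
    dokkaOutputDir.walkTopDown()
        .filter { it.isFile && it.extension == "html" }
        .forEach { page ->
            check("Error class" !in page.readText()) {
                "Found an unresolved 'Error class' type in ${page.path}"
            }
        }
}
```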
