[Discussion] Generator Compatibility Check and Table #503

Open
grokify opened this issue Jul 8, 2018 · 7 comments

@grokify
Member

grokify commented Jul 8, 2018

Overview

This is a discussion of some items that I think could increase adoption of OpenAPI Generator. Two goals:

  1. Provide API publishers and users with a high degree of confidence that an API of interest, given its spec, is supported by the generators of interest. The idea is similar to browser compatibility tables, but with a report specific to a given API, covering OAuth, requests, and responses.
  2. Provide the OpenAPI Generator community with information on gaps in popular features, or in specific features used by popular APIs, which, if supported, could increase adoption of OpenAPI Generator in those developer communities.

These can lead to:

  • more API publishers using OpenAPI Generator client SDKs as their official SDKs
  • more API users using OpenAPI Generator client SDKs, vs. creating their own manually

I'm interested in contributing to this given that I'm both an API publisher and an API user. As a publisher, I want to support multiple languages with official and community SDKs, and it would be nice to easily see how much support there is for my APIs across a set of languages. As a consumer, I work with many different APIs, and it would be useful to see the same report for each of those APIs in my languages of choice.

What to Build

The following are some items that may be useful to make this happen, working backwards from the end product.

  1. Online Readiness Reporter: this would allow the input of a spec via URL or copy/paste. In addition to validating the spec, the readiness reporter would present a table of what is supported vs. not supported by the selected generators. If everything is supported, the user is good to go. If not, they know the delta and can move ahead with the client with that information (a rough sketch of such a check follows this list).
  2. Generator Compatibility Tables: build a general OpenAPI Generator version/language support table, similar to browser compatibility tables (e.g. https://caniuse.com/, https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage#Browser_compatibility). Ideally this would be auto-generated by reviewing CI test logs. A table like this is useful for checking a specific feature, but many API surfaces are large, so the spec-to-compatibility-table report above is the more important deliverable.
  3. Test API Spec Enhancements: the compatibility tables could be auto-generated if we had more tests that all generators can use. We would need to match specific tests to features in the compatibility table. One option is to have the operationId or an x-property in the test spec indicate which tests an endpoint covers; an x-property could list multiple capabilities per endpoint.
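As a rough illustration of item 1, here is a minimal Java sketch of the core readiness check, under the assumption that each spec feature and each generator capability is identified by a simple string UID (as in item 3). All class and method names here are hypothetical, not existing OpenAPI Generator APIs.

import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

final class ReadinessReport {

    /**
     * For every feature the spec uses, record whether the selected generator supports it.
     * An all-true report means "good to go"; the false entries are the delta the user
     * needs to work around or wait for.
     */
    static Map<String, Boolean> check(Set<String> featuresUsedBySpec,
                                      Set<String> featuresSupportedByGenerator) {
        Map<String, Boolean> report = new TreeMap<>();
        for (String feature : featuresUsedBySpec) {
            report.put(feature, featuresSupportedByGenerator.contains(feature));
        }
        return report;
    }
}

The real work is, of course, in extracting the feature set from a spec and in knowing what each generator actually supports; that is what the tables and tests described above would provide.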

Additional Items

  1. Test Cases / Generators for OpenAPI Non-Supported Features: Some APIs have features that are not well supported by the OpenAPI spec. We could come up with alternative representations using x-properties and implement them in clients to increase adoption. At the same time, we could make the case to the OpenAPI Initiative for the importance of certain capabilities, given increasing adoption.

Considerations

  1. Lengthening Test Times: This approach would add many more test cases and could lengthen integration test times.

Existing Work

  1. We already have a number of tests that we could organize into a compatibility table and report to see functionality across generators. We would need a set of spec endpoints that could feed a compatibility report, tests against those endpoints, and a report system to consolidate the results.
@jmini
Member

jmini commented Jul 9, 2018

This is a very interesting topic... OpenAPI Generator does a lot to guard against regressions, but we have nothing that scales.

When I work on an issue, I like:

  • A small OpenAPI spec to reproduce the case.
  • A unit test that illustrates the case.

But sometimes this is not enough: to test that a feature is supported, we need to generate a project and then run some tests against it.

@wing328 once told me that expanding petstore.yaml or petstore-expanded.yaml is not necessarily the solution right now. It gets too big, and because it is shared by all projects, it is difficult to add something that is only supported by a few generators.


Example: take issue #49 (enum values are not necessarily String).

components:
  schemas:
    IntEnum:
      type: integer
      format: int32
      enum:
        - 1
        - 2
        - 3
    LongEnum:
      type: integer
      format: int64
      enum:
        - 20
        - 30
        - 40
    StringEnum:
      type: string
      enum:
        - "c"
        - "b"
        - "a"

This was implemented for Java and Kotlin with #75.
I did not check, but I imagine that with some other languages the generated code might have compile errors.

If this new pattern is added to the petstore, the generators for all languages need to be fixed.
I do not have that kind of knowledge.

How did I test it? I have my own spec that I used to test the PR.
To ensure there is no regression, we only have the unit tests.

A good integration test would be to generate the projects corresponding to the test spec and to write unit tests that exercise the generated enum classes (a rough sketch of such a test follows the list below).

If we start to create a small specification for each small feature (like this one, "enum with different types for the values"), the number of sample projects will be huge. But this is the right solution, because for a given language:

  • If the small project does not compile => the feature is not supported at all for this language.
  • If the project compiles but the tests are not OK => the feature is "Work-In-Progress" for that language.
  • If the tests are green => the feature is supported for this language.
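A rough sketch (Java, JUnit 5) of the kind of unit test that could live in the manually written part of such a sample project. The nested IntEnum below only stands in for what the Java generator is assumed to produce from the IntEnum schema above; the real generated class may look different.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class IntEnumTest {

    // Stand-in for the generated class; in the real setup this would come from the
    // generated project, not be declared inside the test.
    enum IntEnum {
        NUMBER_1(1), NUMBER_2(2), NUMBER_3(3);

        private final int value;

        IntEnum(int value) { this.value = value; }

        int getValue() { return value; }

        static IntEnum fromValue(int value) {
            for (IntEnum e : values()) {
                if (e.value == value) return e;
            }
            throw new IllegalArgumentException("Unexpected value: " + value);
        }
    }

    @Test
    void roundTripsIntegerValues() {
        assertEquals(IntEnum.NUMBER_2, IntEnum.fromValue(2));
        assertEquals(3, IntEnum.NUMBER_3.getValue());
    }

    @Test
    void rejectsUnknownValues() {
        assertThrows(IllegalArgumentException.class, () -> IntEnum.fromValue(42));
    }
}

A compile failure, a test failure, or green tests would then map directly onto the three statuses above.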

Another good candidate: "enum in array of array". I have solved it for Java, but #105 is still open for other languages. Example spec (OAS2):

definitions:
  SomeObj:
    type: object
    properties:
      myProp:
        type: array
        items:
          type: array
          items:
            type: string
            enum:
              - a
              - b

Again, this would mean another sample project for a small spec.


Maybe a solution would be to create another repo for the report. We would just update it after each release, or introduce new specs for new features.
In comparison to the existing samples/ folder, we might create only one Java client sample (and not one for each library).

If we detect a regression, we open a new issue and fix it during the next week for the next patch release.

@grokify
Member Author

grokify commented Jul 10, 2018

You mention a great example. If an API of interest had an "enum in array of array", as in #105, it would be great to know quickly that this works in Java but not in other languages, especially against a specific API's spec. There may be other things that work in other languages but not in Java.

I think it's important to add more endpoints covering API/spec features, as the test spec will then better represent APIs found in the wild and ultimately make the project more useful out of the box, with higher confidence.

When generators don't support features, it is an opportunity to identify areas of improvement to work on. It can take quite a bit of effort to build an SDK only to find out, through trial and error and by debugging many compile-time and run-time failures, that many of the features you need don't work. After the issues are resolved for one generator, you're left wondering how many similar issues exist for other generators of interest, especially when working on a developer program that aims to support many different languages in which you aren't necessarily an expert.

Many API specs don't use every feature of the spec format. When a specific spec is loaded, it could be checked against just the features that spec needs. The report could provide information on the remaining gaps, which could then be addressed, listed as known issues, or removed from or modified in the spec for certain SDKs.

To scale the testing process, an approach that breaks the testing apart is necessary, as you mention.

One approach is to run the full suite of tests separately, as discussed above.

Another approach I've been thinking of is to split the generators into separate libraries, each with its own test suite that can be run independently, rather than running everything together when the generators are compiled into a single CLI. For example:

  1. each generator as a separate Java library that has its own test suites against a common spec
  2. the core library tests the interfaces and one client and server implementation
  3. the CLI packages the core library and stable versions of the generators that ran their test suites separately
  4. test reports for each generator are collected and compared

@jmini
Member

jmini commented Jul 10, 2018

Another approach I've been thinking of is to split the generators into separate libraries, each with its own test suite that can be run independently, rather than running everything together when stable versions are compiled into a single CLI.

Yes, this has also been discussed before. I think it is part of the vision shared by @jimschubert (currently on holiday). My take is that such a move requires:

1/ A much cleaner "separation of concerns" between the layers of the current generators. This is a global architecture question.

Let me give you one example: at the moment all generators derive from DefaultCodegen. Let's assume you split DefaultCodegen out into a core library and JavaClientCodegen into another module; you then end up in a build nightmare.

This is something I experienced when I was working on Swagger-Codegen v3. Some code was in one repo, some in another. You end up with chained PRs that need to be merged in a specific order.
Sometimes a change in one repo was a breaking change, so when you pushed something to core, the other repo (containing the templates) broke.

I am not saying that we should never split the generators apart, I am just saying that this is not a trivial operation.

Another example of this separation of concerns is the template engine mentioned in #510. If we give each generator the flexibility to decide which template engine it wants to use, then the responsibilities of core and of the language libraries also need to be really clear.

2/ A clear API
Related to point 1/, we need a clear API between the layers. This API needs to be stable, and we should probably use strict "semantic versioning" in order to detect when we are breaking things.

3/ Consuming the libs:

If we split the generators into different modules, then we need to know how we want to consume them.

For the Maven and Gradle plugins this is not really a problem: you can add additional generators to the classpath and everything works fine.

For CLI usage, at the moment we build a big jar containing the core and all the generators. If you start to separate things, you need to know how you will bring the components back into a single "thing" that can be started.


Sorry for the big digression, back to the main topic.

I like the idea of working on a “Compatibility Check” project. Here is how I would see it:

a/ A separate repo for the moment. In the long term this might replace the integration tests we have in the /samples folder.

b/ A folder containing small specifications, each illustrating one feature. Examples:

  • enum values that are not necessarily String
  • enum in array of array
  • allOf construct
  • oneOf construct

This would look like this:

/spec/Case001.yaml
/spec/Case002.json

Of course we can have a README that groups the cases by topic (model, security, ...).

c/ A list of generators we would like to have under test. Examples:

  • java client with settings A, B
  • go client with settings X, Y, Z

We need to see how we identify them (technically, and then in the reports):

Generator001
Generator002

Because I would like to work directly at the core level (no CLI, no Maven or Gradle integration), I think this information needs to be stored as Java code (maybe one class for each case, each of them implementing a common interface).
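A minimal sketch, assuming the "one class per case" idea above; the interface and class names are illustrative only, not something that exists today.

interface CompatibilityCase {
    String id();            // e.g. "Case001"
    String specPath();      // e.g. "/spec/Case001.yaml"
    String description();   // e.g. "enum values are not necessarily String"
}

final class Case001 implements CompatibilityCase {
    @Override public String id() { return "Case001"; }
    @Override public String specPath() { return "/spec/Case001.yaml"; }
    @Override public String description() { return "enum values are not necessarily String"; }
}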

d/ For each combination of case and generator setting, it should be possible to manually write unit tests that ensure the generated code does what it should.
Those projects also need to be stored somewhere.

e/ Then we have a "Generate" Java program: it takes the three inputs (case, generator setting, and manually added code) and puts everything in a generated/ folder:
generated/Case001_Generator001/
generated/Case001_Generator002/
generated/Case002_Generator001/
generated/Case002_Generator002/

I could imagine that each of these folders contains:

  • the generated output
  • the manually written addition
  • some build file (a pom.xml or a gradle file in the java world)

For Java as an example, I imagine:

generated/Case001_Generator001/generated/.... => the output of OpenAPI Generator
generated/Case001_Generator001/manual/.... => the additional test cases; this module declares a dependency on the generated code.
generated/Case001_Generator001/pom.xml => a standard build artifact that might itself be generated.

But this setup could be different for each language.

f/ Then we need to be able to run each of the generated examples. For each run we would like to collect this information:
=> Project identifier (e.g. Case001_Generator001)
=> Run identifier (which commit of the repo was built, where the test was executed, when it was executed, …)
=> Did the project compile? YES/NO
=> Are all the tests green? YES/NO

This information needs to be uploaded somewhere so that it can be collected by the next step.
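A rough sketch of the per-run record described in f/; the field names are illustrative, and the actual upload and report format is still to be defined.

import java.time.Instant;

final class RunResult {
    final String projectId;    // e.g. "Case001_Generator001"
    final String commit;       // which commit of the repo was built
    final String environment;  // where the test was executed
    final Instant executedAt;  // when it was executed
    final boolean compiled;    // did the project compile?
    final boolean testsGreen;  // are all the tests green?

    RunResult(String projectId, String commit, String environment,
              Instant executedAt, boolean compiled, boolean testsGreen) {
        this.projectId = projectId;
        this.commit = commit;
        this.environment = environment;
        this.executedAt = executedAt;
        this.compiled = compiled;
        this.testsGreen = testsGreen;
    }
}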

g/ Based on this information, different reports should be computed (like the table mentioned in the first message of this issue).

h/ In order not to have everything moving at once, I propose to only use released versions of OpenAPI Generator and to update as soon as a new release is published.

Please tell me what you think and if you are ready to work on this...

@grokify
Member Author

grokify commented Jul 13, 2018

1/ Separation of Concerns

One approach may be to have a monorepo that includes a default generator, possibly Java since the core codebase is Java. This way, if work is needed on a generator and the core at the same time, it can be done in the monorepo. Other generators could then live in their own repos but use the interfaces established in the monorepo. All the generators could be in the monorepo as well.

I don't know the generator-specific communities well enough to say whether leaving the template engine up to each generator is a good idea at this point. The nice thing about using a commonly available engine like Mustache or Handlebars is that developers can move between generators and the templates stay similar; that's my gut feeling at the moment.

2/ Clear API

Agree.

3/ Lib Consumption

A CLI could have a list of generators with versions that are known at compile time. If you wanted to update one, you could update the same version on your local system or increment the version locally. I'm not sure this is the best solution, but it would allow us to build the same type of jar we have today, both for release and for development.

a/ Separate Repo

Agree this should start separately until it is reasonably mature.

Separately, I've been thinking of setting up a project that auto-builds multiple SDKs against a set of specs whenever there is a new engine build. This way I can be more confident that a change has no regressions against a set of real-world and test specs.

b/ Separate specs per test

I've been thinking about this and believe it may be reasonable to have a decentralized, multi-file spec. This way it's easy to isolate the assets for a particular test, and they can also be combined into a single client and server pair. Here is a real-world example:

https://github.com/grokify/api-specs/blob/master/incontact/admin/v9.0/admin-api-docs

c/ Generator per test

I think each test should have a UID that is represented in the spec, e.g. the operationId of an endpoint or an x-property. Each generator can then declare a list of supported test UIDs.
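A hypothetical Java sketch of that idea: the spec tags each test endpoint with a UID, each generator under test declares which UIDs it supports, and the compatibility report is the difference between what a given spec needs and what the generator declares. None of these names exist in the project today.

import java.util.HashSet;
import java.util.Set;

interface GeneratorUnderTest {
    String id();                      // e.g. "java-client-settings-A"
    Set<String> supportedTestUids();  // UIDs taken from operationId or an x-property in the test spec

    /** UIDs the spec needs but this generator does not claim to support. */
    default Set<String> unsupported(Set<String> uidsRequiredBySpec) {
        Set<String> missing = new HashSet<>(uidsRequiredBySpec);
        missing.removeAll(supportedTestUids());
        return missing;
    }
}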

d/ Manual test code repos

Agree that manually written tests need a repo home. Ideally, all the assets for a generator test can be contained in the same repo, including the information from c/.

e/ Test Approach

This approach seems to generate a new client/server for each test case. What is the advantage of this over running one client or server with all the tests?

f/ Test Result Aggregation

Agree test result information needs to be aggregated for reference and display.

g/ Reports

Agree. These reports will be the compatibility reports I discussed earlier.

h/ OpenAPI version

If this is tied to a re-architecting of the code, as hinted at in 1/ and discussed in #510, then tying it to released versions only adds some unknowns, including when the re-architecting would be complete.

i/ Availability

I think we need to get more clarity on the design, and we need @jimschubert's involvement. Once we nail down the design, we need to know which parts need to be worked on in which order and who has the availability to do so. It also seems that, before we have a final design, it may be desirable to do some exploratory proofs of concept.

That being said, while I continue to work on enhancing the Petstore spec for testing, I have started two efforts:

  1. Building a smoke-test system that auto-builds multiple client SDKs using an OpenAPI Generator release and runs some tests on them, likely starting with compiling the SDKs and running a few simple API requests as sanity tests. This will cover several SDKs I maintain and allow me to more easily verify and update them as new generator changes occur.
  2. Building out a generic spec with UID-identified tests, at this point keeping all tests in a single decentralized spec.

@jmini
Member

jmini commented Jul 13, 2018

I really had two ideas in mind.

The refactoring (1/, 2/, 3/) and the architecture. This is a sort of digression, more of a related topic compared to what is really being discussed here (the report and the table).

I like your idea of keeping everything in a single repo. My thoughts were driven by the Swagger-Codegen v3 experience, and you are right: we can separate things without splitting the build and the repos yet. This is a very interesting point.


As for the compatibility table check, this is something that could be started now. It will take time to find the appropriate setup.

I also have a sort of test OAS3 spec (treepots.yaml) that I use to exercise some of the concepts and some of the issues reported in this project.

I proposed separate tests in my point b/ because some constructs are not just unsupported, they produce code that cannot compile. The complete project then fails to compile, meaning you get a "not supported" result.

Sticking with my example "enum values are not necessarily String": it was reported that if you use numbers as values, the generated Java code does not compile (#447).

So if we have one single big OAS3 input, the result might be really poor. At the very least, the javac compiler does not do partial compilation of a project.

This is the main reason why it is so complicated to add something to our current "Petstore spec for testing". If you added something like my "enum values are not necessarily String" test spec, I guess some of the projects would fail in CI, and this is not desirable for the OpenAPI Generator project.
Unlike you, I maintain only the Java-based generators, because I am not that comfortable with other languages and would have no way to fix the compile errors there to make the build green again.

I think this is the main difference between what you are describing and what samples/ in this project does. In the "compatibility check" project, we need to handle cases where the generated code is wrong.

I agree with you on one point: finding the correct granularity for the input specs will be really complicated. To stick with my example "enum values are not necessarily String", I do not think we want to create one test case for each type ("string", "integer", "number", ...). On the other hand, the smaller the specs are, the better the results will be.

I am really interested in developing the right framework with you, because my small experiment in my personal repo is going nowhere. We need to join forces, and we each need to focus only on the generators we support (for example, I cannot consider myself a maintainer of the Spring-based Java server generator, because I would have no clue how to solve the errors there when they occur).

@jimschubert
Member

jimschubert commented Jul 14, 2018

There are a lot of different discussions in this thread, some potentially unrelated to the title/description of having a compatibility table. I had brought this up as an internal private discussion, so I'm glad to see it come up here so we can discuss it more with the wider community. I have a personal interest in this work, as I hope to improve my IntelliJ plugin to support a similar feature compatibility matrix (or search).

I created a prototype maybe a year ago, written in Kotlin, that addresses a number of the concerns raised in this thread. I'm looking for time to put together a design based on the prototype, but I haven't had much time lately.

Feature Compatibility

Metadata about generator support isn't that difficult to implement. In my prototype, I've implemented each feature as a flag (bitmask), where each feature group is then a property of a metadata object. Here's the gist of what that setup entails (missing some features for OAS v3):

class BitMask(val value: Long)
interface Flags  {
    val bit: Long

    fun toBitMask(): BitMask = BitMask(bit)
}

// Flags extensions
infix fun Flags.and(other: Long): BitMask = BitMask(bit and other)
infix fun <T: Flags> Flags.or(other: T): BitMask = BitMask(bit or other.bit)
infix operator fun Flags.plus(other: Flags): BitMask = BitMask(bit or other.bit)
inline fun <reified T> enabledValues(mask: BitMask) : List<T> where T : Enum<T>, T : Flags {
    return enumValues<T>().filter {
        mask hasFlag it
    }
}

infix fun BitMask.or(other: Flags): BitMask = BitMask(value or other.bit)
infix operator fun BitMask.plus(other: BitMask): BitMask = BitMask(value or other.value)
infix operator fun BitMask.plus(other: Flags): BitMask = BitMask(value or other.bit)
infix fun <T: Flags> BitMask.hasFlag(which: T): Boolean {
    // an Undefined flag is a special case.
    if(value == 0L || (value > 0L && which.bit == 0L)) return false

    return value and which.bit == which.bit
}
infix fun <T: Flags> BitMask.unset(which: T): BitMask = BitMask(value xor which.bit)

// Features
enum class ClientModificationFeature(override val bit: Long) : Flags {
    Undefined(0),
    BasePath(1 shl 0),
    Authorizations(1 shl 1),
    UserAgent(1 shl 2);
}
enum class DataTypeFeature(override val bit: Long) : Flags {
    Undefined(0),
    Int32(1 shl 0),
    Int64(1 shl 1),
    Float(1 shl 2),
    Double(1 shl 3),
    Decimal(1 shl 4),
    String(1 shl 5),
    Byte(1 shl 6),
    Binary(1 shl 7),
    Boolean(1 shl 8),
    Date(1 shl 9),
    DateTime(1 shl 10),
    Password(1 shl 11),
    File(1 shl 12),
    Array(1 shl 13),
    Maps(1 shl 14),
    CollectionFormat(1 shl 15),
    CollectionFormatMulti(1 shl 16),
    Enum(1 shl 17),
    ArrayOfEnum(1 shl 18),
    ArrayOfModel(1 shl 19),
    ArrayOfCollectionOfPrimitives(1 shl 20),
    ArrayOfCollectionOfModel(1 shl 21),
    ArrayOfCollectionOfEnum(1 shl 22),
    MapOfEnum(1 shl 23),
    MapOfModel(1 shl 24),
    MapOfCollectionOfPrimitives(1 shl 25),
    MapOfCollectionOfModel(1 shl 26),
    MapOfCollectionOfEnum(1 shl 27);
}
enum class DocumentationFeature(override val bit: Long) : Flags {
    Undefined(0),
    Readme(1 shl 0),
    Model(1 shl 1),
    Api(1 shl 2);
}
enum class ModelReuseFeature(override val bit: Long) : Flags {
    Undefined(0),
    Composition(1 shl 0),
    Inheritance(1 shl 1);
}
enum class ParameterFeature(override val bit: Long) : Flags {
    Undefined(0),
    Path(1 shl 0),
    Query(1 shl 1),
    Header(1 shl 2),
    Body(1 shl 3),
    FormUnencoded(1 shl 4),
    FormMultipart(1 shl 5);
}
enum class SecurityFeature(override val bit: Long) : Flags {
    Undefined(0),
    BasicAuth(1 shl 0),
    ApiKey(1 shl 1),
    OAuth2(1 shl 2);
}

Separation of concerns

Conceptually, we have the following concerns:

  • Generation workflow
  • Templating engine
  • Global Options
  • Framework generator
  • Language definition
  • Generator Options
  • Spec loading
  • Spec parsing
  • API definition ADT (e.g. CodegenModel)
  • Template object ADT (currently, a map)
  • potentially others I can't think of at the moment

Everything above is rolled into DefaultGenerator and DefaultCodegen (or a derived class). There's very little reusability. Where we have reusability at the language level, it is done via an abstract base class. This is problematic because a change in the base class for a target framework (you may be asking why framework logic exists in a language base class…) can inadvertently introduce a bug in other generators sharing the same base class.

Our current design makes a refactor a little more difficult because the entry point into the process is DefaultGenerator, which holds a reference to the generator (a CodegenConfig/DefaultCodegen instance). We use a form of delegation from the generator to the "config" component at almost every step of the workflow. This is intimidating for new contributors because it's not only undocumented, but hard to follow even while debugging.

Here is how my prototype is basically separated:

[diagram: rough component separation of the prototype]

I threw that diagram together pretty quickly, so I'll comment on just a couple of things.

  1. Users can load any parser implementation as long as it fits the parser interface from Core.
  2. The workflow and templating engine are components of the framework generator (the opposite of what we have today). This allows generators to use the "default workflow" or define their own.
  3. Specs are loaded into a single data type, with all reconstruction done as a spec transform. My prototype is meant to target more than just OpenAPI specifications, so the transforms are loaded from the classpath. A transform is expected to make the API definition (equivalent to CodegenModel) as close to perfect, as far as the feature set goes, as possible.
  4. The templating engine isn't one-for-all. Different frameworks can define their own default templating engine, and a user can override it via command-line options (for example, if the built-in framework uses ELB but the user wants Handlebars, they can switch).
  5. The API definition is not tightly coupled to OpenAPI the way ours is now. It's an abstraction, so there is a transform from spec to API definition (a rough sketch of these seams follows this list). This allows us to better unit test and to compare between specification formats via automation rather than generated integration tests. It also makes it easier to add another generation target for additional spec formats. After all, the generator project is just ETL from a spec to some set of files… why not output a different spec format?
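To make points 1, 3, and 5 a bit more concrete, here is a hypothetical Java sketch of the kinds of seams being described; these names come neither from the prototype (which is Kotlin) nor from OpenAPI Generator's current code.

import java.io.IOException;
import java.io.InputStream;

/** Spec-agnostic API definition (the "CodegenModel equivalent" mentioned above). */
final class ApiDefinition {
    // models, operations, security schemes, ...
}

/** Anything that can turn a spec document into an ApiDefinition; loadable from the classpath. */
interface SpecParser {
    boolean supports(String formatHint);              // e.g. file extension or media type
    ApiDefinition parse(InputStream spec) throws IOException;
}

/** A transform that massages the ApiDefinition toward a generator's expected feature set. */
interface SpecTransform {
    ApiDefinition apply(ApiDefinition input);
}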

Plugin architecture

In my prototype, I've also worked out loading external plugins on demand. The problem is a little more involved than the above discussion suggests.

First, we need a clearly defined interface for the plugins. Then, we also need to separate the loading of plugins such that two conflicting versions aren't on the same classpath. My prototype does this by implementing a Maven repository system that downloads plugins on demand and allows you to define the repository directory for a given run.

As an example, my prototype has a configuration system with the following format:

{
  "cli": {
    "mergeDefault": false,
    "repositories": [
      "https://dl.bintray.com/",
      "https://oss.sonatype.org/content/repositories/public/"
    ],
    "extensions": [
      "us.jimschubert:csharp:3.0"
    ]
  }
}

This defaults to loading plugins from the local Maven repository (~/.m2/repository). Suppose my plugin is us.jimschubert:csharp:3.0, but this version doesn't output C# 4.0 code because that support was deprecated in the previous version and removed in the current one. As long as my core interfaces are the same, I can load the hypothetical previous version with C# 4.0 support from us.jimschubert:csharp:2.9 and define a different Maven location:

{
  "cli": {
    "mergeDefault": false,
    "mavenLocal": "/src/generators/csharp-4.0-support",
    "repositories": [
      "https://my-internal-nexus/"
    ],
    "extensions": [
      "us.jimschubert:csharp:2.9"
    ]
  }
}

This would result in all required artifacts being cached under /src/generators/csharp-4.0-support, and pulling only from https://my-internal-nexus/ for this individual CLI run. I've only tested this a handful of times, but it seems especially useful for CI. I don't have environment variable interpolation or anything. (edit: also, the prototype doesn't support exclusions to allow advanced conflict resolution scenarios)

NOTE: This plugin architecture and ability to load differing versions via config is only really relevant when running from CLI. I don't know of a way to load two different versions from classpath in the same JVM instance, and if there was a way I wouldn't recommend it.

Testing

There's a lot of discussion about the samples being used as some form of integration testing… this doesn't make a whole lot of sense. All you really end up testing is whether or not the output compiles, not that the code being compiled was generated correctly or that feature sets haven't accidentally been affected. And there's no good way around this while executing all samples as if they were tests in CI.

By creating a good abstraction before template generation, we can easily write automation tests against that abstraction. That is, if "enum is array of array" is supposed to be supported by a generator, we should know what this looks like before it is applied to the template. There are some languages or frameworks where we have to apply gnarly hacks due to logic-less Mustache and the nature of the language or framework (see the Finch server generator). In these cases, it's simple enough to apply a very minimal spec to a generator, validate the interface before applying it to the templating engine, and then apply it only to a subset of a template. This might require that built-in templates follow a strict convention, but I think the benefits greatly outweigh the complexity of an overly complicated CI pipeline.

Breaking changes

I know this wasn't mentioned in this issue, but it's very important with regards to the refactoring discussion.

Currently, we have way too many points of contention for introducing "breaking changes". This is only magnified by the number of generators in the core project. If we move toward generators in their own projects/repositories, this simplifies CI but could potentially cause issues with propagating breaking changes.

If we have a released set of interfaces for the generator, the object applied to templates, and other core components, then changes to those interfaces would clearly indicate our breaking changes. I feel this would simplify the identification of breaking changes for the core team, but we would then need a way to notify plugin authors of those breaking changes.

jimschubert added a commit that referenced this issue May 4, 2019
GeneratorMetadata offers an immutable object created via Builder pattern
which allows generators to explicitly define their stability (stable,
beta, experimental, deprecated) as well as a message to be shown during
generation.

This is a step toward:

* Fleshing out the "Core" artifact (#845)
* Providing a place to encapsulate feature-oriented metadata (#840)
* Providing a means to communicate end of life scheduling (#116)

This new structure, specifically the Stability property, allows us to
offer future enhancements such as allowing users to filter down to only
"Stable" generators via CLI, and eventually any compat table (see #503).
jimschubert added a commit that referenced this issue May 5, 2019
* [feat] Intro GeneratorMetadata (stability index)

(commit message identical to the commit above)

* Mark deprecated generators as deprecated in-code

* Re-export docs/generators.md
@jmini
Member

jmini commented Aug 20, 2019

Testing is discussed a lot in this thread.


I totally agree with @jimschubert's vision:

By creating a good abstraction before template generation, we can easily write automation tests against that abstraction...

IMO those are unit tests. They are very valuable for understanding what is going on and whether, based on an OpenAPI input, we prepare the correct input for the templating engine. We can catch a lot at this level.


Nevertheless, I think that integration tests are also relevant.

If a client generator claims to support "Authentication", then we need to generate some code and run that code against a server where we verify that the authentication header was correctly sent by the client.
For this kind of test, I have proposed this: #689 (comment)
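A hedged sketch (Java, JUnit 5, using the JDK's built-in HttpServer and HttpClient) of the kind of integration test being described: start a local server, make a call that is supposed to carry the credential, and assert that the Authorization header actually arrived. In the real compatibility project the call would be made by the generated client configured with the credential; the hand-written HttpClient call below is only a placeholder.

import static org.junit.jupiter.api.Assertions.assertEquals;

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicReference;
import org.junit.jupiter.api.Test;

class ClientAuthenticationIT {

    @Test
    void clientSendsAuthorizationHeader() throws Exception {
        AtomicReference<String> receivedAuthHeader = new AtomicReference<>();

        // Tiny local server that records the Authorization header it receives.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/pets", exchange -> {
            receivedAuthHeader.set(exchange.getRequestHeaders().getFirst("Authorization"));
            exchange.sendResponseHeaders(204, -1);
            exchange.close();
        });
        server.start();
        try {
            String baseUrl = "http://localhost:" + server.getAddress().getPort();

            // Placeholder for the generated client call, e.g.:
            //   new GeneratedPetApi(baseUrl, "Bearer test-token").listPets();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/pets"))
                    .header("Authorization", "Bearer test-token")
                    .GET()
                    .build();
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());

            assertEquals("Bearer test-token", receivedAuthHeader.get());
        } finally {
            server.stop(0);
        }
    }
}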
