[gradle-plugin] Initial implementation #162
Conversation
I am not a Gradle user myself yet (we may like to switch sometime), but I will try to give your plugin a try. From a code point of view I do not understand why you have separated this into a
In the future, we might move our complete build system to Gradle, then the split
Will give this a try in our project in the next few days!
@jmini `modules` is a concept whereby all projects listed as modules are built by a single build system. This isn't buildable by Maven as far as I know. Moving it isn't out of the question; I just wanted there to be clear separation between the two build steps while I gathered feedback. If anyone knows how to trigger a Gradle build from Maven, please let me know. I've only done a cursory search, but found nothing too promising. Specifically, an issue I've found is that we wouldn't have a way to refer to the artifacts built by Maven. I also didn't find a way to refer to the current version (3.0.0-SNAPSHOT) in a shared way.
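For what it's worth, one partial workaround for the shared-version problem might look like the following (a sketch only; the property name and the idea of reading the version from a `gradle.properties` entry are illustrative assumptions, not something this PR implements):

```groovy
// gradle.properties (hypothetical):
//   openApiGeneratorVersion=3.0.0-SNAPSHOT

repositories {
    mavenLocal()  // picks up artifacts installed by `mvn install`
}

dependencies {
    // the Maven-built core artifact, resolved from the local repository,
    // with the version read from the shared gradle.properties entry
    compile "org.openapitools:openapi-generator:${openApiGeneratorVersion}"
}
```

This only papers over the problem, though: it still relies on `mvn install` having run first, so it doesn't make the two builds truly aware of each other.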
I guess one way is to fall back to the
or something similar. I have not tested it completely, but I think that you could create a pom.xml with packaging
Another idea is to move the complete build of our 4 modules to Gradle... but we will need to ensure that building the Maven plugin still works.
@jmini another option I considered is to move the gradle plugin to another repo. I don't see a reason to roll major/minor/revision versions of the generator for changes to the plugin, or vice versa.
I had the discussion with @wing328 about moving the maven-plugin to another repository. A separate repo makes sense if you have a different release cycle. At
In my opinion the project does not have this kind of maturity yet. We have some of this in place, but we are not at the state-of-the-art level. If you use the policy of having only released versions (no snapshots) as dependencies in the dependent modules, this means doing more frequent releases of the
@jimschubert I know that you have started to work on this; you have shared some architecture principles in the past. Having everything in one repo also makes sense and brings some benefits.
In our project there is no real logic in the "cli", "online-generator", "maven plugin" or "gradle plugin". Each of these dependent projects just contains some glue to call the "core" engine. Take the
With multiple repos, the difficulty is keeping everything in sync. You start to have people looking only at core + maven plugin, or at core + cli. And it is much more difficult to see in one PR (and one CI job) that a change in core breaks something in "maven-plugin", for example. Why split only the gradle plugin? If you follow this idea, you can split "core", "maven-plugin", "gradle-plugin" and "cli"... Splitting repos requires more release engineering resources. I am not convinced that a project of this size needs it.
I don't feel strongly one way or the other about it. My concern is with the build times and level of complexity of builds as we add more tools to the portfolio. My current machine builds everything with tests in about 20 seconds. My old machine, however, took nearly 10 minutes per build. I'm sure that not all contributors have high-end machines, so we need to be aware of this. As some examples of my concerns: can the usage of the gradle exec plugin cause Maven repository conflicts if Maven were configured for parallel builds? Is build flakiness worth having everything in a single repository? How do we verify that the whole process of
occurs in sequence, and that Gradle plugin artifacts don't compile against old/cached and potentially buggy snapshot versions? This scenario is complicated in a multi-build-system repository.

This problem isn't new to the introduction of the Gradle plugin, either. We have some rudimentary integration testing in place, which runs as part of the test phase. However, for these tests to run correctly, we have in the past had to install locally without tests, then run mvn install with tests. Without following this process, the integration tests would use a previously cached SNAPSHOT artifact rather than that of the current build.

But, like I said, I don't feel strongly one way or the other. I just think the discussion needs to be pragmatic and represent the effect on the community. You've referenced Google and OpenJDK as examples of single-repo best practices. Google's monorepo is all internal code, and I've read that it takes hours to build, and that build failures result in immediate reverts, which sometimes leads to individual small changes taking a day or more to be released. This works for them because of their amount of testing and integration testing. But there's a reason why their open source code isn't under a single repository, and I would argue that it would be a barrier to entry for contributors.

As a personal example... last year, I contributed support for 128-bit trace ids to Twitter's Finagle (a mono repo for the finagle stack). Even with all code in a single repository, and knowing the source of individual artifacts pretty well, the sheer amount of code in the repo was overwhelming. My code was accepted and integrated. But having a mono repo didn't solve the problem that I had missed TraceId propagation to downstream http clients, and it didn't solve the problem that multiple people at Twitter approved the PR and missed this as well.
The example of the
Anyway, my comment was just a suggestion of an alternate approach, and not a comment about a personal preference.
I'd be happy to discuss code organization more in depth. If you'd like, can we open another ticket for that discussion, and focus discussion in this PR on the gradle plugin implementation?
When a user sets the models, apis, or supportingFiles environment variables, setting any one of these disables generation for the other two. This could be confusing to users, so I've added some clarification text in the comments for these properties. In addition, I've cleaned up the `Property.ifNotEmpty` extension to avoid using `Suppress` annotations where they're not necessary. The change creates a local variable of type `T?`, allowing Kotlin to track the variable's nullable state at compile time.
I've made a few small changes in 3621a5d. It's somewhat confusing how the generator handles system properties defining a CSV of supporting files, apis, and models to generate: defining any one of these completely disables generation of the other two. The comments should help. I'll work on adding task options to the doc next, followed by Maven integration.
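To illustrate the behavior described above, here is a sketch (the `openApiGenerate` extension name and `systemProperties` map are taken from the plugin's README; the generator name, spec path, and model names are hypothetical):

```groovy
openApiGenerate {
    generatorName = "kotlin"
    inputSpec = file("specs/petstore.yaml").toString()   // hypothetical path
    outputDir = file("$buildDir/generated").toString()

    // Defining any one of models/apis/supportingFiles disables generation
    // of the other two. An empty value means "generate all of this kind";
    // a CSV value restricts generation to the named items.
    systemProperties = [
        models: "Pet,Order"   // only these two models; no apis, no supporting files
    ]
}
```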
I have hooked this plugin into
I'll need to read the docs for publishing and see if there's something that would need to be updated here, but I think this is ready and can make it into the 3.0.0 release. There is a manual testing project under
Hi @jimschubert, how do you configure tasks for multiple generations?
It then created tasks
swagger-codegen-cli doesn't have a Gradle plugin, so I'm not sure what the DSL of that plugin would have done. It's odd to me that the extension is called

openapi-generator's gradle plugin offers a declarative DSL via extensions (these are Gradle project extensions). The extensions are documented in the plugin's README.md, and map almost fully 1:1 with the options you'd pass to the CLI or Maven plugin. The plugin maps the extensions to a task of the same name to provide a clean API. If you're interested in the extension/task mapping concept at a high level, you can check out Gradle's docs.

If you want to perform multiple generations, you'd want to create tasks that inherit from the

To match the example you are using for that old swagger-based plugin:

```groovy
task buildSpec1(type: org.openapitools.generator.gradle.plugin.tasks.GenerateTask) {
    generatorName = "spring"
    inputSpec = file(pathToMySpec1).toString()
    additionalProperties = [
        packageName: "petstore"
    ]
    outputDir = file(Spec1GenerationTargetPath).toString()
    configOptions = [
        dateLibrary: "threetenbp"
    ]
}

task buildSpec2(type: org.openapitools.generator.gradle.plugin.tasks.GenerateTask) {
    generatorName = "spring"
    inputSpec = file(pathToMySpec2).toString()
    additionalProperties = [
        models: "true",
        modelDocs: "true",
        apis: "false",
        apiDocs: "false",
        apiTests: "false"
    ]
    outputDir = file(Spec2GenerationTargetPath).toString()
}
```

To execute your specs, you'd then do

If you want to simplify the execution, you could create a task with

```groovy
task codegen(dependsOn: ['buildSpec1', 'buildSpec2'])
```

Or, if you're generating the code on compile, maybe this:

```groovy
compileJava.dependsOn buildSpec1, buildSpec2
```

If you have a single spec, and you want to add it as a dependency to compileJava, you need to gain a reference to the task. One way to do this is:

```groovy
compileJava.dependsOn tasks.openApiGenerate
```
Hi, thank you for the thorough explanation.
=> See #847
PR checklist
- Run the shell script(s) under `./bin/` to update the Petstore sample so that CIs can verify the change. (For instance, you only need to run `./bin/{LANG}-petstore.sh` and `./bin/security/{LANG}-petstore.sh` if updating the {LANG} (e.g. php, ruby, python, etc) code generator or {LANG} client's mustache templates. Windows batch files can be found in `.\bin\windows\`.)
- File the PR against the `master` branch.

Description of the PR
See #141
This is an initial implementation of a gradle-plugin for OpenAPI Generator. I've opened this PR to solicit feedback. Please see the initial commit of README.adoc for details and examples.
To test this out, you'll first need to build the generator:
Then, you'll want to build and publish the gradle plugin locally (update `inputSpec` references to your own locally available specs). From here, you can create a simple `build.gradle` and execute tasks. Details around how to configure this to build and be published via CI will be worked out after getting feedback on the plugin.
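As a rough sketch of what such a `build.gradle` might look like (the buildscript coordinates, version, plugin id, and spec path here are assumptions based on a locally published snapshot, not authoritative):

```groovy
buildscript {
    repositories {
        mavenLocal()      // assumes the plugin was published to the local Maven repo
        mavenCentral()
    }
    dependencies {
        // coordinates/version are illustrative; use whatever you published locally
        classpath "org.openapitools:openapi-generator-gradle-plugin:3.0.0-SNAPSHOT"
    }
}

apply plugin: 'org.openapi.generator'

openApiGenerate {
    generatorName = "kotlin"
    inputSpec = file("petstore-v3.0.yaml").toString()   // point at your own spec
    outputDir = file("$buildDir/generated").toString()
}
```

With that in place, running the `openApiGenerate` task should write the generated client under `build/generated`.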