
Perform clang-tidy on both cpp and cuda source. #4034

Merged: 10 commits merged into dmlc:master on Feb 5, 2019

Conversation

trivialfis
Member

  • Add `GENERATE_COMPILATION_DATABASE` to CMake (see the sketch below).
  • Rearrange CMakeLists.txt.
  • Add basic Python clang-tidy script.

Proposed in #4032 .

@hcho3 Could you please help automate this on Jenkins after sorting out other priorities? I added some simple documentation and a Python script for the new linting, but I'm not familiar with what's happening on the Jenkins or Docker side. Thanks! :)

Dependencies:

  • clang-tidy >= 7
  • cmake >= 3.8
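
For reference, here is a minimal sketch (not necessarily the exact wiring in this PR) of how a `GENERATE_COMPILATION_DATABASE` switch can be layered on top of CMake's built-in `CMAKE_EXPORT_COMPILE_COMMANDS`:

```cmake
# Opt-in switch for emitting compile_commands.json, which clang-tidy consumes.
option(GENERATE_COMPILATION_DATABASE
  "Export a compilation database for clang-tidy." OFF)

if(GENERATE_COMPILATION_DATABASE)
  # Only the Makefile and Ninja generators honour this variable.
  set(CMAKE_EXPORT_COMPILE_COMMANDS ON CACHE BOOL "" FORCE)
endif()
```

clang-tidy can then be pointed at the build directory with `-p <build-dir>` so it picks up the exported compile flags, including the CUDA-specific ones.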

@trivialfis
Member Author

I removed modernize-use-auto from the clang-tidy checks. "Use auto everywhere" is based on the assumption that we have a perfect IDE that can display correct hints about variable types, and that assumption doesn't hold for CUDA programming. :) If others can argue that the merits outweigh the pitfalls, I will add it back.

@codecov-io

Codecov Report

Merging #4034 into master will increase coverage by <.01%.
The diff coverage is n/a.


@@            Coverage Diff            @@
##           master   #4034      +/-   ##
=========================================
+ Coverage   60.79%   60.8%   +<.01%     
=========================================
  Files         130     130              
  Lines       11699   11699              
=========================================
+ Hits         7112    7113       +1     
+ Misses       4587    4586       -1
Impacted Files Coverage Δ
src/objective/multiclass_obj.cu 93.68% <0%> (+1.05%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f368d0d...dfae430.

@trivialfis
Member Author

trivialfis commented Jan 4, 2019

Actually this might be doable just on Travis. Let me take a closer look. :)

@hcho3
Collaborator

hcho3 commented Jan 4, 2019

@trivialfis In the longer term, I'd like to migrate some CPU tests to Jenkins, to allow for greater customization (such as higher memory limit).

@trivialfis
Member Author

@hcho3 Actually, oddly enough, I like the memory limitation. Those errors keep reminding me that I need to optimize memory usage, since all the tests use relatively small datasets.

@hcho3
Collaborator

hcho3 commented Jan 4, 2019

@trivialfis We still need to deal with intermittently failing Travis tests though.

@trivialfis
Member Author

@hcho3 Definitely. I will spend some time on those flaky tests, preferably after the next release. Sadly, unlike you guys, I'm still working on my bachelor's degree, so my time is quite limited. :(

@trivialfis
Member Author

@hcho3 Nope. At the very least, CUDA headers are required, so it seems Jenkins is the right way to go. Could you please help draft the Jenkins script after sorting out other priorities?

@trivialfis
Member Author

@hcho3 I have a branch with a completely rewritten CMake setup: https://github.com/trivialfis/xgboost/tree/cmake

I've run into some difficulties making a PR, including:

dmlc/dmlc-core#495
dmlc/rabit#77

and, after refining the linking, NCCL all-reduce hangs...

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

@trivialfis Can you post an RFC in the issue tracker to explain the rationale for upgrading the CMake? I think many people will appreciate the benefits of Modern CMake. In the meantime, I'll be looking into clang-tidy to unblock #4086.

@trivialfis
Member Author

I can try to finish the first one. For the second one (rabit) I might need some help setting up a testing farm on Windows.
For the third, using group calls from NCCL instead of multiple threads can resolve the problem, but with a slight performance cost. I believe the bug comes from NCCL rather than XGBoost.

explain the rationale for upgrading the CMake

I didn't upgrade CMake; it is still 3.2 for building XGBoost. You only need a higher version of CMake for the GPU build (3.5) and for linting (3.8).
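
To make those version requirements concrete, here is a hedged sketch of how such guards could look; it is illustrative only, and `USE_CUDA` is an assumed option name rather than a quote from the actual CMakeLists.txt:

```cmake
cmake_minimum_required(VERSION 3.2)

# The GPU build needs a newer CMake than the plain CPU build.
if(USE_CUDA AND CMAKE_VERSION VERSION_LESS 3.5)
  message(FATAL_ERROR "The GPU build requires CMake 3.5 or newer.")
endif()

# The clang-tidy workflow relies on a compilation database that also covers
# CUDA sources, which needs CMake 3.8 or newer.
if(GENERATE_COMPILATION_DATABASE AND CMAKE_VERSION VERSION_LESS 3.8)
  message(FATAL_ERROR "clang-tidy linting requires CMake 3.8 or newer.")
endif()
```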

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

I didn't upgrade CMake, it's still on 3.2 for building XGBoost.

I meant to say upgrading the style of the CMake script. Sorry for the confusion.

@trivialfis
Member Author

Well, there are countless posts on the internet about Modern CMake, like this one:
https://cliutils.gitlab.io/modern-cmake/
Combined with its "other sources" section, it explains everything thoroughly; I doubt I can do a better job. Wouldn't it be a better choice to cite one of these posts/tutorials instead of writing a formal RFC from the ground up?

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

@trivialfis It doesn't have to be long. If your primary motivation is to follow Modern CMake conventions, then you can just cite some tutorials. Do you have other things you want to achieve with your re-organization, other than Modern CMake compliance?

@trivialfis
Member Author

trivialfis commented Jan 30, 2019

other than Modern CMake compliance

One: make the C APIs usable.

Update:
The new CMake scripts add proper install targets so that other CMake projects can find XGBoost. Beyond that, the new scripts also export the right compiler/linker flags for linking against XGBoost. This improves the C APIs.
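
As an illustration of what "proper install targets" usually means in Modern CMake, here is a minimal sketch; the target name, namespace, source variable, and destinations are assumptions for the example, not necessarily what the rewritten scripts use:

```cmake
add_library(xgboost SHARED ${XGBOOST_SOURCES})
target_include_directories(xgboost PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
  $<INSTALL_INTERFACE:include>)

# Install the library and export the target so that downstream CMake projects
# can import it with its compile/link usage requirements intact.
install(TARGETS xgboost EXPORT XGBoostTargets
  LIBRARY DESTINATION lib
  ARCHIVE DESTINATION lib
  RUNTIME DESTINATION bin
  INCLUDES DESTINATION include)
install(EXPORT XGBoostTargets
  NAMESPACE xgboost::
  DESTINATION lib/cmake/xgboost)
```

Together with a generated package config file, a consumer can then call `find_package(xgboost REQUIRED)` and `target_link_libraries(app PRIVATE xgboost::xgboost)` without hand-maintaining include paths or linker flags, which is what makes the C API easier to consume.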

@trivialfis
Member Author

Well, one thing I forgot: deprecate the Makefiles.

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

@trivialfis Got it. Let's just create an issue titled "Roadmap: re-organize CMake build." (We won't call it an RFC.) You can list the reasons for reorganizing the CMake build. For the first reason (Modern CMake), just cite a few links; for the other two (C API, Makefile deprecation), you should give a brief explanation. How does that sound?

@trivialfis
Member Author

@hcho3 Cool, I will create that issue within the next couple of days. :)

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

@trivialfis Yeah, thanks for the effort. Modern CMake really does seem to be a thing. I'd adopt it just for the target-based logic.

@trivialfis
Member Author

trivialfis commented Jan 30, 2019

@hcho3 Well, it comes at a cost. Since CMake 3.8, CUDA is a formally supported language, so the "Modern CMake" approach can be applied to CUDA too. But many Linux distributions don't ship CMake 3.8, and the currently used CUDA wrapper is anything but modern. So, for compatibility, I ended up with a "half modern" build system, which gave me a lot of headaches.
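
For context, a rough sketch of the two styles, assuming a hypothetical `gpu_objs` target and a `CUDA_SOURCES` variable; this is not code from the branch:

```cmake
# Legacy wrapper, available on CMake < 3.8: the FindCUDA module drives nvcc
# through cuda_add_library() and its own set of variables.
# find_package(CUDA REQUIRED)
# cuda_add_library(gpu_objs ${CUDA_SOURCES})

# First-class CUDA support, CMake >= 3.8: .cu files are compiled by a real
# language, so target-based ("Modern CMake") commands apply to them too.
cmake_minimum_required(VERSION 3.8)
project(xgboost LANGUAGES CXX CUDA)

add_library(gpu_objs OBJECT ${CUDA_SOURCES})
# Usage requirements propagate across languages just like for C++ targets.
target_include_directories(gpu_objs PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/include)
set_target_properties(gpu_objs PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
```

Supporting both styles at once is what produces the "half modern" compromise described above.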

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

Okay, I managed to run the full clang-tidy for both C++ and CUDA sources. I've made a few minor changes to get clang-tidy working on Jenkins. I'll upload the changes soon.

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

Will upgrade the worker images in Jenkins soon, since we need CMake 3.8.

@hcho3
Collaborator

hcho3 commented Jan 30, 2019

Clang-tidy will be added as one of the parallel test jobs.

[screenshot: Jenkins parallel test jobs, 2019-01-29]

hcho3 changed the title from "Perform clang-tidy on both cpp and cuda source." to "[WIP] Perform clang-tidy on both cpp and cuda source." on Jan 30, 2019
@trivialfis
Member Author

Thanks

@trivialfis
Member Author

@hcho3 Sorry, but can we postpone the CMake refactoring? I think there are many other priorities at the moment.

@hcho3
Collaborator

hcho3 commented Jan 31, 2019

@trivialfis Sure.

hcho3 changed the title from "[WIP] Perform clang-tidy on both cpp and cuda source." to "Perform clang-tidy on both cpp and cuda source." on Feb 3, 2019
@hcho3
Collaborator

hcho3 commented Feb 3, 2019

@trivialfis Clang-tidy error messages should now be visible. Let's merge this after the tests pass.

@hcho3
Collaborator

hcho3 commented Feb 3, 2019

@trivialfis
Member Author

@hcho3 Could you lower the CUDA version used for clang-tidy to CUDA 9? CUDA 10 is not supported by clang-tidy-7.

@hcho3
Collaborator

hcho3 commented Feb 5, 2019

@trivialfis Done. I created a Docker container with CUDA 9.2 and Clang-Tidy 7. Let me know if this is satisfactory.

@trivialfis
Member Author

@hcho3 Thanks for all the hard work. This is really helpful. I will try to correct the remaining errors in later PRs.

trivialfis merged commit 8905df4 into dmlc:master on Feb 5, 2019
@hcho3
Collaborator

hcho3 commented Feb 5, 2019

@trivialfis I'll go ahead and review #3825 within a day or two, so that you don't get blocked on the later refactoring. Thanks for your effort towards a consistent coding style for the GPU code.

trivialfis mentioned this pull request on Feb 7, 2019
trivialfis deleted the clang-tidy branch February 9, 2019 21:45
trivialfis restored the clang-tidy branch February 9, 2019 21:45
trivialfis deleted the clang-tidy branch February 9, 2019 21:45
lock bot locked as resolved and limited conversation to collaborators on May 10, 2019