
[Build] Refactor OPE code to adhere to stricter build warnings #443

Open · wants to merge 2 commits into master

Conversation

VivekPanyam (Collaborator)

Summary:

Refactor the code under source/neuropod/multiprocess to adhere to the stricter set of warnings introduced in #437.

Test Plan:

CI

codecov bot commented Sep 22, 2020

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 88.23%. Comparing base (73f3a31) to head (63939bd).
Report is 76 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master     #443   +/-   ##
=======================================
  Coverage   88.23%   88.23%           
=======================================
  Files         101      102    +1     
  Lines        6356     6356           
=======================================
  Hits         5608     5608           
  Misses        748      748           


@@ -76,17 +76,18 @@ pid_t start_worker_process(const std::string &control_queue_name, std::vector<st
// Setup the environment

// Null terminated char * array
-char *env_arr[env.size() + 1];
+std::vector<char *> env_arr(env.size() + 1);
Contributor commented:

In the case of a vector, all elements are already initialized to 0 (nullptr) because the constructor value-initializes every char* element, so the next line is redundant (it was necessary for the plain array, though).

: TypedNeuropodTensor<std::string>(copy_and_strip_last_dim(dims)), write_buffer_(this->get_num_elements())
{
auto byte_block = stdx::make_unique<SHMNeuropodTensor<uint8_t>>(dims, std::move(block), data, block_id);

auto base_ptr = byte_block->get_raw_data_ptr();
-auto max_len = dims[dims.size() - 1];
+auto max_len = static_cast<size_t>(dims[dims.size() - 1]);
Contributor commented:

I think this cast is not needed, and as a result you would not need to cast it again at line 229.

@@ -27,16 +27,16 @@ namespace neuropod
// This is useful for serialization and to wrap and/or copy tensors between backends.
// For example, if you had a TorchNeuropodTensor and you wanted to get a tensor compatible
// with `allocator` without making a copy, you could use this function
-std::shared_ptr<NeuropodTensor> wrap_existing_tensor(NeuropodTensorAllocator & allocator,
-                                                     std::shared_ptr<NeuropodTensor> tensor)
+static std::shared_ptr<NeuropodTensor> wrap_existing_tensor(NeuropodTensorAllocator & allocator,
@vkuzmin-uber (Contributor) commented Sep 22, 2020:

Interesting: why is static necessary here? Did the compiler require it to be static?

Some thoughts: in C++, anonymous namespaces were the recommended approach, and "static" was even deprecated for namespace-scope variables and functions. It was un-deprecated in C++11.

This discussion clarifies the details: https://stackoverflow.com/questions/154469/unnamed-anonymous-namespaces-vs-static-functions

Anyway, please share details on this.

@vkuzmin-uber (Contributor) left a review:

Looks good. Please check the comment and question I left.

Base automatically changed from backends_cc_warnings to master October 1, 2020 03:59
3 participants