
Hello, libtorch 1.4 doesn't seem to work #43

Open

bravestpeng opened this issue Apr 19, 2020 · 5 comments

@bravestpeng commented Apr 19, 2020

  1. Line 142: `conv_options.with_bias(with_bias);` no longer compiles — there is only `conv_options.bias(with_bias)`, `with_bias` does not exist anymore.
  2. The options

     at::TensorOptions options = torch::TensorOptions()
         .dtype(torch::kFloat32)
         .is_variable(true);

     no longer compile either, since `is_variable` has been removed. I changed this to `at::TensorOptions options = torch::TensorOptions().dtype(torch::kFloat32);` and dropped the trailing `.is_variable(true)`. After these changes the project compiles, but running on GPU fails with an error in the yolo layer, while it runs fine on CPU.
@horsetif commented Apr 20, 2020

It is a version problem. Follow https://pytorch.org/cppdocs/api/library_root.html. I changed some code in Darknet.cpp.
Line 133:

conv_options.bias(with_bias); // with_bias

Line 143:

bn_options.track_running_stats(true);  //stateful

Line 523:

at::TensorOptions options= torch::TensorOptions().dtype(torch::kFloat32);

because in /usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h, it says:

C10_DEPRECATED_MESSAGE("Tensor.is_variable() is deprecated; everything is a variable now. (If you want to assert that variable has been appropriately handled already, use at::impl::variable_excluded_from_dispatch())")
bool is_variable() const noexcept {
return !at::impl::variable_excluded_from_dispatch();
}

Then we can run the example without errors and get the same results.
Further, change all BatchNormXXX to BatchNorm2dXXX, so we can run without warnings:

BatchNormOptions -> BatchNorm2dOptions
BatchNorm -> BatchNorm2d
BatchNormImpl -> BatchNorm2dImpl

@bravestpeng (Author) commented Apr 21, 2020

Thank you for your help. On libtorch 1.4 GPU (torch::cuda::is_available() returns true), I cannot run:

{
    torch::Tensor tensor0 = torch::ones({169, 2, 3}).to(torch::kCUDA);
    torch::Tensor tensor1 = torch::ones({169, 2, 3}).to(torch::kCUDA);
    torch::Tensor tensor2 = torch::cat({tensor0, tensor1}, 0);
}

but I can run:

{
    torch::Tensor tensor0 = torch::ones({169, 2, 3}).to(torch::kCPU);
    torch::Tensor tensor1 = torch::ones({169, 2, 3}).to(torch::kCPU);
    torch::Tensor tensor2 = torch::cat({tensor0, tensor1}, 0);
}

So torch::cat behaves differently on GPU (CUDA) and CPU. Can you help me?

@horsetif commented Apr 22, 2020

My version of libtorch is 1.4.0 with CUDA 10.0, and I don't have this problem. Perhaps you can update PyTorch and CUDA.

@bravestpeng (Author) commented
Thanks. I think one more place in this code should be changed: darknet.cpp, line 360,

if (layer_type == "net")
    continue;

should be changed to:
if (layer_type == "net")
{
    prev_filters = get_int_from_cfg(block, "channels", 0);
    continue;
}

@mheriyanto commented
Wow awesome. Thank you @horsetif .

jkschin added a commit to jkschin/libtorch-yolov3-deepsort that referenced this issue May 31, 2020
Couple of things to take note:
1. It's unclear what the original version was built against. I tried
1.0.0, 1.0.1, and a few other versions. The final decision was to try
and use the latest version and fix the API breaks.
2. walktree/libtorch-yolov3#43 was very
helpful in fixing API breaks. I mostly followed the instructions there,
like changing BatchNorm to BatchNorm2d.
3. In fixing API breaks, there was one part on stateful BatchNorm. That
parameter seems to have been removed. I did not look deeper into that.
4. https://github.com/walktree/libtorch-yolov3/pull/32/files was also
useful in fixing API breaks.
5. Don't forget that you have to run from the main directory. In other
words, use ./build/bin/processing video.avi to run.
6. Don't forget to set CMAKE_PREFIX_PATH=/home/$USER/cpplibs/libtorch-1.5.0-gpu,
or the path where libtorch can be found.
7. cmake ../ to generate the Makefile.
8. make -j8 to speed up the make process.

Still figuring out the weights file - will be in next commit.