In the C++-built test_seg, using a matting model raises an error #3883


Closed
1 task done
xiefuwei390 opened this issue Jan 28, 2025 · 3 comments
Labels
question Further information is requested

Comments

@xiefuwei390

Search before asking

  • I have searched the issues and found no related answer.

Please ask your question

The log is below. After I generated the Windows project with CMake, running it produces the following error. How can I fix it?
D:\wks\padlepadle\PaddleSeg-release-2.10\deploy\cpp\build-win\Release>.\test_seg.exe --model_dir=pp-matting-hrnet_w18-human_512 --img_path=human.png --devices=GPU --save_dir=output
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0128 20:42:54.466872 47784 test_seg.cc:74] Use GPU
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_bn_fuse_pass]
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0128 20:42:58.785944 47784 fuse_pass_base.cc:57] --- detected 305 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
I0128 20:42:59.515856 47784 fuse_pass_base.cc:57] --- detected 28 subgraphs
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I0128 20:43:01.226245 47784 fuse_pass_base.cc:57] --- detected 167 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0128 20:43:01.262872 47784 ir_params_sync_among_devices_pass.cc:100] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0128 20:43:01.347045 47784 memory_optimize_pass.cc:216] Cluster name : relu_270.tmp_0 size: 14400
I0128 20:43:01.347728 47784 memory_optimize_pass.cc:216] Cluster name : relu_0.tmp_0 size: 256
I0128 20:43:01.347728 47784 memory_optimize_pass.cc:216] Cluster name : relu_251.tmp_0 size: 288
I0128 20:43:01.348111 47784 memory_optimize_pass.cc:216] Cluster name : pool2d_2.tmp_0 size: 14400
I0128 20:43:01.348189 47784 memory_optimize_pass.cc:216] Cluster name : relu_271.tmp_0 size: 2048
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_267.tmp_0 size: 576
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_274.tmp_0 size: 1024
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_211.tmp_0 size: 288
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_243.tmp_0 size: 144
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_235.tmp_0 size: 72
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : shape_0.tmp_0 size: 16
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : img size: 12
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : shape_16.tmp_0_slice_0 size: 8
--- Running analysis [ir_graph_to_program_pass]
I0128 20:43:01.677953 47784 analysis_predictor.cc:1035] ======= optimize end =======
I0128 20:43:01.678951 47784 naive_executor.cc:102] --- skip [feed], feed -> img
I0128 20:43:01.692044 47784 naive_executor.cc:102] --- skip [tmp_75], fetch -> fetch
W0128 20:43:01.693042 47784 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.3, Runtime API Version: 11.6
W0128 20:43:01.697043 47784 gpu_resources.cc:91] device: 0, cuDNN Version: 8.5.


C++ Traceback (most recent call last):

Not support stack backtrace yet.


Error Message Summary:

InvalidArgumentError: The type of data we are trying to retrieve does not match the type of data currently contained in the container. (at ..\paddle\phi\core\dense_tensor.cc:148)

@xiefuwei390 xiefuwei390 added the question Further information is requested label Jan 28, 2025
@xiefuwei390
Author

After I changed the output in the code to a vector of floats, std::vector<float> out_data(out_num), the error is gone. But when I use the model above for matting, the output image is wrong. Am I using it incorrectly? Switching to the ppmattingv2-stdc1-human_512 model gives wrong results as well.

@cuicheng01
Collaborator

Which Paddle version are you using? If convenient, could you try downgrading to 2.5?

@TingquanGao
Collaborator

This issue has had no response for a long time and will be closed. You can reopen it or open a new issue if you are still confused.


From Bot
