I have searched the issues and found no related answer.

Please ask your question

The log is below. After generating the Windows project with CMake and running the executable, I get the following error. How can I fix it?
D:\wks\padlepadle\PaddleSeg-release-2.10\deploy\cpp\build-win\Release>.\test_seg.exe --model_dir=pp-matting-hrnet_w18-human_512 --img_path=human.png --devices=GPU --save_dir=output
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0128 20:42:54.466872 47784 test_seg.cc:74] Use GPU
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_bn_fuse_pass]
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0128 20:42:58.785944 47784 fuse_pass_base.cc:57] --- detected 305 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
I0128 20:42:59.515856 47784 fuse_pass_base.cc:57] --- detected 28 subgraphs
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I0128 20:43:01.226245 47784 fuse_pass_base.cc:57] --- detected 167 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0128 20:43:01.262872 47784 ir_params_sync_among_devices_pass.cc:100] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0128 20:43:01.347045 47784 memory_optimize_pass.cc:216] Cluster name : relu_270.tmp_0 size: 14400
I0128 20:43:01.347728 47784 memory_optimize_pass.cc:216] Cluster name : relu_0.tmp_0 size: 256
I0128 20:43:01.347728 47784 memory_optimize_pass.cc:216] Cluster name : relu_251.tmp_0 size: 288
I0128 20:43:01.348111 47784 memory_optimize_pass.cc:216] Cluster name : pool2d_2.tmp_0 size: 14400
I0128 20:43:01.348189 47784 memory_optimize_pass.cc:216] Cluster name : relu_271.tmp_0 size: 2048
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_267.tmp_0 size: 576
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_274.tmp_0 size: 1024
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_211.tmp_0 size: 288
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_243.tmp_0 size: 144
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : relu_235.tmp_0 size: 72
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : shape_0.tmp_0 size: 16
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : img size: 12
I0128 20:43:01.348366 47784 memory_optimize_pass.cc:216] Cluster name : shape_16.tmp_0_slice_0 size: 8
--- Running analysis [ir_graph_to_program_pass]
I0128 20:43:01.677953 47784 analysis_predictor.cc:1035] ======= optimize end =======
I0128 20:43:01.678951 47784 naive_executor.cc:102] --- skip [feed], feed -> img
I0128 20:43:01.692044 47784 naive_executor.cc:102] --- skip [tmp_75], fetch -> fetch
W0128 20:43:01.693042 47784 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.3, Runtime API Version: 11.6
W0128 20:43:01.697043 47784 gpu_resources.cc:91] device: 0, cuDNN Version: 8.5.
C++ Traceback (most recent call last):
Not support stack backtrace yet.
Error Message Summary:
InvalidArgumentError: The type of data we are trying to retrieve does not match the type of data currently contained in the container. (at ..\paddle\phi\core\dense_tensor.cc:148)