简体中文 | English
## Detection Algorithm

Go to the [Model Repository](https://modelscope.cn/models/nndeploy/nndeploy/summary) to pull the model files. The models are located in the nndeploy/detect directory.
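If you prefer the command line, below is a minimal sketch of two common ways to pull the files from ModelScope. The clone URL and the modelscope CLI flags are assumptions based on the repository's ModelScope page and the CLI's documented options, so verify them against `modelscope download --help` before use:

```bash
# Option 1: clone the whole model repository with git-lfs (downloads every model)
git lfs install
git clone https://www.modelscope.cn/nndeploy/nndeploy.git

# Option 2: fetch only the detection models with the modelscope CLI
pip install modelscope
modelscope download --model nndeploy/nndeploy --include 'detect/*' --local_dir ./nndeploy-models
```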
Download the detect/yolo.json workflow file. After uploading it through the front-end workflow upload function, you can view the workflow.
Parameter configuration mainly covers the input/output node file paths, the parameters of the pre- and post-processing nodes, and the parameters of the inference node. Most parameters are already set in the workflow file; users only need to modify the following:

- `path_`: Path of the input image
- `path_`: Path of the output image
- `param_/model_type_`: Type of the model file
- `param_/model_value_`: Path of the model file
- `param_/device_type_`: Device type
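Before editing, it can help to see which of these fields the downloaded workflow actually contains. The snippet below is only an illustrative sketch assuming jq is installed; the exact nesting of the fields is defined by the workflow file itself, and the same approach works for the other workflows described later:

```bash
# Print every model file path referenced anywhere in the workflow JSON
jq '.. | .model_value_? // empty' detect/yolo.json

# Likewise for the input/output paths and the device types
jq '.. | .path_? // empty' detect/yolo.json
jq '.. | .device_type_? // empty' detect/yolo.json
```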
## Tracking Algorithm

Go to the Model Repository to pull the model files. The models are located in the nndeploy/track directory.
Download the track/track.json workflow file. After uploading it through the front-end workflow upload function, you can view the workflow.
Parameter configuration mainly covers the input/output node file paths, the parameters of the pre- and post-processing nodes, and the parameters of the inference node. Most parameters are already set in the workflow files (yolov5n/s/m/l); users only need to modify the following:

- `path_`: Path of the input video
- `path_`: Path of the output video
- `param_/model_type_`: Type of the model file
- `param_/model_value_`: Path of the model file
- `param_/device_type_`: Device type
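Before running the tracking workflow, it can be worth confirming that the input video path is readable. A minimal sketch using ffprobe (assumes FFmpeg is installed; the file name is a placeholder for your own path):

```bash
# Prints the container duration and stream count; a non-zero exit code means the path is wrong or unreadable
ffprobe -v error -show_entries format=duration,nb_streams \
    -of default=noprint_wrappers=1 /path/to/input_video.mp4
```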
## Segmentation Algorithm

Go to the Model Repository to pull the model files. The models are located in the nndeploy/segment directory.
Download the segment/rmbg.json workflow file. After uploading it through the front-end workflow upload function, you can view the workflow.
Parameter configuration mainly covers the input/output node file paths, the parameters of the pre- and post-processing nodes, and the parameters of the inference node. Most parameters are already set in the workflow file; users only need to modify the following:

- `path_`: Path of the input image
- `path_`: Path of the output image
- `param_/model_type_`: Type of the model file
- `param_/model_value_`: Path of the model file
- `param_/device_type_`: Device type
## Stable Diffusion

Go to the Model Repository to pull the model files. The models are located in the nndeploy/stable_diffusion directory.
Download the stable_diffusion/stable_diffusion_1.5.json workflow file. After uploading it through the front-end workflow upload function, you can view the workflow.
Most parameters are already set in the workflow file; users only need to modify the following:

- `prompt_`: Positive prompt (token length limit is 77)
- `prompt_`: Negative prompt (token length limit is 77)
- `param_/json_blob_`: Path of the tokenizer.json file
- `param_/json_blob_`: Path of the tokenizer.json file
- `param_/model_type_`: Type of the model file
- `param_/model_value_`: Path of the CLIP model file
- `param_/device_type_`: Device type
- `param_/model_type_`: Type of the model file
- `param_/model_value_`: Path of the CLIP model file
- `param_/device_type_`: Device type
- `param_/model_type_`: Type of the model file
- `param_/model_value_`: Path of the UNet model file
- `param_/external_model_data_`: Path of the separate weight file for the FP32 model; no need to set for the FP16 format
- `param_/device_type_`: Device type (for FP16, set the type to kDeviceTypeCodeCuda:0)
- `param_/model_type_`: Type of the model file
- `param_/model_value_`: Path of the VAE decoder model file
- `param_/device_type_`: Device type
- `path_`: Path of the output image
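A quick way to tell whether `param_/external_model_data_` needs to be set is to look at the downloaded UNet files: an FP32 export typically exceeds the 2 GB ONNX limit and so ships a separate weight file next to the .onnx graph, and that separate file is what this parameter should point to. The directory and pattern below are placeholders, not the repository's actual layout:

```bash
# If a large companion weight file (e.g. something like *.onnx.data) sits next to the UNet .onnx graph,
# point param_/external_model_data_ at it; if there is no such file (typical for FP16), leave it unset.
ls -lh /path/to/nndeploy/stable_diffusion/ | grep -Ei 'unet|onnx'
```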
## Qwen (Large Language Model)

Go to the Model Repository to pull the model files. The models are located in the nndeploy/qwen directory.
Download the qwen/qwen-0.5B.json workflow file. After uploading it through the front-end workflow upload function, you can view the workflow.
Most parameters are already set in the workflow file; users only need to modify the following:

- `user_content_`: Prompt
- `config_path_`: Path of the configuration file
- `config_path_`: Path of the llm_config.json configuration file
- `external_model_data_`: When the ONNX model file exceeds 2 GB, the weights and the model structure are stored separately (for example, llm.onnx and llm.onnx.data); this parameter is the path of the weight file
- `external_model_data_`: When the ONNX model file exceeds 2 GB, the weights and the model structure are stored separately (for example, llm.onnx and llm.onnx.data); this parameter is the path of the weight file
{ "hidden_size": 896, "layer_nums": 24, "max_seq_len": 250, "attention_mask": "float", "model_path": "/home/lds/model/onnx/llm.onnx", "embedding_file": "/home/lds/model/embeddings_bf16.bin", "tokenizer_json": "/home/lds/Qwen2-0.5B-Instruct/tokenizer.json", "tokenizer_txt": "/home/lds/model/tokenizer.txt", "key_value_shape": [ 2, 1, 0, 2, 64 ], "prompt_template": "<|im_start|>user/n%s<|im_end|>/n<|im_start|>assistant/n", "prompt": "Please introduce Li Bai?", "is_visual": false }
Modify the following parameters according to your model file paths:

- `model_path`: Qwen model file path
- `embedding_file`: Embedding model file path
- `tokenizer_json`: Tokenizer file path
- `tokenizer_txt`: Tokenizer file path
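After editing llm_config.json, a quick sanity check that the file is still valid JSON and that the referenced paths exist (a minimal sketch assuming jq is installed and the file sits in the current directory):

```bash
# Fail loudly if the edited file is no longer valid JSON
python -m json.tool llm_config.json > /dev/null && echo "llm_config.json is valid JSON"

# Check that each referenced model/tokenizer file actually exists
jq -r '.model_path, .embedding_file, .tokenizer_json, .tokenizer_txt' llm_config.json \
  | xargs -I{} ls -lh {}
```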
## Face Swap Algorithm
### Model Preparation
Go to the [Model Repository](https://modelscope.cn/models/nndeploy/nndeploy/summary) to pull model files. The models are located in the nndeploy/face_swap directory.
### Install Python Dependencies
```bash
cd path/nndeploy-workflow/face_swap
pip install -r requirements.txt
```
Download the face_swap/face_swap.json workflow file. Upload the file through the frontend workflow upload function to see the workflow.
Parameter configuration mainly includes: input and output node file path configuration, parameter configuration of preceding and following nodes, and inference node parameter configuration. The workflow file has already configured most parameters. Users only need to modify the following parameters:
- `path_`: Input image path
- `path_`: Output image path
- `model_path_`: Face swap model file
- `model_path_`: Face enhancement model file
### Face Swap + Segmentation Workflow

Go to the Model Repository to pull model files. The models are located in the nndeploy/face_swap directory.
Download the creative/face_swap_seg.json workflow file. Upload the file through the frontend workflow upload function to see the workflow.
### RMBG + YOLO + ResNet Workflow

Go to the Model Repository to pull model files. The models are located in the nndeploy/face_swap directory.
Download the creative/rmbg_yolo_resnet.json workflow file. Upload the file through the frontend workflow upload function to see the workflow.