How to convert mxnet model to caffe? #4
Interested in the conversion process too...
@gasgallo You can try converting it via ONNX; MXNet has just added support for the upsampling operator.
@AaronFan1992 thanks for the help. Actually I've already succeeded in converting to ONNX, but unfortunately my project needs Caffe models.
Did you rebuild MXNet from source to support the upsampling operator?
@Zheweiqiu No, I've manually edited …
@gasgallo Do you mean replacing it with the updated version from GitHub? Update:
@Zheweiqiu I think the master branch still doesn't support many ops (like …). Let me check if I saved my code (I didn't need the …).
Append the following code in `_op_translations.py`:

```python
@mx_op.register("UpSampling")
def convert_upsample(node, **kwargs):
    """Map MXNet's UpSampling operator attributes to ONNX's Upsample operator
    and return the created node.
    """
    name, input_nodes, attrs = get_inputs(node, kwargs)
    sample_type = attrs.get('sample_type', 'nearest')
    sample_type = 'linear' if sample_type == 'bilinear' else sample_type
    scale = convert_string_to_list(attrs.get('scale'))
    scaleh = scalew = float(scale[0])
    if len(scale) > 1:
        scaleh = float(scale[0])
        scalew = float(scale[1])
    scale = [1.0, 1.0, scaleh, scalew]
    node = onnx.helper.make_node(
        'Upsample',
        input_nodes,
        [name],
        scales=scale,
        mode=sample_type,
        name=name
    )
    return [node]
```
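To see what the scale handling in the snippet above produces, here is a small standalone sketch. The `parse_scale` helper is hypothetical, a stand-in for MXNet's `convert_string_to_list`, which is not shown in this thread:

```python
def parse_scale(scale_attr):
    """Hypothetical stand-in for convert_string_to_list:
    turn an attribute string like "(2, 2)" or "2" into a list of tokens."""
    return [tok for tok in scale_attr.strip("()[] ").split(",") if tok.strip()]

def onnx_scales(scale_attr):
    """Build the NCHW `scales` argument passed to ONNX Upsample:
    batch and channel dims are never scaled, only height and width."""
    scale = parse_scale(scale_attr)
    scaleh = scalew = float(scale[0])
    if len(scale) > 1:
        scaleh, scalew = float(scale[0]), float(scale[1])
    return [1.0, 1.0, scaleh, scalew]

print(onnx_scales("2"))       # a single factor applies to both H and W
print(onnx_scales("(2, 3)"))  # separate H and W factors
```

A single MXNet `scale="2"` thus becomes `[1.0, 1.0, 2.0, 2.0]` on the ONNX side.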
I've created a PR here.
@gasgallo Thanks! One step further after trying your code: the "Upsampling" error was gone, but a new error occurred.
And I am using this to do the export.
@Zheweiqiu Yeah, unfortunately I only found two workarounds, and they are valid ONLY when a single scale of 1.0 is used:
```python
@mx_op.register("Crop")
def convert_crop(node, **kwargs):
    """Map MXNet's Crop operator attributes to ONNX's Crop operator
    and return the created node.
    """
    name, inputs, attrs = get_inputs(node, kwargs)
    num_inputs = len(inputs)
    y, x = list(parse_helper(attrs, "offset", [0, 0]))
    h, w = list(parse_helper(attrs, "h_w", [0, 0]))
    if name == "crop0":
        border = [x, y, x + 40, y + 40]
    elif name == "crop1":
        border = [x, y, x + 80, y + 80]
    crop_node = onnx.helper.make_node(
        "Crop",
        inputs=[inputs[0]],
        outputs=[name],
        border=border,
        scale=[1, 1],
        name=name
    )
    logging.warning(
        "Using an experimental ONNX operator: Crop. "
        "Its definition can change.")
    return [crop_node]
```

I stress again that this works if and only if you use a single scale equal to 1.0. Cheers
@gasgallo Model converted. Thank you very much!
@Zheweiqiu can you share how you converted it from MXNet to ONNX?
@gasgallo can you share your way of converting RetinaFace from MXNet to ONNX?
@luan1412167 apache/mxnet#15892 and #4 may help you out. Post here if you encounter any problem and I'll try to answer if I have a clue.
Also interested in converting RetinaFace to Caffe. This issue seems to mostly discuss the MXNet-to-ONNX conversion. Any idea how to do the MXNet-to-Caffe conversion? Many thanks.
@WIll-Xu35 use this.
@gasgallo Thanks, I'll give it a try.
@Zheweiqiu After handling the Crop issue, I ran into a new problem, as below:
Guys, I solved it recently. For RetinaFace, you can remove the Crop layer directly and use Deconvolution instead of UpSampling. Retrain the model with MXNet and it converts to Caffe perfectly.
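A note on why swapping UpSampling for Deconvolution can work: a Deconvolution with kernel 4, stride 2, pad 1 exactly doubles the spatial size, matching a 2x upsampling layer (the learned kernel then takes the place of the fixed interpolation). A minimal sketch of the output-size arithmetic, using the standard transposed-convolution formula shared by MXNet and Caffe:

```python
def deconv_out(in_size, kernel, stride, pad):
    """Spatial output size of a deconvolution (transposed convolution)
    layer, without output padding: stride*(in-1) + kernel - 2*pad."""
    return stride * (in_size - 1) + kernel - 2 * pad

# kernel=4, stride=2, pad=1 doubles the feature map for any input size,
# matching a 2x UpSampling layer:
for s in (10, 20, 33):
    assert deconv_out(s, kernel=4, stride=2, pad=1) == 2 * s
print(deconv_out(20, 4, 2, 1))  # 40
```

The concrete kernel/stride/pad choice here is an illustration of one common configuration, not necessarily the one the commenter used.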
Try working with ONNX version 1.3.0, because BatchNormalization support was dropped after that.
Hi, I attached your function to `_op_translations.py` but got these errors:
Thanks for your work. Could you share your conversion code? I tried to convert it with MMdnn, but it didn't work. If sharing isn't convenient, could you give me some direction? I don't know how to handle the Crop and UpSampling layers. Thanks again.