Hi,
Thank you for sharing the code with nice documentation. It seems the demo.m file is meant to reproduce the results in the paper. I was wondering if you could write another demo that quickly runs the pretrained model on a given image and object pairs to produce 'person on grass'-style labels (or relation probabilities). The current documentation confused me a bit.
Thank you.
Hi barisgecer,
Sorry for my late reply. I agree it would be nice to have such a demo file, but it is not straightforward to write because our model is not end-to-end: you first need to extract object detections and appearance features with Caffe, and then run our MATLAB code to obtain the relation probabilities. I might try to add it later if I have more time. Meanwhile, to run on a given image and pairs:
extract the appearance features and object scores using our trained Fast-RCNN model (apply PCA + l2 normalization)
encode the spatial feature for the pair of boxes with compute_spatial_features.m
apply our trained linear model to the concatenated visual feature to get the final predictions
You can look at eval/predict.m to see how the scores are combined.
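For a rough picture of how the three steps fit together, here is a minimal NumPy sketch of the pipeline: l2-normalized appearance features for the subject and object, a spatial encoding of the box pair, and a linear model on the concatenated feature. All function names, the simplified spatial encoding, and the softmax output are assumptions for illustration; the actual feature extraction (Caffe Fast-RCNN, PCA) and the exact spatial encoding live in the repository's own code (compute_spatial_features.m, eval/predict.m).

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """L2-normalize a feature vector (applied after PCA in the real pipeline)."""
    return x / (np.linalg.norm(x) + eps)

def spatial_feature(box_s, box_o):
    """Encode the relative geometry of a (subject, object) box pair.

    Boxes are [x1, y1, x2, y2]. This is a simplified stand-in for
    compute_spatial_features.m, NOT the exact encoding used in the paper.
    """
    xs1, ys1, xs2, ys2 = box_s
    xo1, yo1, xo2, yo2 = box_o
    ws, hs = xs2 - xs1, ys2 - ys1
    wo, ho = xo2 - xo1, yo2 - yo1
    return np.array([
        (xo1 - xs1) / ws,   # relative x offset
        (yo1 - ys1) / hs,   # relative y offset
        np.log(wo / ws),    # relative width (log scale)
        np.log(ho / hs),    # relative height (log scale)
    ])

def predict_relations(app_s, app_o, box_s, box_o, W, b):
    """Concatenate appearance + spatial features, apply a linear model,
    and return a probability distribution over relation classes (softmax)."""
    f = np.concatenate([l2_normalize(app_s),
                        l2_normalize(app_o),
                        spatial_feature(box_s, box_o)])
    scores = W @ f + b
    scores -= scores.max()        # numerical stability before exponentiation
    p = np.exp(scores)
    return p / p.sum()
```

The returned vector can then be read off against the relation vocabulary (e.g. its argmax gives the most likely predicate such as "on" for a person/grass pair).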