From 97d1a832a03745c08ecbe204199bca8617c97ded Mon Sep 17 00:00:00 2001
From: Ikko Eltociear Ashimine
Date: Sun, 5 May 2024 23:21:20 +0900
Subject: [PATCH] docs: update README.md

correspondance -> correspondence
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7ad1fc9..a1a47a9 100644
--- a/README.md
+++ b/README.md
@@ -184,7 +184,7 @@ We replace the vision backbone and keep the same LLM and training recipe as in L
 Probing 3D Awareness: we use the code from [Probing the 3D
 Awareness of Visual Foundation Models](https://github.com/mbanani/probe3d)
 and evaluate our RADIO model and its teachers on monocular depth,
-surface normals and multi-view correspondance tasks, using the
+surface normals and multi-view correspondence tasks, using the
 NAVI dataset. For each task we report the accuracy, averaged over all
 thresholds. RADIO preserves features of DINOv2 and performs much
 better than CLIP analogs.