
[MXNET-1359] Adds a multiclass-MCC metric derived from Pearson #14461

Merged · 2 commits into apache:master · Apr 10, 2019

Conversation

@tlby (Contributor) commented Mar 18, 2019

Description

A multiclass metric equivalent to mxnet.metric.MCC can be derived from mxnet.metric.PearsonCorrelation with the addition of an .argmax() on preds. I'd like to document this use case of Pearson and provide it behind a metric named "PCC" to simplify extending examples from F1 and MCC to multiclass predictions.
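For the binary case, the identity this builds on is easy to check numerically: the MCC of the hard (argmax'd) predictions equals their Pearson correlation with the labels. A minimal NumPy sketch with made-up data (illustrative only, not from the PR):

import numpy

labels = numpy.array([0, 1, 1, 0, 1, 0, 1, 1])
probs = numpy.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.9, 0.1],
                     [0.2, 0.8], [0.7, 0.3], [0.1, 0.9], [0.4, 0.6]])
preds = probs.argmax(axis=1)

# MCC from the 2x2 confusion matrix
tp = numpy.sum((preds == 1) & (labels == 1))
tn = numpy.sum((preds == 0) & (labels == 0))
fp = numpy.sum((preds == 1) & (labels == 0))
fn = numpy.sum((preds == 0) & (labels == 1))
mcc = (tp * tn - fp * fn) / numpy.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

# Pearson correlation of the argmax'd predictions against the labels
pcc = numpy.corrcoef(preds, labels)[0, 1]
assert numpy.isclose(mcc, pcc)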

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is backward incompatible, explain why it must be made
  • Interesting edge cases to note here

@tlby requested a review from szha as a code owner, March 18, 2019 21:14
@tlby (Contributor, Author) commented Mar 19, 2019

hmm, actually numpy.corrcoef() may not be working how I need for the multiclass case. This work is based on the formulas here which relate Matthews to Pearson, but I may need to calculate the covariance myself.

And of course this PR should have some tests added. I'll work through both these issues.
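For reference, the multiclass target quantity (Gorodkin's generalization of MCC) can be written directly in terms of a K x K confusion matrix; the sketch below illustrates that formula and is not the PR's code:

import numpy

def multiclass_mcc(C):
    # C[i, j] = count of samples with true class i predicted as class j
    t = C.sum(axis=1)     # samples per true class
    p = C.sum(axis=0)     # samples per predicted class
    c = numpy.trace(C)    # correctly classified samples
    s = C.sum()           # total samples
    cov_xy = c * s - numpy.dot(t, p)
    cov_xx = s * s - numpy.dot(t, t)
    cov_yy = s * s - numpy.dot(p, p)
    return cov_xy / numpy.sqrt(cov_xx * cov_yy)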

@tlby force-pushed the kMCC branch 2 times, most recently from 39580ff to 6f0a2f4, March 19, 2019 20:31
@pinaraws commented Mar 19, 2019

Thank you for your contribution @tlby!
Some tests are failing, please have a look.

@mxnet-label-bot add[pr-work-in-progress]

@tlby (Contributor, Author) commented Mar 20, 2019

diff --git a/python/mxnet/metric.py b/python/mxnet/metric.py
index 2a33cf4d9..6de76cc64 100644
--- a/python/mxnet/metric.py
+++ b/python/mxnet/metric.py
@@ -1576,9 +1576,8 @@ class PCC(EvalMetric):
             n = max(pred.max(), label.max())
             if n >= self.k:
                 self._grow(n + 1 - self.k)
-            bcm = numpy.zeros((self.k, self.k))
-            for i, j in zip(pred, label):
-                bcm[i, j] += 1
+            ident = numpy.identity(self.k)
+            bcm = numpy.tensordot(ident[label], ident[pred].T, axes=(0,1))
             self.lcm += bcm
             self.gcm += bcm

This seems more efficient for constructing the confusion matrix, but it benchmarks worse. I'm new to NumPy, though; does anyone see a better approach?
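One more vectorized option that may be worth benchmarking (a sketch using the diff's variable names, untested here): numpy.add.at performs an unbuffered scatter-add, which avoids both the Python loop and the dense one-hot intermediates of the tensordot version.

bcm = numpy.zeros((self.k, self.k))
numpy.add.at(bcm, (pred, label), 1)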

@tlby (Contributor, Author) commented Mar 21, 2019

I think this PR is ready now; sorry for posting it prematurely. I am happy with the test coverage at this point, and with how the metric scales out to a large number of classes: the .update() method duration only doubles once there are about 500 classes.

@pinaraws commented

@mxnet-label-bot update[pr-awaiting-review]

@marcoabreu added the pr-awaiting-review label and removed pr-work-in-progress, Mar 21, 2019
@tlby (Contributor, Author) commented Mar 22, 2019

diff --git a/python/mxnet/metric.py b/python/mxnet/metric.py
index 2a33cf4d9..7bc090a0a 100644
--- a/python/mxnet/metric.py
+++ b/python/mxnet/metric.py
@@ -1576,9 +1576,8 @@ class PCC(EvalMetric):
             n = max(pred.max(), label.max())
             if n >= self.k:
                 self._grow(n + 1 - self.k)
-            bcm = numpy.zeros((self.k, self.k))
-            for i, j in zip(pred, label):
-                bcm[i, j] += 1
+            k = self.k
+            bcm = numpy.bincount(label * k + pred, minlength=k*k).reshape((k, k))
             self.lcm += bcm
             self.gcm += bcm

This is a bit faster when k is small, but scales poorly to larger k.
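The poor scaling is consistent with minlength=k*k: bincount allocates and fills a k**2-length array on every call, regardless of batch size. A rough timing harness (hypothetical, not from the PR) to see the crossover:

import timeit
import numpy

def bincount_cm(pred, label, k):
    return numpy.bincount(label * k + pred, minlength=k * k).reshape((k, k))

n = 10000
for k in (10, 100, 1000):
    pred = numpy.random.randint(0, k, size=n)
    label = numpy.random.randint(0, k, size=n)
    t = timeit.timeit(lambda: bincount_cm(pred, label, k), number=100)
    print('k=%4d: %.3fs' % (k, t))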

@tlby force-pushed the kMCC branch 2 times, most recently from b514fee to 7fb48d2, March 25, 2019 16:11
@tlby (Contributor, Author) commented Mar 28, 2019

It turned out that building the confusion matrix wasn't the hotspot in .update() anyway. Once I cleaned up the actual hotspots, this code became comparable in speed to the binary MCC.
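For anyone extending the F1/MCC examples, a hypothetical usage sketch of the metric as merged (the "PCC" name is per the description above; the printed value is not computed here):

import mxnet as mx

metric = mx.metric.PCC()
labels = [mx.nd.array([0, 1, 2, 1])]
preds = [mx.nd.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.2, 0.2, 0.6],
                      [0.3, 0.5, 0.2]])]
metric.update(labels, preds)
print(metric.get())  # ('pcc', <multiclass-MCC value>)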

@piyushghai (Contributor) commented

@szha Can you help with the review of this PR?

@szha (Member) commented Apr 10, 2019

@tlby thanks for the contribution. Great job.

@szha merged commit c6516cc into apache:master, Apr 10, 2019
@tlby deleted the kMCC branch, April 10, 2019 18:23
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
…e#14461)

* Adds a multiclass-MCC metric derived from Pearson

* trigger ci