[SPARK-19746][ML] Faster indexing for logistic aggregator#17078

Closed
sethah wants to merge 4 commits into apache:master from sethah:logistic_agg_indexing
Conversation

@sethah
Contributor

@sethah sethah commented Feb 27, 2017

What changes were proposed in this pull request?

JIRA: SPARK-19746

The following code is inefficient:

    val localCoefficients: Vector = bcCoefficients.value

    features.foreachActive { (index, value) =>
      val stdValue = value / localFeaturesStd(index)
      var j = 0
      while (j < numClasses) {
        margins(j) += localCoefficients(index * numClasses + j) * stdValue
        j += 1
      }
    }

localCoefficients(index * numClasses + j) calls Vector.apply which creates a new Breeze vector and indexes that. Even if it is not that slow to create the object, we will generate a lot of extra garbage that may result in longer GC pauses. This is a hot inner loop, so we should optimize wherever possible.

How was this patch tested?

I don't think there's a great way to test this patch. It's purely performance related, so the existing unit tests should guarantee that we haven't made any unwanted behavioral changes. Empirically, I observed 10-40% speedups just running short local tests. I suspect the big differences will show up when large data/coefficient sizes force more frequent GC pauses. I welcome other ideas for testing.
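For illustration, here is a minimal, self-contained sketch of the optimization described above: extract the primitive array behind the coefficients once, then index it directly in the hot loop instead of going through Vector.apply. The plain arrays here are simplified stand-ins for Spark's Vector types and the aggregator's fields, not the actual ML classes:

```scala
object FasterIndexing {
  // Coefficients laid out row-major as (feature, class), matching the
  // index * numClasses + j addressing in the PR description.
  def margins(
      features: Array[Double],
      coefficients: Array[Double],  // extracted once, not on every access
      featuresStd: Array[Double],
      numClasses: Int): Array[Double] = {
    val m = new Array[Double](numClasses)
    var index = 0
    while (index < features.length) {
      val value = features(index)
      if (value != 0.0) {  // mirrors foreachActive skipping inactive entries
        val stdValue = value / featuresStd(index)
        var j = 0
        while (j < numClasses) {
          // Direct primitive-array indexing: no per-call object allocation,
          // so the inner loop generates no extra garbage.
          m(j) += coefficients(index * numClasses + j) * stdValue
          j += 1
        }
      }
      index += 1
    }
    m
  }
}
```

The key difference from the snippet above is that the array lookup compiles down to a plain load, whereas `Vector.apply` on each access allocates a wrapper in this hot inner loop.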

@sethah
Contributor Author

sethah commented Feb 27, 2017

ping @dbtsai @yanboliang

Member

@dbtsai dbtsai left a comment


Couple comments.


    val localFeaturesStd = bcFeaturesStd.value
-   val localCoefficients = bcCoefficients.value
+   val localCoefficients = bcCoefficients.value.toArray
Member

In the first version of LOR, we had the following code, which avoids the issue you pointed out.

    private val weightsArray = weights match {
      case dv: DenseVector => dv.values
      case _ =>
        throw new IllegalArgumentException(
          s"weights only supports dense vector but got type ${weights.getClass}.")
    }

I think the older approach will be more efficient, since toArray is only called once (you can add a case for sparse), and for sparse initial coefficients we will not convert from sparse to dense again and again.

This can be future work. With L1 applied, the coefficients can be very sparse, so we could compress the coefficients at each iteration and have a specialized implementation for UpdateInPlace.
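A hedged sketch of the one-time extraction described in this thread, extended with a sparse case so toArray is paid only once at construction. The DenseVec/SparseVec types below are simplified stand-ins for MLlib's DenseVector/SparseVector, not the real classes:

```scala
sealed trait Vec
final case class DenseVec(values: Array[Double]) extends Vec
final case class SparseVec(size: Int, indices: Array[Int], values: Array[Double]) extends Vec {
  // Densify once: O(size) up front, then O(1) reads forever after.
  def toArray: Array[Double] = {
    val arr = new Array[Double](size)
    var i = 0
    while (i < indices.length) { arr(indices(i)) = values(i); i += 1 }
    arr
  }
}

class Aggregator(coefficients: Vec) {
  // Extracted exactly once at construction, so the hot loop never pays
  // for Vector.apply or a repeated sparse-to-dense conversion.
  private val coefficientsArray: Array[Double] = coefficients match {
    case DenseVec(values) => values
    case sv: SparseVec    => sv.toArray
  }

  def coefficient(flatIndex: Int): Double = coefficientsArray(flatIndex)
}
```

With the sparse case handled here, a sparse initial coefficient vector is densified a single time per aggregator rather than on every access.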

Contributor Author

The above check got us into trouble before: if we don't add @transient lazy val, we'll serialize the coefficients. The call to toArray is really just a small bit of pointer indirection, and while I agree it's not great to call it every time, the extra function call should pale in comparison to the O(numClasses * numFeatures) ops we do in the method.

That said, I'm ok with either solution, just wanted to point out the pros/cons of each. Let me know what you think, thanks for reviewing!

Contributor Author

Another thought is that by tagging the class member as @transient lazy val we are at least making this hidden problem more explicit. I think using the transient method makes it a bit less likely that someone will come along and make a change in the future that serializes the coefficients. I'll plan to update it here with the transient tag then.
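To illustrate the @transient lazy val pattern under discussion, here is a minimal, hypothetical sketch using plain Java serialization in place of Spark's closure serializer; CoefSource stands in for the broadcast variable and is not a real Spark class:

```scala
import java.io._

// Stand-in for a broadcast variable: cheap to serialize, the heavy array
// is produced on demand (as bcCoefficients.value.toArray would be).
class CoefSource extends Serializable {
  def fetch(): Array[Double] = Array(1.0, 2.0)
}

class AggSketch(source: CoefSource) extends Serializable {
  // Without @transient, the cached array would be written into every task
  // closure. With @transient lazy val it is dropped during serialization
  // and rebuilt from `source` on first access after deserialization.
  @transient private lazy val cached: Array[Double] = source.fetch()

  def head: Double = cached(0)
}

object SerDe {
  // Round-trip through Java serialization, mimicking a driver-to-executor hop.
  def roundTrip[T](obj: T): T = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(obj)
    oos.close()
    val ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray))
    ois.readObject().asInstanceOf[T]
  }
}
```

After the round trip, accessing `head` re-runs the lazy initializer on the "executor" side, so the array never travels with the closure.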

Member

My concern is that if the coefficients are sparse, we are not just doing pointer indirection but creating a new dense array from the sparse vector. I know we always pass in dense coefficients, so this is not an issue now; that being said, in the following code, if we call compress on the coefficients, we may be able to broadcast a smaller object when L1 is applied, or in the initial iterations when most of the elements in the coefficients are zero.

https://github.com/sethah/spark/blob/3bea389f6780e1fd0385fbe26954fa4f59b69e37/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala#L1674

@sethah
Contributor Author

sethah commented Feb 27, 2017

Jenkins test this please.

@SparkQA

SparkQA commented Feb 27, 2017

Test build #73503 has finished for PR 17078 at commit 61d22e7.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.



    private var lossSum = 0.0

    private val gradientSumArray = Array.fill[Double](coefficientSize)(0.0D)
+   @transient private lazy val coefficientsArray = bcCoefficients.value match {
Member

Can you add the type annotation to coefficientsArray, so people can clearly see that it's a primitive array?

Contributor Author

Yeah, I'll update it.

    val bcFeaturesStd = spark.sparkContext.broadcast(Array(1.0))
    val binaryAgg = new LogisticAggregator(bcCoefficientsBinary, bcFeaturesStd, 2,
      fitIntercept = true, multinomial = false)
    val thrownBinary = withClue("binary logistic aggregator cannot handle sparse coefficients") {
Member

I think we should handle sparse coefficients for further performance improvement. But not in this PR.

@asfgit asfgit closed this in 16d8472 Feb 28, 2017
@dbtsai
Member

dbtsai commented Feb 28, 2017

Thanks. Merged into master.

@SparkQA

SparkQA commented Feb 28, 2017

Test build #73537 has finished for PR 17078 at commit 44ee113.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

