Closed
@@ -29,11 +29,11 @@ object LinearRegressionWithElasticNetExample {
   def main(args: Array[String]): Unit = {
     val conf = new SparkConf().setAppName("LinearRegressionWithElasticNetExample")
     val sc = new SparkContext(conf)
-    val sqlCtx = new SQLContext(sc)
+    val sqlContext = new SQLContext(sc)

     // $example on$
     // Load training data
-    val training = sqlCtx.read.format("libsvm")
+    val training = sqlContext.read.format("libsvm")
       .load("data/mllib/sample_linear_regression_data.txt")

     val lr = new LinearRegression()
@@ -30,11 +30,11 @@ object LogisticRegressionSummaryExample {
   def main(args: Array[String]): Unit = {
     val conf = new SparkConf().setAppName("LogisticRegressionSummaryExample")
     val sc = new SparkContext(conf)
-    val sqlCtx = new SQLContext(sc)
-    import sqlCtx.implicits._
+    val sqlContext = new SQLContext(sc)
+    import sqlContext.implicits._
Member:

Is this import still needed? (I forget.) It's weird to use a variable name in the import, isn't it? (I know it was already like that.) Otherwise this does look like all instances of sqlCtx in examples, yes.

Contributor:

It's necessary; otherwise the compiler complains when DataFrame methods are used.

Member:

I don't think this import is needed anymore. It used to be there for the toDF() method, back when there was no libsvm data source.

Contributor Author:

Let me try to fix it in this PR. Thanks!

Contributor Author:

@jkbradley
69:   val bestThreshold = fMeasure.where($"F-Measure" === maxFMeasure)
70:     .select("threshold").head().getDouble(0)

Still needs the import; otherwise it won't compile.

Member:

Oh, OK, I guess it's the dollar-sign notation.


     // Load training data
-    val training = sqlCtx.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")
+    val training = sqlContext.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

     val lr = new LogisticRegression()
       .setMaxIter(10)
@@ -29,11 +29,11 @@ object LogisticRegressionWithElasticNetExample {
   def main(args: Array[String]): Unit = {
     val conf = new SparkConf().setAppName("LogisticRegressionWithElasticNetExample")
     val sc = new SparkContext(conf)
-    val sqlCtx = new SQLContext(sc)
+    val sqlContext = new SQLContext(sc)

     // $example on$
     // Load training data
-    val training = sqlCtx.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")
+    val training = sqlContext.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

     val lr = new LogisticRegression()
       .setMaxIter(10)