Commit b5b82e2

Refer to "self-contained" rather than "standalone" apps to avoid confusion with standalone deployment mode. And fix placement of reference to this in MLlib docs.
1 parent 92e017f commit b5b82e2

5 files changed: +37 -36 lines changed

docs/mllib-clustering.md

Lines changed: 7 additions & 7 deletions

@@ -69,7 +69,7 @@ println("Within Set Sum of Squared Errors = " + WSSSE)
 All of MLlib's methods use Java-friendly types, so you can import and call them there the same
 way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
 Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
 that is equivalent to the provided example in Scala is given below:
 
 {% highlight java %}
@@ -113,12 +113,6 @@ public class KMeansExample {
   }
 }
 {% endhighlight %}
-
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 
 <div data-lang="python" markdown="1">
@@ -153,3 +147,9 @@ print("Within Set Sum of Squared Error = " + str(WSSSE))
 </div>
 
 </div>
+
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+Quick Start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
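A side note for readers of this diff: the paragraphs being moved all hinge on the same API detail, that MLlib's methods take Scala `RDD`s while Java code holds `JavaRDD`s, which are unwrapped via `.rdd()`. A minimal sketch of that conversion for the k-means case, assuming the Spark 1.x-era MLlib API these docs target (the class name and data are illustrative, not taken from the docs' `KMeansExample`):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

// Hypothetical example class, not the KMeansExample quoted in the diff.
public class KMeansRddConversion {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("KMeansRddConversion").setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // A tiny in-memory dataset of feature vectors, held as a JavaRDD.
    JavaRDD<Vector> points = sc.parallelize(Arrays.asList(
        Vectors.dense(0.0, 0.0), Vectors.dense(1.0, 1.0),
        Vectors.dense(9.0, 8.0), Vectors.dense(8.0, 9.0)));

    // KMeans.train expects a Scala RDD, so unwrap the JavaRDD with .rdd().
    int k = 2;
    int maxIterations = 20;
    KMeansModel model = KMeans.train(points.rdd(), k, maxIterations);
    System.out.println("Cluster centers: " + Arrays.toString(model.clusterCenters()));

    sc.stop();
  }
}
```

Building this needs *spark-mllib* on the classpath, as the moved paragraph says; for Spark 1.x that would be the `org.apache.spark:spark-mllib_2.10` Maven artifact.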

docs/mllib-collaborative-filtering.md

Lines changed: 7 additions & 7 deletions

@@ -110,7 +110,7 @@ val model = ALS.trainImplicit(ratings, rank, numIterations, alpha)
 All of MLlib's methods use Java-friendly types, so you can import and call them there the same
 way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
 Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
 that is equivalent to the provided example in Scala is given bellow:
 
 {% highlight java %}
@@ -184,12 +184,6 @@ public class CollaborativeFiltering {
   }
 }
 {% endhighlight %}
-
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 
 <div data-lang="python" markdown="1">
@@ -229,6 +223,12 @@ model = ALS.trainImplicit(ratings, rank, numIterations, alpha = 0.01)
 
 </div>
 
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+Quick Start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
+
 ## Tutorial
 
 The [training exercises](https://databricks-training.s3.amazonaws.com/index.html) from the Spark Summit 2014 include a hands-on tutorial for
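The same `.rdd()` unwrapping applies to the ALS code this file documents. A hedged sketch under the same 1.x-era API assumption (`Rating` triples and `ALS.train`; the class name and data are made up for illustration):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.recommendation.ALS;
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;
import org.apache.spark.mllib.recommendation.Rating;

// Hypothetical example class, not the CollaborativeFiltering class from the docs.
public class AlsRddConversion {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("AlsRddConversion").setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // (user, product, rating) triples as a JavaRDD of Rating objects.
    JavaRDD<Rating> ratings = sc.parallelize(Arrays.asList(
        new Rating(1, 1, 5.0), new Rating(1, 2, 1.0),
        new Rating(2, 1, 4.0), new Rating(2, 2, 1.0)));

    int rank = 10;
    int numIterations = 10;
    // ALS.train takes a Scala RDD, hence ratings.rdd().
    MatrixFactorizationModel model = ALS.train(ratings.rdd(), rank, numIterations);
    System.out.println("Predicted rating for user 2, product 2: " + model.predict(2, 2));

    sc.stop();
  }
}
```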

docs/mllib-dimensionality-reduction.md

Lines changed: 9 additions & 8 deletions

@@ -121,9 +121,9 @@ public class SVD {
 The same code applies to `IndexedRowMatrix` if `U` is defined as an
 `IndexedRowMatrix`.
 
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
+In order to run the above application, follow the instructions
+provided in the [Self-Contained
+Applications](quick-start.html#self-contained-applications) section of the Spark
 quick-start guide. Be sure to also include *spark-mllib* to your build file as
 a dependency.
 
@@ -200,10 +200,11 @@ public class PCA {
 }
 {% endhighlight %}
 
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 </div>
+
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+quick-start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
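For context on the `SVD` and `PCA` classes named in these hunks: MLlib computes a truncated SVD on a distributed `RowMatrix`, again fed by an unwrapped `JavaRDD`. A minimal sketch, assuming the 1.x-era `computeSVD(k, computeU, rCond)` signature (class name and data are illustrative):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.linalg.Matrix;
import org.apache.spark.mllib.linalg.SingularValueDecomposition;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.linalg.distributed.RowMatrix;

// Hypothetical example class, not the SVD class from the docs.
public class SvdSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("SvdSketch").setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<Vector> rows = sc.parallelize(Arrays.asList(
        Vectors.dense(1.0, 0.0, 7.0),
        Vectors.dense(2.0, 5.0, 1.0),
        Vectors.dense(4.0, 3.0, 8.0)));

    // Wrap the unwrapped Scala RDD of rows as a distributed matrix.
    RowMatrix mat = new RowMatrix(rows.rdd());

    // Top-2 singular values; also materialize U. rCond trims tiny singular values.
    SingularValueDecomposition<RowMatrix, Matrix> svd = mat.computeSVD(2, true, 1e-9);
    System.out.println("Singular values: " + svd.s());

    sc.stop();
  }
}
```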

docs/mllib-linear-methods.md

Lines changed: 10 additions & 10 deletions

@@ -247,7 +247,7 @@ val modelL1 = svmAlg.run(training)
 All of MLlib's methods use Java-friendly types, so you can import and call them there the same
 way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
 Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
 that is equivalent to the provided example in Scala is given bellow:
 
 {% highlight java %}
@@ -323,9 +323,9 @@ svmAlg.optimizer()
 final SVMModel modelL1 = svmAlg.run(training.rdd());
 {% endhighlight %}
 
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
+In order to run the above application, follow the instructions
+provided in the [Self-Contained
+Applications](quick-start.html#self-contained-applications) section of the Spark
 quick-start guide. Be sure to also include *spark-mllib* to your build file as
 a dependency.
 </div>
@@ -482,12 +482,6 @@ public class LinearRegression {
   }
 }
 {% endhighlight %}
-
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 
 <div data-lang="python" markdown="1">
@@ -519,6 +513,12 @@ print("Mean Squared Error = " + str(MSE))
 </div>
 </div>
 
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+quick-start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
+
 ## Streaming linear regression
 
 When data arrive in a streaming fashion, it is useful to fit regression models online,
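The second hunk above shows `svmAlg.run(training.rdd())`; the static `SVMWithSGD.train` helper needs the same unwrapping. A sketch under the same 1.x-era API assumption (toy data, hypothetical class name):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.classification.SVMModel;
import org.apache.spark.mllib.classification.SVMWithSGD;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;

// Hypothetical example class, not the docs' SVM or LinearRegression examples.
public class SvmSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("SvmSketch").setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Labeled training points: a 0.0/1.0 label plus a feature vector.
    JavaRDD<LabeledPoint> training = sc.parallelize(Arrays.asList(
        new LabeledPoint(0.0, Vectors.dense(0.0, 1.0)),
        new LabeledPoint(0.0, Vectors.dense(1.0, 0.5)),
        new LabeledPoint(1.0, Vectors.dense(5.0, 4.0)),
        new LabeledPoint(1.0, Vectors.dense(4.0, 5.0))));

    int numIterations = 100;
    // As with the builder-style svmAlg.run(training.rdd()), unwrap the JavaRDD.
    SVMModel model = SVMWithSGD.train(training.rdd(), numIterations);
    System.out.println("Prediction for (4.5, 4.5): "
        + model.predict(Vectors.dense(4.5, 4.5)));

    sc.stop();
  }
}
```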

docs/quick-start.md

Lines changed: 4 additions & 4 deletions

@@ -8,7 +8,7 @@ title: Quick Start
 
 This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's
 interactive shell (in Python or Scala),
-then show how to write standalone applications in Java, Scala, and Python.
+then show how to write applications in Java, Scala, and Python.
 See the [programming guide](programming-guide.html) for a more complete reference.
 
 To follow along with this guide, first download a packaged release of Spark from the
@@ -215,8 +215,8 @@ a cluster, as described in the [programming guide](programming-guide.html#initia
 </div>
 </div>
 
-# Standalone Applications
-Now say we wanted to write a standalone application using the Spark API. We will walk through a
+# Self-Contained Applications
+Now say we wanted to write a self-contained application using the Spark API. We will walk through a
 simple application in both Scala (with SBT), Java (with Maven), and Python.
 
 <div class="codetabs">
@@ -387,7 +387,7 @@ Lines with a: 46, Lines with b: 23
 </div>
 <div data-lang="python" markdown="1">
 
-Now we will show how to write a standalone application using the Python API (PySpark).
+Now we will show how to write an application using the Python API (PySpark).
 
 As an example, we'll create a simple Spark application, `SimpleApp.py`:
 