diff --git a/docs/building-spark.md b/docs/building-spark.md
index 69d83023b228..67a2ce79dc81 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -111,7 +111,7 @@ should run continuous compilation (i.e. wait for changes). However, this has not
 extensively. A couple of gotchas to note:
 
 * it only scans the paths `src/main` and `src/test` (see
-[docs](http://scala-tools.org/mvnsites/maven-scala-plugin/usage_cc.html)), so it will only work
+[docs](http://davidb.github.io/scala-maven-plugin/example_cc.html)), so it will only work
 from within certain submodules that have that structure.
 
 * you'll typically need to run `mvn install` from the project root for compilation within
diff --git a/docs/rdd-programming-guide.md b/docs/rdd-programming-guide.md
index 26025984da64..29af159510e4 100644
--- a/docs/rdd-programming-guide.md
+++ b/docs/rdd-programming-guide.md
@@ -604,7 +604,7 @@ before the `reduce`, which would cause `lineLengths` to be saved in memory after
 Spark's API relies heavily on passing functions in the driver program to run on the cluster.
 There are two recommended ways to do this:
 
-* [Anonymous function syntax](http://docs.scala-lang.org/tutorials/tour/anonymous-function-syntax.html),
+* [Anonymous function syntax](http://docs.scala-lang.org/tour/basics.html#functions),
 which can be used for short pieces of code.
 * Static methods in a global singleton object. For example, you can define `object MyFunctions`
 and then pass `MyFunctions.func1`, as follows:
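
As reviewer context (not part of the patch itself): the rdd-programming-guide passage touched by the second hunk describes two recommended ways to pass functions. A minimal sketch of both, using plain Scala collections in place of RDDs so it runs without a Spark dependency; the `PassingFunctions` wrapper name is ours, while `MyFunctions`/`func1` follow the names in the doc text:

```scala
object MyFunctions {
  // A static method in a global singleton object: Spark can ship a
  // reference to this without capturing any enclosing object's state.
  def func1(s: String): Int = s.length
}

object PassingFunctions {
  def main(args: Array[String]): Unit = {
    val lines = Seq("spark", "rdd", "programming guide")

    // 1. Anonymous function syntax, for short pieces of code.
    val lengthsInline = lines.map(s => s.length)

    // 2. A static method in a global singleton object.
    val lengthsNamed = lines.map(MyFunctions.func1)

    println(lengthsInline) // List(5, 3, 17)
    println(lengthsNamed)  // List(5, 3, 17)
  }
}
```

In a real Spark job the `lines.map(...)` calls would be on an `RDD[String]`, but the way the function is passed is identical.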