diff --git a/docs/index.md b/docs/index.md
index 7e57eddb6da8..fea62865e216 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -34,9 +34,8 @@ source, visit [Building Spark](building-spark.html).
 Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it should run on
 any platform that runs a supported version of Java. This should include JVMs on x86_64 and
 ARM64. It's easy to run locally on one machine --- all you need is to have `java` installed
 on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
-Spark runs on Java 17/21, Scala 2.13, Python 3.8+, and R 3.5+.
-When using the Scala API, it is necessary for applications to use the same version of Scala that Spark was compiled for.
-For example, when using Scala 2.13, use Spark compiled for 2.13, and compile code/applications for Scala 2.13 as well.
+Spark runs on Java 17/21, Scala 2.13, Python 3.9+, and R 3.5+ (Deprecated).
+When using the Scala API, applications must use the same version of Scala that Spark was compiled for; since Spark 4.0.0, that is Scala 2.13.
 
 # Running the Examples and Shell
 
@@ -110,7 +109,7 @@ options for deployment:
 * [Spark Streaming](streaming-programming-guide.html): processing data streams using DStreams (old API)
 * [MLlib](ml-guide.html): applying machine learning algorithms
 * [GraphX](graphx-programming-guide.html): processing graphs
-* [SparkR](sparkr.html): processing data with Spark in R
+* [SparkR (Deprecated)](sparkr.html): processing data with Spark in R
 * [PySpark](api/python/getting_started/index.html): processing data with Spark in Python
 * [Spark SQL CLI](sql-distributed-sql-engine-spark-sql-cli.html): processing data with SQL on the command line
 
@@ -128,10 +127,13 @@ options for deployment:
 * [Cluster Overview](cluster-overview.html): overview of concepts and components when running on a cluster
 * [Submitting Applications](submitting-applications.html): packaging and deploying applications
 * Deployment modes:
-  * [Amazon EC2](https://github.com/amplab/spark-ec2): scripts that let you launch a cluster on EC2 in about 5 minutes
   * [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
   * [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
-  * [Kubernetes](running-on-kubernetes.html): deploy Spark on top of Kubernetes
+  * [Kubernetes](running-on-kubernetes.html): deploy Spark apps on top of Kubernetes directly
+  * [Amazon EC2](https://github.com/amplab/spark-ec2): scripts that let you launch a cluster on EC2 in about 5 minutes
+* [Spark Kubernetes Operator](https://github.com/apache/spark-kubernetes-operator):
+  * [SparkApp](https://github.com/apache/spark-kubernetes-operator/blob/main/examples/pyspark-pi.yaml): deploy Spark apps on top of Kubernetes via [operator patterns](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
+  * [SparkCluster](https://github.com/apache/spark-kubernetes-operator/blob/main/examples/cluster-with-template.yaml): deploy Spark clusters on top of Kubernetes via [operator patterns](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
 
 **Other Documents:**
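
The reworded compatibility note above says applications must be built with the same Scala version Spark was compiled for, which is 2.13 since Spark 4.0.0. A minimal sbt sketch of what that pinning looks like in practice; the specific patch versions chosen here are illustrative assumptions, not something this page prescribes:

```scala
// build.sbt -- illustrative sketch, not part of this diff.
// Spark 4.0.0 artifacts are compiled for Scala 2.13, so the application
// must be compiled with a Scala 2.13.x compiler as well.
scalaVersion := "2.13.14" // assumption: any recent 2.13.x release

libraryDependencies ++= Seq(
  // "%%" appends the Scala binary version ("_2.13") to the artifact
  // name, so this resolves to spark-sql_2.13 and stays in sync with
  // scalaVersion automatically.
  "org.apache.spark" %% "spark-sql" % "4.0.0" % "provided"
)
```

Using `%%` rather than hard-coding `spark-sql_2.13` means a mismatched Scala version fails at dependency resolution instead of surfacing later as binary incompatibility at runtime.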