
[Merged by Bors] - spark-k8s bundle for 23.4 #238

5 commits; changes shown from 4 commits.
5 changes: 4 additions & 1 deletion .gitignore
@@ -14,4 +14,7 @@ crate-hashes.json
result
image.tar

tilt_options.json

**/bundle/
**/bundle.Dockerfile
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,10 @@ All notable changes to this project will be documented in this file.

## [Unreleased]

### Added

- Generate OLM bundle for Release 23.4.0 ([#238]).

### Changed

- `operator-rs` `0.38.0` -> `0.41.0` ([#235]).
@@ -12,6 +16,7 @@ All notable changes to this project will be documented in this file.

[#235]: https://github.com/stackabletech/spark-k8s-operator/pull/235
[#236]: https://github.com/stackabletech/spark-k8s-operator/pull/236
[#238]: https://github.com/stackabletech/spark-k8s-operator/pull/238

## [23.4.0] - 2023-04-17

16 changes: 16 additions & 0 deletions deploy/olm/23.4.0/manifests/configmap.yaml
@@ -0,0 +1,16 @@
---
apiVersion: v1
data:
properties.yaml: |
---
version: 0.1.0
spec:
units: []
properties: []
kind: ConfigMap
metadata:
name: spark-k8s-operator-configmap
labels:
app.kubernetes.io/name: spark-k8s-operator
app.kubernetes.io/instance: spark-k8s-operator
app.kubernetes.io/version: "23.4.0"
37 changes: 37 additions & 0 deletions deploy/olm/23.4.0/manifests/roles.yaml
@@ -0,0 +1,37 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: spark-k8s-clusterrole
rules:
- apiGroups:
- ""
resources:
- configmaps
- persistentvolumeclaims
- pods
- secrets
- serviceaccounts
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- events.k8s.io
resources:
- events
verbs:
- create
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
resourceNames:
- stackable-products-scc
verbs:
- use
2,909 changes: 2,909 additions & 0 deletions deploy/olm/23.4.0/manifests/sparkapplication.yaml

Large diffs are not rendered by default.

1,517 changes: 1,517 additions & 0 deletions deploy/olm/23.4.0/manifests/sparkhistoryserver.yaml

Large diffs are not rendered by default.

@@ -0,0 +1,249 @@
---
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
name: spark-k8s-operator.v23.4.0
spec:
annotations:
support: stackable.tech
olm.properties: '[]'
categories: Storage
capabilities: Full Lifecycle
description: Stackable Operator for Apache Spark
repository: https://github.com/stackabletech/spark-k8s-operator
containerImage: docker.stackable.tech/stackable/spark-k8s-operator:23.4.0

displayName: Stackable Operator for Apache Spark
  description: |-
    This is a Kubernetes operator to manage [Apache Spark](https://spark.apache.org/) applications. The Stackable Operator for Apache Spark
    is part of the Stackable Data Platform, a curated selection of the best open source data apps like Kafka, Druid, Trino or Spark, all
    working together seamlessly. Based on Kubernetes, it runs everywhere, on-prem or in the cloud.

    You can install the operator using [stackablectl or Helm](https://docs.stackable.tech/spark-k8s/stable/getting_started/installation.html).
    See it in action in one of our [demos](https://stackable.tech/en/demos/) or follow this
    [tutorial](https://docs.stackable.tech/spark-k8s/stable/getting_started/first_steps.html).

N.B. this operator requires the following Stackable internal operators to be installed as well:

- [Commons Operator](https://github.com/stackabletech/commons-operator)
- [Secret Operator](https://github.com/stackabletech/secret-operator)
keywords:
- spark-k8s
maintainers:
- email: [email protected]
name: Stackable GmbH
maturity: stable
provider:
name: Stackable GmbH
url: https://stackable.tech
version: 23.4.0
minKubeVersion: 1.23.0

installModes:
- supported: true
type: OwnNamespace
- supported: true
type: SingleNamespace
- supported: false
type: MultiNamespace
- supported: false
type: AllNamespaces

customresourcedefinitions:
owned:
# a list of CRDs that this operator owns
# name is the metadata.name of the CRD (which is of the form <plural>.<group>)
- name: sparkapplications.spark.stackable.tech
# version is the spec.versions[].name value defined in the CRD
version: v1alpha1
# kind is the CamelCased singular value defined in spec.names.kind of the CRD.
kind: SparkApplication
# human-friendly display name of the CRD for rendering in graphical consoles (optional)
displayName: Apache Spark Application
# a short description of the CRDs purpose for rendering in graphical consoles (optional)
description: Represents a Spark Application
- name: sparkhistoryservers.spark.stackable.tech
version: v1alpha1
kind: SparkHistoryServer
displayName: Apache Spark History Server
description: Represents a Spark History Server

relatedImages:
- name: spark-k8s-operator
image: docker.stackable.tech/stackable/spark-k8s-operator:23.4.0
install:
# strategy indicates what type of deployment artifacts are used
strategy: deployment
# spec for the deployment strategy is a list of deployment specs and required permissions - similar to a pod template used in a deployment
spec:
permissions:
- serviceAccountName: spark-k8s-operator
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- '*'
# permissions required at the cluster scope
clusterPermissions:
- serviceAccountName: spark-k8s-operator
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- list
- apiGroups:
- ""
resources:
- pods
- configmaps
- secrets
- services
- endpoints
- serviceaccounts
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- create
- delete
- list
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- apiGroups:
- events.k8s.io
resources:
- events
verbs:
- create
- apiGroups:
- spark.stackable.tech
resources:
- sparkapplications
- sparkhistoryservers
verbs:
- get
- list
- patch
- watch
- apiGroups:
- spark.stackable.tech
resources:
- sparkapplications/status
verbs:
- patch
- apiGroups:
- s3.stackable.tech
resources:
- s3connections
- s3buckets
verbs:
- get
- list
- watch
- apiGroups:
- authentication.stackable.tech
resources:
- authenticationclasses
verbs:
- get
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
verbs:
- bind
resourceNames:
- spark-k8s-clusterrole
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
resourceNames:
- hostmount-anyuid
verbs:
- use

deployments:
- name: spark-k8s-operator
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: spark-k8s-operator
app.kubernetes.io/instance: spark-k8s-operator
template:
metadata:
labels:
app.kubernetes.io/name: spark-k8s-operator
app.kubernetes.io/instance: spark-k8s-operator
spec:
serviceAccountName: spark-k8s-operator
securityContext: {}
containers:
- name: spark-k8s-operator
securityContext: {}
image: docker.stackable.tech/stackable/spark-k8s-operator:23.4.0
imagePullPolicy: IfNotPresent
resources: {}
volumeMounts:
- mountPath: /etc/stackable/spark-k8s-operator/config-spec
name: config-spec
volumes:
- name: config-spec
configMap:
name: spark-k8s-operator-configmap
10 changes: 10 additions & 0 deletions deploy/olm/23.4.0/metadata/dependencies.yaml
@@ -0,0 +1,10 @@
---
dependencies:
- type: olm.package
value:
packageName: commons-operator-package
version: "23.4.0"
- type: olm.package
value:
packageName: secret-operator-package
version: "23.4.0"
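OLM resolves the two `olm.package` entries above at install time, so the commons and secret operator packages must be available in a catalog on the cluster. As a quick local sanity check of what a `dependencies.yaml` declares, one could extract the package names with plain `grep`/`awk` (the helper name `list-olm-deps` is hypothetical, not part of this PR; a temp file stands in for `deploy/olm/23.4.0/metadata/dependencies.yaml`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: list the packageName values declared in an OLM
# dependencies.yaml. Uses only grep/awk so it does not require yq.
list-olm-deps() {
  grep 'packageName:' "$1" | awk '{print $2}'
}

# Stand-in for the real deploy/olm/23.4.0/metadata/dependencies.yaml
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
---
dependencies:
  - type: olm.package
    value:
      packageName: commons-operator-package
      version: "23.4.0"
  - type: olm.package
    value:
      packageName: secret-operator-package
      version: "23.4.0"
EOF

list-olm-deps "$tmp"
# prints:
# commons-operator-package
# secret-operator-package
```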
33 changes: 33 additions & 0 deletions deploy/olm/bundle.sh
@@ -0,0 +1,33 @@
#!/usr/bin/env bash
# usage: bundle.sh <release>, called from base folder:
# e.g. ./deploy/olm/bundle.sh 23.1.0

set -euo pipefail
set -x

OPERATOR_NAME="spark-k8s-operator"

bundle-clean() {
rm -rf "deploy/olm/${VERSION}/bundle"
rm -rf "deploy/olm/${VERSION}/bundle.Dockerfile"
}


build-bundle() {
opm alpha bundle generate --directory manifests --package "${OPERATOR_NAME}-package" --output-dir bundle --channels stable --default stable
cp metadata/*.yaml bundle/metadata/
docker build -t "docker.stackable.tech/stackable/${OPERATOR_NAME}-bundle:${VERSION}" -f bundle.Dockerfile .
docker push "docker.stackable.tech/stackable/${OPERATOR_NAME}-bundle:${VERSION}"
opm alpha bundle validate --tag "docker.stackable.tech/stackable/${OPERATOR_NAME}-bundle:${VERSION}" --image-builder docker
}

main() {
  VERSION="$1"

  # clean up any previous bundle output while still in the repo root,
  # since bundle-clean uses paths relative to the base folder
  bundle-clean

  pushd "deploy/olm/${VERSION}"
  build-bundle
  popd
}

main "$@"
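The script takes the release as its only argument but never checks its shape before running `rm -rf` with it. A small guard could fail fast on malformed input; the function name and regex below are assumptions for illustration, not part of this PR:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical guard: reject arguments that do not look like a
# MAJOR.MINOR.PATCH release (e.g. 23.4.0) before any cleanup runs.
validate-version() {
  if [[ "$1" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "ok"
  else
    echo "invalid release: $1" >&2
    return 1
  fi
}

validate-version "23.4.0"   # prints: ok
```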
1 change: 0 additions & 1 deletion tests/test-definition.yaml
@@ -1,7 +1,6 @@
#
# To run these tests on OpenShift you have to ensure that:
# 1. The "openshift" dimension below is set to "true"
# 2. At least one node in the cluster is labeled with "node: 1"
#
---
dimensions: