Get started with Cloudflow Contrib

We assume you are comfortable with building and running a Cloudflow application.


The goal is to have a running Cloudflow application containing Flink and/or Spark streamlets, deployed natively.

Installing Cloudflow

To install Cloudflow, follow the official guide. In addition, you will need to turn off the "legacy" handling of the Flink and Spark runtimes in the Cloudflow operator. You can do that by upgrading the Helm installation and adding two Java system properties: -Dcloudflow.platform.flink-enabled=false and -Dcloudflow.platform.spark-enabled=false.

A full example installation command will look as follows:

helm upgrade -i cloudflow cloudflow-helm-charts/cloudflow \
  --version "2.1.0" \
  --set cloudflow_operator.jvm.opts="-Dcloudflow.platform.flink-enabled=false -Dcloudflow.platform.spark-enabled=false -XX:MaxRAMPercentage=90.0 -XX:+UseContainerSupport" \
  --set kafkaClusters.default.bootstrapServers=cloudflow-strimzi-kafka-bootstrap.cloudflow:9092 \
  --namespace cloudflow
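Before running the upgrade, it can help to keep the operator JVM options in a shell variable and sanity-check that both legacy runtimes are disabled. A minimal sketch, using the option values from the example above:

```shell
# The operator JVM options from the example installation, kept in one
# variable so the two Cloudflow Contrib flags are easy to audit.
JVM_OPTS="-Dcloudflow.platform.flink-enabled=false -Dcloudflow.platform.spark-enabled=false -XX:MaxRAMPercentage=90.0 -XX:+UseContainerSupport"

# Sanity-check that both legacy runtimes are turned off:
echo "$JVM_OPTS" | grep -q "flink-enabled=false" && echo "Flink legacy handling off"
echo "$JVM_OPTS" | grep -q "spark-enabled=false" && echo "Spark legacy handling off"

# Then reuse the variable in the upgrade command:
#   helm upgrade -i cloudflow cloudflow-helm-charts/cloudflow \
#     --set cloudflow_operator.jvm.opts="$JVM_OPTS" ...
```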

Storage requirements

In any Cloudflow application using Spark or Flink, the Kubernetes cluster will need to have a storage class of the ReadWriteMany type installed.
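One way to confirm that a ReadWriteMany-capable storage class is available is to create a small test PersistentVolumeClaim against it. A sketch, assuming the storage class is named nfs (as in the install below) and the cloudflow namespace exists; adjust the names to match your cluster:

```shell
# Generate a test PVC manifest that requests ReadWriteMany access from the
# "nfs" storage class (class name assumed; check `kubectl get sc`).
cat > test-rwx-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rwx
  namespace: cloudflow
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
EOF

# Apply it and verify that the claim reaches the Bound state:
#   kubectl apply -f test-rwx-pvc.yaml
#   kubectl get pvc test-rwx -n cloudflow
```

If the claim stays Pending, the storage class either does not exist or does not support ReadWriteMany.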

The NFS Server Provisioner is an excellent and easy-to-set-up storage option for development environments. For production, use the suggested and supported cloud integrations for Flink and for Spark.

For testing purposes, we suggest using the NFS Server Provisioner, which can be found here: NFS Server Provisioner Helm chart

We’ll install the nfs chart in the cloudflow namespace. If it does not exist yet, create the cloudflow namespace:

kubectl create ns cloudflow

Add the Stable Helm repository and update the local index:

helm repo add stable https://charts.helm.sh/stable
helm repo update

Depending on your Kubernetes configuration, you may want to adjust the values used during the install. Please see the NFS Server Provisioner configuration options.

Install the NFS Server Provisioner using the following command:
helm install nfs-server-provisioner stable/nfs-server-provisioner \
  --namespace cloudflow
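Optionally, you can mark the new nfs storage class as the cluster default so that claims without an explicit storageClassName use it. This step is not required by Cloudflow and uses the standard Kubernetes default-class annotation; the class name nfs is assumed from the install above:

```shell
# Patch payload adding the standard Kubernetes default-class annotation.
cat > nfs-default-patch.json <<'EOF'
{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}
EOF

# Apply it to the storage class created by the chart:
#   kubectl patch storageclass nfs --patch-file nfs-default-patch.json
```

Remember to remove the annotation from any previous default storage class, since only one class should be marked as default.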

The result of the installation is shown below: the NFS Server Provisioner pod is running and the new storage class exists.

$ kubectl get pods -n cloudflow
NAME                       READY   STATUS    RESTARTS   AGE
nfs-server-provisioner-0   1/1     Running   0          25s

$ kubectl get sc
NAME                 PROVISIONER            AGE
nfs                  cloudflow-nfs          29s
standard (default)   2m57s

The documented NFS storage class is very portable and has been verified to work on GKE, EKS, AKS, and OpenShift.

If you wish to remove the NFS Server Provisioner, run:

helm uninstall nfs-server-provisioner --namespace cloudflow

What’s next

The workflow to build and deploy your first Cloudflow application using Cloudflow Contrib’s components matches the experience of using Cloudflow’s Akka streamlets, with a few differences. Here we assume that you start from a correctly configured Cloudflow application, and we describe the steps for using the integrations provided within this repository: