Configuring Shopping Cart
The Shopping Cart example is already configured to run. In this section, we draw your attention to the pieces that you would need to provide for your own application. Preparation for production includes:
- Creating a production configuration file for the Shopping Cart microservice
- Defining a deployment spec for OpenShift
The production configuration of Lagom and the deployment spec are tightly coupled; many items in the two files correspond. Rather than document the two types of configuration separately, this guide interweaves them, making it easier to understand the relationships between them.
The sample app includes the complete configuration file and deployment spec in their final form. This guide includes just snippets to show you the most relevant configuration. In this guide, we'll review the details for the `shopping-cart` microservice. The `inventory` microservice is trivial: it doesn't talk to a database, it doesn't do any clustering, and everything it needs is a subset of what the `shopping-cart` microservice needs. You can refer to the sample app for the `inventory` microservice configuration.
1. View the configuration files
First, let’s look at the production configuration file for the Shopping Cart microservice and a barebones OpenShift deployment spec:
- From the example's `shopping-cart/src/main/resources` directory, open the file named `prod-application.conf`.
- Notice the following contents:
```hocon
include "application"  (1)

play {
  server {
    pidfile.path = "/dev/null"  (2)
  }
}
```
1. This line includes Shopping Cart's main `application.conf` file. Any subsequent configuration will override the configuration from `application.conf`. This pattern allows us to keep our main, non-specific configuration in `application.conf`, while keeping production-specific configuration separate.
2. Setting `play.server.pidfile.path = /dev/null` disables the use of a `pidFile`. A `pidFile` is not necessary for a process running in a container.

- Open the `shopping-cart.yaml` file from the `deploy` directory.
- Note the numbered items, which are explained below:
```yaml
apiVersion: "apps/v1"
kind: Deployment  (1)
metadata:
  name: shopping-cart
spec:
  selector:
    matchLabels:
      app: shopping-cart  (2)
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
        - name: shopping-cart
          image: "shopping-cart:latest"  (3)
          env:
            - name: JAVA_OPTS  (4)
              value: "-Xms256m -Xmx256m -Dconfig.resource=prod-application.conf"
          resources:
            limits:
              memory: 512Mi  (5)
            requests:
              cpu: 0.25  (6)
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: shopping-cart
spec:
  ports:
    - name: http
      port: 80  (7)
      targetPort: 9000
  selector:
    app: shopping-cart
  type: LoadBalancer
```
1. This line defines a Kubernetes `Deployment`, a logical grouping of pods that represent a single microservice using the same template. Deployments support configurable rolling updates, meaning that the deployment will be gradually upgraded, rather than upgrading every pod at once and incurring an outage.
2. We label the pod in the template with `app: shopping-cart`. This must match the deployment's `matchLabels` and also the `selector` in the Kubernetes Service configuration, so that the deployment knows which pods it owns and should maintain and upgrade, and so the Kubernetes Service knows which pods to include in its service resolution.
3. The image we're using is `shopping-cart:latest`. This corresponds to the name and version of the microservice in our build. We discuss how to select an appropriate version number below.
4. We use the `JAVA_OPTS` environment variable to tell Lagom to use the `prod-application.conf` configuration file, rather than the default `application.conf`.
5. We've configured a maximum of 256 MB of memory for the JVM heap, while the container gets 512 MB. The container gets more memory than the JVM heap because the JVM also consumes memory for class file metadata, thread stacks, compiled code, and JVM-specific libraries.
6. We've only requested minimal CPU for the container. This is suitable for a local deployment, but you should increase it for real production deployments. Note that we also haven't set a CPU limit, because the Akka documentation recommends that JVMs not be given a CPU limit.
7. The Kubernetes Service exposes HTTP on port `80` and directs it to port `9000` on the pods. Port `9000` is Lagom's default HTTP port.
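The heap sizing in callout 5 follows a simple rule: the container's memory limit must exceed `-Xmx` by enough headroom for the JVM's non-heap memory. A quick arithmetic sketch of the values used above (the check itself is our own illustration, not part of the sample app):

```shell
# Values taken from the deployment spec: -Xmx256m heap, 512Mi container limit.
heap_mb=256
container_mb=512

# Headroom left for class metadata, thread stacks, compiled code, and native memory.
headroom_mb=$((container_mb - heap_mb))
echo "headroom: ${headroom_mb}Mi"
```

Here the sample app leaves headroom equal to the heap itself; how much you actually need depends on thread count and workload.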
2. Add Play application secret
Play Framework requires a secret key, which is used to sign its session cookie (a JSON Web Token). We'll generate a secret, store it in the Kubernetes Secret API, and then update our configuration and spec to consume it.
- Generate the secret:

  ```shell
  oc create secret generic shopping-cart-application-secret --from-literal=secret="$(openssl rand -base64 48)"
  ```

- Note that the `prod-application.conf` file contains the logic to consume the secret via an environment variable:

  ```hocon
  play {
    http.secret.key = "${APPLICATION_SECRET}"
  }
  ```

- Note that the `shopping-cart.yaml` file also uses the secret:

  ```yaml
  - name: APPLICATION_SECRET
    valueFrom:
      secretKeyRef:
        name: shopping-cart-application-secret
        key: secret
  ```
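As an aside, `openssl rand -base64 48` produces 48 random bytes encoded into exactly 64 Base64 characters (4 output characters per 3 input bytes, with no padding), which is a comfortably long application secret. You can verify the shape of the generated value locally, without any cluster:

```shell
# Generate the same kind of value the oc command above stores as a secret.
secret="$(openssl rand -base64 48)"

# 48 bytes -> 48/3 * 4 = 64 Base64 characters, no '=' padding.
echo "${#secret}"
```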
3. View the configuration to connect to PostgreSQL
In Setting up PostgreSQL we described the requirements the sample app and this guide have for a PostgreSQL service, including the requirement for the service to be called `postgresql` and for there to be a secret called `postgres-shopping-cart`. Now we need to check that Shopping Cart is configured to connect to that service and consume the secret.
- In `prod-application.conf`, find the following configuration:

  ```hocon
  db.default {
    url = ${POSTGRESQL_URL}
    username = ${POSTGRESQL_USERNAME}
    password = ${POSTGRESQL_PASSWORD}
  }

  lagom.persistence.jdbc.create-tables.auto = false
  ```

  This will override the defaults defined for development in `application.conf`. You can see that we've disabled the automatic creation of tables, since we've already created them.

- The `shopping-cart.yaml` file also needs three environment variables: the URL, username, and password. The first is hard-coded into the spec, and the other two we'll consume as secrets. Find the following lines to verify:

  ```yaml
  - name: POSTGRESQL_URL
    value: "jdbc:postgresql://postgresql/shopping_cart"
  - name: POSTGRESQL_USERNAME
    valueFrom:
      secretKeyRef:
        name: postgres-shopping-cart
        key: username
  - name: POSTGRESQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-shopping-cart
        key: password
  ```
If you used a different name for the PostgreSQL database deployment, or for the Kubernetes secrets, you’ll need to update the spec accordingly.
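Note that Kubernetes stores secret values Base64-encoded, and `secretKeyRef` injects the decoded value into the environment variable. A local sketch of that round trip, using a hypothetical username (the real value lives in the `postgres-shopping-cart` secret):

```shell
# Encode the way the Kubernetes Secret API stores the value...
encoded="$(printf 'shopping_cart' | base64)"

# ...and decode the way secretKeyRef presents it to the container.
decoded="$(printf '%s' "$encoded" | base64 -d)"
echo "$decoded"
```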
4. View the configuration to connect to Kafka
In Setting up Kafka, we described the requirements for a Kafka instance. Lagom automatically reads an environment variable called `KAFKA_SERVICE_NAME` if it is present, so there's nothing to add to the Shopping Cart configuration file.

We just need `shopping-cart.yaml` to pass the environment variable, pointing to the Kafka service we provisioned. The service name we configure needs to match the SRV lookup for the Kafka broker. Our Kafka broker defines a TCP port called `clients`, so to look up the IP address or host name and port number, we need to use a service name of `_clients._tcp.strimzi-kafka-brokers`.
Find the following lines in `shopping-cart.yaml`:

```yaml
- name: KAFKA_SERVICE_NAME
  value: "_clients._tcp.strimzi-kafka-brokers"
```
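The SRV-style service name follows the DNS convention `_<port-name>._<protocol>.<service-name>`. As a small sketch (our own illustration, not part of the sample app), splitting the name used above shows the three components the lookup relies on:

```shell
# The SRV name from the spec above.
name="_clients._tcp.strimzi-kafka-brokers"

# _<port-name>._<protocol>.<service-name>
port_name="${name%%.*}"   # first label: the named port on the Kafka broker
rest="${name#*.}"
protocol="${rest%%.*}"    # second label: the protocol
service="${rest#*.}"      # remainder: the Kubernetes service name

echo "$port_name $protocol $service"
```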
5. Configure the service locator
Lagom uses a service locator to look up other microservices. The service locator takes the service name defined in a Lagom service descriptor and translates it into an address to use for communication. In development, when you are running with the `runAll` command, Lagom starts its own development service locator and injects it into each microservice, so you don't have to worry about service lookup until production. Once out of the development environment, you need to provide a service locator yourself.
Akka provides an API called Akka Discovery that includes a number of backends, several of which are compatible with a Kubernetes environment. We're going to use a service locator implementation built on Akka Discovery, and we'll use the DNS implementation of Akka Discovery to discover other services.
- If using Maven, check the project `pom.xml` to make sure the value `scala.binary.version` is defined there and set to `2.12`.

- Check the `pom.xml` in each microservice implementation project, or the declaration for each microservice implementation project in `build.sbt`, to make sure the following dependency exists:

  Java with Maven:

  ```xml
  <dependency>
      <groupId>com.lightbend.lagom</groupId>
      <artifactId>lagom-javadsl-akka-discovery-service-locator_${scala.binary.version}</artifactId>
  </dependency>
  ```

  Java with sbt:

  ```scala
  libraryDependencies += "com.lightbend.lagom" %% "lagom-javadsl-akka-discovery-service-locator" % "1.5.1"
  ```

  Scala with sbt:

  ```scala
  libraryDependencies += "com.lightbend.lagom" %% "lagom-scaladsl-akka-discovery-service-locator" % "1.5.1"
  ```
- In the `prod-application.conf` file, the following configures Akka Discovery to use DNS as the discovery method:

  ```hocon
  akka.discovery.method = akka-dns
  ```

  If you're using Java with Lagom's Guice backend, this completes the service locator configuration. The `lagom-javadsl-akka-discovery` module automatically loads a Guice module that provides the service locator implementation. If you're using Scala, you will need to wire in the service locator yourself.

- For Scala, modify your production application cake to mix in the Akka Discovery service locator components. Open `com/example/shoppingcart/impl/ShoppingCartLoader.scala` from the `shopping-cart/src/main/scala` directory, and modify the `load` method as follows:

  ```scala
  import com.lightbend.lagom.scaladsl.akkadiscovery.AkkaDiscoveryComponents

  override def load(context: LagomApplicationContext): LagomApplication =
    new ShoppingCartApplication(context) with AkkaDiscoveryComponents
  ```
What’s next
With the microservice configuration and the deployment spec ready, we need to add the Shopping Cart configuration necessary for Forming an Akka Cluster.