Deploying and interacting with Shopping Cart
We’ve created the configuration files and configured them to form the Akka Cluster. We’ve built the docker image and pushed it to the OpenShift registry. Now, we’re ready to deploy.
If you haven’t been creating the files as you go for the guide, but rather are relying on the existing files distributed with the sample app, make sure you have performed the easy-to-miss preparation steps covered in the earlier sections.
Deploy the Shopping Cart microservice
It takes just a single oc command to deploy the microservice.
- Deploy as follows:
  oc apply -f deploy/shopping-cart.yaml
- View the pods:
  oc get pods
Remember, it could take a few minutes for the Akka Cluster to form. When it does, you should see something like the following:
shopping-cart-756894d68d-9sltd 0/1 Running 0 9s
shopping-cart-756894d68d-bccdv 0/1 Running 0 9s
shopping-cart-756894d68d-d8h5j 0/1 Running 0 9s
If you encounter an error, delete the deployment and start again.
View cluster startup in the logs
The logs can be very useful for diagnosing cluster startup problems. To analyze them, it is good to understand what messages will be logged when, and what information they should contain.
By default, the logging during startup is fairly noisy. You may wish to raise the logging threshold, for example to warn, to make the logs quieter, but for now the verbosity makes it easier to understand what is happening. You will also see a lot of info messages when features that depend on the cluster start up before a cluster has formed. Typically these messages come from cluster singleton or shard region actors. They will stop soon after the cluster is formed and can be safely ignored.
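If you later want to quiet the startup logging, the log level can be raised in the application configuration. The following is a minimal sketch, assuming your settings live in an application.conf as is usual for Akka applications:

```hocon
# application.conf -- sketch; raising Akka's log level quiets the startup
# logging described above. Valid values include OFF, ERROR, WARNING,
# INFO and DEBUG.
akka {
  loglevel = "WARNING"
}
```

Leave it at the default (info) while following this guide so that the cluster-formation messages below remain visible.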
To view the logs, run:
oc logs -f deployment/shopping-cart
This command shows the logs for the first container in the deployment.
You can also pass the name of a specific pod from the list returned by oc get pods to see the logs for that pod (the actual name is random, so you’ll need to copy it from your own output rather than use the name in this guide):
oc logs -f pods/shopping-cart-756894d68d-9sltd
Below is a curated selection of log messages, with much of the extraneous information (such as timestamps, threads, logger names) removed. The messages with callouts are explained below.
[info] Remoting started; listening on addresses:[akka.tcp://application@172.17.0.12:2552] (1)
[info] Cluster Node [akka.tcp://application@172.17.0.12:2552] - Started up successfully
[info] Bootstrap using `akka.discovery` method: kubernetes-api
[info] Binding Akka Management (HTTP) endpoint to: 172.17.0.12:8558
[info] Using self contact point address: http://172.17.0.12:8558 (2)
[info] Looking up [Lookup(shopping-cart,Some(management),Some(tcp))] (3)
[info] Querying for pods with label selector: [app=shopping-cart]. Namespace: [myproject]. Port: [management] (4)
[info] Located service members based on: [Lookup(shopping-cart,Some(management),Some(tcp))]:
[ResolvedTarget(172-17-0-12.myproject.pod.cluster.local,Some(8558),Some(/172.17.0.12)), ResolvedTarget(172-17-0-11.myproject.pod.cluster.local,Some(8558),Some(/172.17.0.11)), ResolvedTarget(172-17-0-13.myproject.pod.cluster.local,Some(8558),Some(/172.17.0.13))] (5)
[info] Discovered [3] contact points, confirmed [0], which is less than the required [3], retrying (6)
[info] Contact point [akka.tcp://application@172.17.0.13:2552] returned [0] seed-nodes [] (7)
[info] Bootstrap request from 172.17.0.12:47312: Contact Point returning 0 seed-nodes ([TreeSet()]) (8)
[info] Exceeded stable margins without locating seed-nodes, however this node 172.17.0.12:8558 is NOT the lowest address out of the discovered endpoints in this deployment, thus NOT joining self. Expecting node [ResolvedTarget(172-17-0-11.myproject.pod.cluster.local,Some(8558),Some(/172.17.0.11))] to perform the self-join and initiate the cluster. (9)
[info] Contact point [akka.tcp://application@172.17.0.11:2552] returned [1] seed-nodes [akka.tcp://application@172.17.0.11:2552] (10)
[info] Joining [akka.tcp://application@172.17.0.12:2552] to existing cluster [akka.tcp://application@172.17.0.11:2552] (11)
[info] Cluster Node [akka.tcp://application@172.17.0.12:2552] - Welcome from [akka.tcp://application@172.17.0.11:2552] (12)
1 | Init messages, showing that remoting has started on port 2552. The IP address should be the pod’s IP address, from which other pods can access it, while the port number should match the configured remoting port, which defaults to 2552. |
2 | Init messages for Akka Management - the IP address should be the pod’s IP address, while the port number should be the port number you’ve configured for Akka Management to use, which defaults to 8558. |
3 | The Cluster Bootstrap process is starting. The service name should match your configured service name in Cluster Bootstrap, and the port should match your configured port name. This and subsequent messages will be repeated many times as Cluster Bootstrap polls Kubernetes and the other pods to determine what pods have been started, and whether and where an Akka Cluster has been formed. |
4 | This message comes from the Kubernetes API implementation of Akka discovery, the label selector should be one that will return your pods, and the namespace should match your application’s namespace. |
5 | The Kubernetes API has returned three services, including the current one. |
6 | What Cluster Bootstrap has decided to do with the three services. It found three, but has not yet confirmed whether any of them have joined a cluster. It will continue looking them up, and attempting to contact them, until a Cluster has been formed, or can be started. |
7 | This message will appear many times; it’s the result of probing one of the contact points to find out if it has formed a cluster. |
8 | This message will also appear many times; it’s the result of this pod being probed by another pod to find out if it has formed a Cluster. |
9 | This message may or may not appear, depending on how fast your pods are able to start given the amount of resources. It simply informs you that the pod hasn’t located a seed node yet, but it won’t try to form a Cluster itself, since it’s not the pod with the lowest IP address. |
10 | Eventually, this message will change to report that one of the pods has formed an Akka Cluster. |
11 | The pod has decided to join an existing Cluster. |
12 | The pod has joined the Cluster. |
Following these messages, you may still see some warnings that messages can’t be routed. It may still take some time for Cluster singletons and other Akka Cluster features to decide which pod to start up on, but before long, the logs should go quiet as the cluster starts.
The logs above are those of a pod that was not the one to start the cluster. As mentioned earlier, the default strategy that Akka Cluster Bootstrap uses when it starts and finds no existing cluster is to have the pod with the lowest IP address start the cluster. In the example above, that pod has an IP address of 172.17.0.11, and you can see at callout (10) that it eventually returns itself as a seed node, which results in this pod joining it.
If you look in the logs of that pod, you’ll see a message like this:
[info] Initiating new cluster, self-joining [akka.tcp://application@172.17.0.11:2552].
Other nodes are expected to locate this cluster via continued contact-point probing.
This message appears after a timeout called the stable margin, which defaults to 5 seconds. At that point, the pod has seen no changes to the number of deployed pods for 5 seconds, so, given that it has the lowest IP address, it considers it safe to start a new Akka Cluster.
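The stable margin is configurable through Cluster Bootstrap. A minimal sketch of the relevant setting follows; the 5-second value shown is simply the default described above:

```hocon
# application.conf -- sketch; stable-margin controls how long the set of
# discovered contact points must remain unchanged before the
# lowest-address node will self-join and form a new cluster.
akka.management.cluster.bootstrap.contact-point-discovery {
  stable-margin = 5s
}
```

Increasing this value makes cluster formation slower but more conservative when pods are still starting up.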
Things to check if a cluster doesn’t form
If an Akka Cluster is failing to form, carefully check over the logs for the following things:
- Make sure the right IP addresses are in use. If you see localhost or 127.0.0.1 used anywhere, that is generally an indication of a misconfiguration.
- Ensure that the namespace, service name, label selector, port name and protocol all match your deployment spec.
- Ensure that the port numbers match what you’ve configured both in the configuration files and in your deployment spec.
- Ensure that the required contact point number matches your configuration and the number of replicas you have deployed.
- Ensure that pods are successfully polling each other, looking for messages such as Contact point […] returned… for outgoing polls and Bootstrap request from … for incoming polls from other pods.
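Most of the values in this checklist live in the Cluster Bootstrap configuration. The following sketch shows where each one is set; the values are examples matching this guide’s deployment, not defaults:

```hocon
# application.conf -- sketch; each key corresponds to an item in the
# checklist above.
akka.management.cluster.bootstrap.contact-point-discovery {
  service-name = "shopping-cart"   # must match the Kubernetes service name / label selector
  port-name = "management"         # must match the port name in the deployment spec
  required-contact-point-nr = 3    # should match the number of replicas deployed
}
```

If any of these disagree with the deployment spec, discovery will either find no pods or refuse to form a cluster.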
Interact with the Shopping Cart service
Once the Shopping Cart microservice is running and a Cluster has formed, the state should change to ready. You can see this when running oc get pods:
shopping-cart-756894d68d-9sltd 1/1 Running 0 9s
shopping-cart-756894d68d-bccdv 1/1 Running 0 9s
shopping-cart-756894d68d-d8h5j 1/1 Running 0 9s
Now you can interact with the microservice as follows:
- Expose it to the outside world using the oc expose command:
  oc expose svc/shopping-cart
- Find the hostname to access it by getting the routes:
  oc get routes/shopping-cart
  You should see something like this:
  shopping-cart shopping-cart-myproject.192.168.42.246.nip.io shopping-cart http None
  If you’re using Minishift, your hostname will contain a domain name like 192.168.42.246.nip.io, though not exactly this one, since the Minishift IP address is selected at random on startup. Otherwise, it will be the domain name that your OpenShift cluster is running on. For convenience, let’s put the hostname in a shell variable, which will allow you to copy and paste the following commands:
  SHOPPING_CART_HOST=$(oc get route shopping-cart -o jsonpath='{.spec.host}')
- Try a simple GET request on the shopping cart service:
  curl http://$SHOPPING_CART_HOST/shoppingcart/123
  In the response, you should see something like this:
  {"id":"123","items":[],"checkedOut":false}
- Add some items to the shopping cart:
  curl -H "Content-Type: application/json" -X POST -d '{"productId": "456", "quantity": 2}' http://$SHOPPING_CART_HOST/shoppingcart/123
  curl -H "Content-Type: application/json" -X POST -d '{"productId": "789", "quantity": 3}' http://$SHOPPING_CART_HOST/shoppingcart/123
  curl http://$SHOPPING_CART_HOST/shoppingcart/123
- Check out the shopping cart:
  curl http://$SHOPPING_CART_HOST/shoppingcart/123/checkout -X POST
At this point, the shopping cart service should publish a message to Kafka. Let’s deploy the Inventory microservice to consume that message.
Deploy the Inventory microservice
We won’t go into the details of how to configure the inventory service and its deployment descriptor, since it’s a subset of what’s needed for the shopping cart service: it doesn’t need to form a cluster, it doesn’t have a database, it runs as a single node and only needs to connect to Kafka. After you have enabled the build for the Inventory microservice, build and push its image to the Docker registry using the same sbt or Maven command you used for the shopping cart service.
Configure the image lookup, create the application secret, apply the deployment spec, and expose the service:
oc set image-lookup inventory
oc create secret generic inventory-application-secret --from-literal=secret="$(openssl rand -base64 48)"
oc apply -f deploy/inventory.yaml
oc expose svc/inventory
Find the inventory service hostname and set it in a shell variable:
INVENTORY_HOST=$(oc get route inventory -o jsonpath='{.spec.host}')
Use the shell variable to query the inventory of one of the products that we just checked out from the shopping cart:
curl http://$INVENTORY_HOST/inventory/456
Since earlier we added two of product 456 to our shopping cart, and we haven’t added anything to the inventory for that product ID, if the inventory service has successfully consumed the checkout message, we expect the current inventory to be -2.
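The expected value can be sanity-checked with a quick sketch of the arithmetic, using the quantity from the curl commands above:

```shell
# Inventory starts at 0 for a product we never stocked; the checked-out
# cart removed quantity 2 of product 456.
initial=0
checked_out=2
echo $((initial - checked_out))   # prints -2
```

Any other result from the inventory endpoint suggests the checkout message was not consumed (or was consumed more than once).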
Summary
As you can see, deploying a stateful service that is responsive, scalable, and reliable at both the orchestration and application level involves configuring multiple components. A Lightbend Platform subscription offers assistance and support to ensure your success.