In this section you’ll be performing the following setup and deployment steps:
- Create a two-node EKS cluster
- Deploy the application to EKS
- Deploy the server agent to EKS
Let’s start by looking at the script that creates the EKS cluster using the commands below in your Cloud9 terminal:
cd /home/ec2-user/environment/modernization_workshop
cat -n create_eks_cluster.sh
The script calls eksctl, passing the cluster.yaml file to provide the cluster definition.
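As a rough sketch (the workshop’s actual script may do more, such as waiting for the cluster or updating your kubeconfig), an eksctl-based creation script generally reduces to a single command like the one below; the relative path to cluster.yaml is an assumption based on where the file lives in this repo.

```
#!/bin/bash
# Illustrative sketch only -- see create_eks_cluster.sh for the workshop's actual script.
# eksctl reads the cluster definition from cluster.yaml and creates the cluster.
eksctl create cluster -f applications/post-modernization/cluster.yaml
```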
Take a quick look at the cluster.yaml file using the commands below:
cd /home/ec2-user/environment/modernization_workshop/applications/post-modernization
cat -n cluster.yaml
Notice where we’ve defined the name of the cluster, the region and availability zones to deploy to, the instance type for the nodes, and the number of nodes.
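For orientation, a minimal eksctl ClusterConfig covering those same fields looks like the hypothetical excerpt below; the cluster name, instance type, and zone values are assumptions and will differ from the workshop’s actual cluster.yaml.

```
# Hypothetical excerpt -- names and values here are illustrative, not the workshop's.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: modernization-workshop   # cluster name (assumed)
  region: us-east-2              # region to deploy to
availabilityZones: ["us-east-2a", "us-east-2c"]
managedNodeGroups:
  - name: workers
    instanceType: m5.large       # node instance type (assumed)
    desiredCapacity: 2           # two worker nodes
```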
Notice that the default availability zones defined in the cluster.yaml file are a and c (line number 9 in the image below). Run the command below to check that both of those availability zones are supported in your region; you can read more information about checking availability zones here. You may need to edit the cluster.yaml file to change the availability zones to ones supported in your region before you create the EKS cluster.
Example command to check for supported availability zones if your region is us-east-2
aws ec2 describe-availability-zones --region us-east-2
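If you only want the zone names, you can narrow the output with a JMESPath query; --query and --output are standard AWS CLI options.

```
aws ec2 describe-availability-zones --region us-east-2 \
  --query "AvailabilityZones[].ZoneName" --output text
```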
You must have enough available VPCs, Elastic IPs, and NAT Gateways in the region you are working in to successfully create an EKS Cluster with a managed node group of 2 nodes. If you run into a problem during the setup, it is usually associated with insufficient resources or permissions in your AWS account. You can resolve resource constraints by requesting a quota increase for your AWS account.
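To see how close you are to those limits before creating the cluster, you can count the resources you already have in the region; the default quotas are commonly 5 VPCs and 5 Elastic IPs per region, but confirm the numbers for your own account.

```
# Quick checks of current usage in your region.
aws ec2 describe-vpcs --query "length(Vpcs)" --output text
aws ec2 describe-addresses --query "length(Addresses)" --output text
aws ec2 describe-nat-gateways --query "length(NatGateways)" --output text
```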
Go ahead and create the EKS cluster using the commands below. Cluster creation takes ~16 minutes to finish, so please be patient and let the process complete:
cd /home/ec2-user/environment/modernization_workshop
./create_eks_cluster.sh
The image below shows what the output looks like for the EKS cluster creation.
Now that the EKS cluster is up and running, use the commands below to deploy the modernized application to EKS. Deployment and initialization take ~3 minutes to finish:
cd /home/ec2-user/environment/modernization_workshop
./deploy_eks_application.sh
The image below shows what the output looks like for the application deployment and initialization.
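If you’d like to verify the rollout from the terminal as well, standard kubectl commands work; the namespaces used by the workshop manifests aren’t listed here, so the broad queries below are a safe starting point.

```
kubectl get nodes
kubectl get pods --all-namespaces
```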
Use the commands below to deploy the AppDynamics agents to the EKS Kubernetes cluster.
cd /home/ec2-user/environment/modernization_workshop
./deploy_appdynamics_agents.sh
The output should look like the image below.
Though there are several different ways to deploy these agents, we’ve used the AppDynamics Helm chart, which simplifies deploying the agents to Kubernetes clusters.
Below is the list of agents deployed by the Helm chart:
In our deployment we are overriding the Helm chart’s default values.yaml file to provide the specific configuration for our environment.
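For reference, a Helm install that overrides the chart defaults with a custom values file generally looks like the sketch below. The repository URL, chart name, release name, and namespace shown here are assumptions; the workshop’s deploy_appdynamics_agents.sh performs the real install.

```
# Illustrative sketch only -- repo URL, chart name, release name, and namespace are assumptions.
helm repo add appdynamics-charts https://appdynamics.github.io/appdynamics-charts
helm repo update
helm install cluster-agent appdynamics-charts/cluster-agent \
  --namespace appdynamics --create-namespace \
  -f values-ca1.yaml
```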
Use the command below in your Cloud9 terminal to view the template file used to generate the final version of the values.yaml file that is used in our deployment.
cat -n /home/ec2-user/environment/modernization_workshop/applications/post-modernization/clusteragent/values-ca1.yaml
Now let’s take a look at the Kubernetes deployment YAML file for the AccountManagement service to see how it uses the label match from the Auto-Instrumentation section of the values-ca1.yaml file used to deploy the AppDynamics agents. View it with the command below.
cat -n /home/ec2-user/environment/modernization_workshop/applications/post-modernization/application/account-management.yaml
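As a hypothetical illustration of the pattern (the actual account-management.yaml will differ), the pod-template label below is the kind of key/value pair that a labelMatch rule in the Auto-Instrumentation section of values-ca1.yaml selects on; the label key, image, and port are placeholders.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: account-management
spec:
  replicas: 1
  selector:
    matchLabels:
      app: account-management
  template:
    metadata:
      labels:
        app: account-management
        appdInstrument: java     # hypothetical label targeted by the cluster agent's labelMatch rule
    spec:
      containers:
        - name: account-management
          image: account-management:latest   # placeholder image
          ports:
            - containerPort: 8080
```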
The AppDynamics Database Agent is a standalone Java program that collects performance metrics about your database instances and database servers. You can deploy the Database Agent on any machine running Java 1.8 or higher. The machine must have network access to the AppDynamics Controller and the database instance that you want to monitor.
A single database agent can monitor multiple databases at once by using multiple database collector configurations. The workshop setup utility installed the agent on your Cloud9 instance and also created two database collector configurations inside the controller. You can see the running agent process by using the command below:
ps -ef | grep db-agent
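The process command line you see typically resembles the sketch below; the agent name and jar path are illustrative, but note the -Ddbagent.name system property, which is discussed next.

```
# Illustrative only -- a typical way the AppDynamics Database Agent is launched.
java -Ddbagent.name=modernization-db-agent -jar /opt/appdynamics/dbagent/db-agent.jar
```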
In the case of the database agent, JVM command-line properties were used to connect the agent to the controller. Take notice of the “-Ddbagent.name” property. This property is used to link a specific database agent to a database collector configuration. Most of these properties could optionally have been defined in the agent’s configuration file, which can be seen by using the commands below:
cd /opt/appdynamics/dbagent/conf
cat controller-info.xml
The image below shows an example of the database collector configuration to monitor the MySQL database used in the post-modernized EKS application.
Once the database agent is running with a specific name, its name should appear in the drop-down of available agents so you can associate it with the collector you’re creating. Since this is an RDS database, we have used the RDS endpoint as the hostname. We can use the port “3306” for the DB since we have exposed it specifically in the VPC Security Group for the database.
You can see that we have used the “root” username and password to connect to the database and monitor it. The recommended practice is to create a dedicated database user and grant it only the permissions required for monitoring.
You can read more about deploying the database agent and configuring collectors here and here.
Let’s follow Alex and his team as they utilize AppDynamics, whose purpose-built integrations for AWS provide application performance monitoring continuity throughout the entire application modernization lifecycle.