Amazon Connect with the AWS CDK

Amazon Connect is a cloud-based contact center service provided by Amazon Web Services (AWS). It allows businesses to easily set up and manage a contact center in the cloud, providing a flexible and scalable solution for customer support, sales, and service.

With Amazon Connect, businesses can create virtual contact centers that can handle voice, chat, and email interactions with customers. It provides a set of tools and features that enable businesses to create personalized customer experiences, while also improving agent productivity and efficiency.

Key features of Amazon Connect:

  • Interactive Voice Response (IVR): Enables customers to self-serve by navigating through menus and selecting options using their phone’s keypad or voice commands.
  • Automatic Call Distribution (ACD): Routes incoming calls to the appropriate queue or agent based on pre-defined criteria, such as skill set, language, or customer history.
  • Call recording and transcription: Records and transcribes calls for quality assurance and compliance purposes.
  • Real-time and historical analytics: Provides real-time and historical data about call center performance, such as queue metrics, agent activity, and customer feedback.
  • Integration with other AWS services: Integrates with other AWS services, such as Amazon S3, Amazon Kinesis, and Amazon Lex, to provide additional functionality and customization options.

To create routing profiles, phone numbers, and contact flows in Amazon Connect using the AWS CDK, you can use the L1 (Cfn*) constructs provided by the aws-cdk-lib/aws-connect module.

Here’s an example CDK script that creates a routing profile, phone number, and contact flow:

import * as cdk from 'aws-cdk-lib';
import * as connect from 'aws-cdk-lib/aws-connect';

const app = new cdk.App();

const stack = new cdk.Stack(app, 'AmazonConnectStack', {
  env: { account: '<your_aws_account_id>', region: 'us-west-2' },
});

// Define the Amazon Connect instance
const instance = new connect.CfnInstance(stack, 'MyConnectInstance', {
  identityManagementType: 'CONNECT_MANAGED',
  instanceAlias: 'my-connect-instance',
  // Calling capabilities are set through the attributes block
  attributes: {
    inboundCalls: true,
    outboundCalls: false,
  },
  tags: [{ key: 'Name', value: 'My Amazon Connect Instance' }],
});

// Define the routing profile
const routingProfile = new connect.CfnRoutingProfile(stack, 'MyRoutingProfile', {
  name: 'My Routing Profile',
  defaultOutboundQueueId: 'arn:aws:connect:us-west-2:<your_aws_account_id>:instance/<instance_id>/queue/<queue_id>',
  queueConfigs: [{
    priority: 1,
    queueReference: {
      id: 'arn:aws:connect:us-west-2:<your_aws_account_id>:instance/<instance_id>/queue/<queue_id>',
    },
  }],
});

// Define the phone number
const phoneNumber = new connect.CfnPhoneNumber(stack, 'MyPhoneNumber', {
  targetArn: instance.attrArn, // the instance that claims the number
  countryCode: 'US',
  type: 'DID',
  prefix: '+1', // request a number by country code/prefix; AWS assigns the actual number
  description: 'My Phone Number',
  tags: [{ key: 'Name', value: 'My Phone Number' }],
});

// Define the contact flow
const contactFlow = new connect.CfnContactFlow(stack, 'MyContactFlow', {
  instanceArn: instance.attrArn,
  name: 'My Contact Flow',
  type: 'CONTACT_FLOW',
  // Minimal flow content in the Amazon Connect flow language: a single disconnect action
  content: JSON.stringify({
    Version: '2019-10-30',
    StartAction: 'disconnect',
    Actions: [
      { Identifier: 'disconnect', Type: 'DisconnectParticipant', Parameters: {}, Transitions: {} },
    ],
  }),
});

// Output the phone number ARN and contact flow ARN
new cdk.CfnOutput(stack, 'MyPhoneNumberArn', { value: phoneNumber.attrPhoneNumberArn });
new cdk.CfnOutput(stack, 'MyContactFlowArn', { value: contactFlow.attrContactFlowArn });

In this example, we define a routing profile using the CfnRoutingProfile construct, pointing it at the instance ARN and setting the name, description, default outbound queue ARN, a media concurrency entry, and a queue configuration with a priority and queue reference.

Next, we claim a phone number using the CfnPhoneNumber construct, setting the instance as the target ARN along with a country code, type, and prefix. Note that routing profiles are assigned to agents, not to phone numbers; inbound numbers are associated with contact flows.

Finally, we define a contact flow using the CfnContactFlow construct, setting its instance ARN, name, type, and content. We also output the ARNs for the phone number and contact flow using the CfnOutput construct, allowing us to easily access them for use in other parts of our application.

By using AWS CDK to define and create these resources, we can ensure that our Amazon Connect infrastructure is created and configured in a consistent and repeatable way, making it easier to manage and maintain over time.
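
If you want to try this out, the stack above can be synthesized and deployed with the standard CDK CLI commands (assuming Node.js and the CDK toolkit are installed and the target account/region have been bootstrapped):

npm install aws-cdk-lib constructs   # install the CDK libraries used above
npx cdk bootstrap                    # one-time setup per account/region
npx cdk deploy AmazonConnectStack    # synthesize and deploy the stack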

Effective Service Reliability monitoring

Service Reliability monitoring is essential for ensuring that systems operate reliably and that potential issues are identified and addressed before they become critical problems. In today’s increasingly digital and connected world, downtime or poor system performance can have a significant impact on business operations and customer experience. This essay will discuss the importance of effective Service Reliability monitoring and best practices for achieving it.

Service Level Indicators (SLIs) and Objectives (SLOs) are critical metrics that measure the performance, availability, and reliability of systems. These metrics provide a baseline for measuring the effectiveness of Service Reliability monitoring. SLIs represent key performance metrics, such as response time or error rates, while SLOs represent the target or acceptable range for these metrics. Well-defined SLIs and SLOs provide clear and measurable objectives for Service Reliability monitoring, which align with business goals.
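
As a small illustration (example numbers of my own, not from any particular monitoring system), an availability SLI is simply the fraction of good requests, which can then be compared against the SLO:

# Compute an availability SLI from example request counts and compare it to a 99.9% SLO
total_requests=1000000
failed_requests=400
slo=99.9

sli=$(echo "scale=4; (1 - $failed_requests / $total_requests) * 100" | bc -l)
echo "Availability SLI: ${sli}%"   # 99.9600%

if (( $(echo "$sli >= $slo" | bc -l) )); then
  echo "SLO of ${slo}% is met"
else
  echo "SLO of ${slo}% is violated"
fi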

Implementing a monitoring and alerting system is essential for effective Service Reliability monitoring. A monitoring system collects data on SLIs and triggers alerts when these metrics fall outside of acceptable ranges. An effective monitoring system should provide real-time insights and integrations with other tools. Visualizations such as dashboards help present the data in a clear and easy-to-understand way, enabling quick identification of trends and potential issues.

Monitoring the end-user experience is essential to ensuring customer satisfaction. Metrics such as load times, response times, and error rates are essential for understanding the quality of the user experience. A poor user experience can have a significant impact on customer satisfaction and can ultimately lead to a loss of business.

Dependencies on third-party services, APIs, and databases can also impact system reliability. Monitoring the health and performance of these dependencies is critical for identifying issues that may be impacting the system. Logging tools can capture system logs and track system activity, providing additional insights into system performance.

Regular health checks are essential for identifying potential issues before they become critical problems. Health checks should include checking for configuration errors, security vulnerabilities, and other potential issues. Automation can be used to perform these checks and provide alerts when issues are identified, enabling quick response times.

Analyzing and acting on the data collected from Service Reliability monitoring is critical for continuous improvement. Identifying trends and potential issues can enable proactive measures to be taken, such as making changes to the system architecture or implementing new processes. Collaboration across teams, including development, operations, and business stakeholders, is essential for effective Service Reliability monitoring. All stakeholders should have access to monitoring data and be involved in responding to issues.

In conclusion, effective Service Reliability monitoring is essential for ensuring that systems operate reliably and that potential issues are identified and addressed before they become critical problems. Well-defined SLIs and SLOs provide clear objectives for monitoring, while implementing a monitoring and alerting system provides real-time insights and integrations with other tools. Regular health checks, automation, and collaboration across teams are also essential for effective Service Reliability monitoring. By following these best practices, businesses can improve system performance, enhance the user experience, and ensure customer satisfaction.

How to Become a Site Reliability Engineer

Site Reliability Engineering (SRE) is a discipline that combines software engineering and operations to ensure the reliability, availability, and performance of a company’s systems. Here are some steps you can take to become an SRE:

  1. Gain a solid foundation in computer science: To become an SRE, you need to have a strong background in computer science, including programming languages, data structures, algorithms, and networking.
  2. Develop strong software engineering skills: SREs must be skilled in software engineering practices, such as version control, automated testing, and deployment.
  3. Acquire experience in operations: SREs must have a deep understanding of operating systems, networking, databases, and infrastructure management.
  4. Familiarize yourself with cloud technologies: SREs often work with cloud-based technologies, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. It’s important to familiarize yourself with these technologies and understand their capabilities and limitations.
  5. Learn automation tools and technologies: SREs rely heavily on automation to manage and maintain systems at scale. Familiarize yourself with automation tools and technologies such as Puppet, Chef, Ansible, and Terraform.
  6. Understand monitoring and alerting: SREs must be skilled in monitoring and alerting technologies to identify and address potential issues before they become major problems.
  7. Develop excellent communication skills: SREs must be able to communicate effectively with both technical and non-technical stakeholders to explain complex technical concepts in plain language.
  8. Be proactive and able to troubleshoot: SREs must be proactive in identifying potential issues and skilled in troubleshooting when problems do occur.
  9. Be passionate about continuous improvement: SREs must be passionate about improving the reliability, availability, and performance of systems, and must be willing to constantly learn and adapt to new technologies and practices.
  10. Consider pursuing relevant certifications: Certifications such as AWS Certified DevOps Engineer, Google Certified Professional Cloud DevOps Engineer, or Microsoft Certified: Azure DevOps Engineer Expert can demonstrate your expertise in SRE-related technologies and practices.

What is SRE?

Site Reliability Engineering (SRE) is a discipline that incorporates aspects of software engineering and applies them to infrastructure and operations problems. The main goals are to create scalable and highly reliable software systems. In general, an SRE team is responsible for the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their service(s).

SRE Principles

  1. Define Service Levels
    Service Level Indicators (SLIs), Service Level Objectives (SLOs) and Service Level Agreements (SLAs) are the parameters with which the reliability, availability and performance of the service are measured.
  2. Error Budgets
    An error budget is 1 minus the SLO of the service. A 99.9% SLO service has a 0.1% error budget. If our service receives 1,000,000 requests in four weeks, a 99.9% availability SLO gives us a budget of 1,000 errors over that period (see the sketch after this list).
  3. Eliminate Toil
    Toil is the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows. The SRE's job is to eliminate as much toil as possible through automation.
  4. Automate Everything
    For an SRE team, automation provides:
       – Consistency as systems scale
       – A platform for extending to other systems
       – Faster repairs for common problems
       – Faster action than humans
       – Time savings by decoupling the operator from the operation
  5. Support Releases
    Running reliable services requires reliable release processes. Continuously build and deploy, including:
    – Automating check gates
    – A/B deployments and other methods for checking sanity
    SREs are not afraid to roll back a problem release.
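
A minimal shell sketch of the error-budget arithmetic from the example in principle 2:

# Error budget for a 99.9% SLO over 1,000,000 requests in the window
slo=99.9
requests=1000000
budget=$(echo "$requests * (100 - $slo) / 100" | bc)
echo "Allowed errors in the window: $budget"   # prints 1000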

DevOps to SRE Transformation

It is not as easy as you might think, and let me explain why.

There is a common misconception that SRE = DevOps. They are not the same, and SRE covers ground that DevOps does not. For example, DevOps focuses more on deployment velocity and application uptime, while SRE focuses on SLOs and error budgets. A DevOps team typically has no authority over deployments and does not throttle deployment velocity, whereas SRE can stop deploying the application when the error budget is exceeded. In that sense, SRE has real authority in the SDLC and can directly influence how the business owners view reliability.


There is a saying that "class SRE implements DevOps". If we take SRE as one big function:

SRE(DevOps) {
  ci();
  cd();
  mon();
  testing();
  ....
}

I would assume that most organizations practicing DevOps want to move toward SRE to increase their application or product uptime and to focus on SLOs.

DevOps to SRE
1. Not many changes are required, so it is easy to get started on your SRE journey. DevOps is mainly focused on CI/CD, automation, and application monitoring, so a DevOps team can adopt the SRE culture by implementing the additional SRE controls on top of that. It is still a big change, but I would say that is where to start.

2. You can practice and adopt the SRE approach as an experiment in your environment (product) at low cost. As mentioned above, you can start with the existing DevOps controls, and practicing the SRE controls on top of them costs little.

3. Full-stack to SRE journey. Small and medium enterprises have a limited number of DevOps engineers following the full-stack engineering model. In that case, implementing SRE will be a five-step process.

Fullstack to SRE Journey


4. There are no major knowledge or coverage gaps between SRE and DevOps teams. DevOps acts as the glue between the various teams that create solutions, depend on each other, or own distinct pieces of software. So moving from DevOps to SRE is not going to be that challenging.

DevOps to SRE Model

Again, this transformation depends on how teams collaborate with each other and how fast they can adapt to change. There are a few good books available for learning the SRE approach, but in my view no textbook or theory can account for the specific team structure you have. Understanding the current state is the starting point for the SRE journey.

Here are some SRE book links:
Site Reliability Engineering: How Google Runs Production Systems (known as “The SRE Book”)
The Site Reliability Workbook: Practical Ways to Implement SRE (known as “The SRE Workbook”)
Seeking SRE: Conversations About Running Production Systems at Scale

Let me know your thoughts in the comments.

Vicidial install on Ubuntu 18.04

Updated: Oct-18-2021

Note: Below steps only cover standalone server installation on Ubuntu 18.04.

I am using a DigitalOcean VPS (droplet); the installation should be similar on AWS EC2 instances.

Make sure TCP ports 80, 443, 8088, and 8089 and UDP ports 10000-20000 are open in your firewall.
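
If the box is a plain Ubuntu 18.04 server with ufw, the following is one way to open those ports (a sketch; adapt it to your own firewall or cloud security groups):

ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 8088/tcp
ufw allow 8089/tcp
ufw allow 10000:20000/udp
ufw enable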

git clone https://github.com/jaganthoutam/vicidial-install-scripts.git
cd vicidial-install-scripts
chmod +x vicidial-install-ubuntu18.sh
./vicidial-install-ubuntu18.sh

While installing, enter the details below when prompted:

#Go back to the root directory of the ViciDial source
cd ..
perl install.pl

#Follow the setup prompts with appropriate values

#Configuration example


#Populate ISO country codes 

cd /usr/src/astguiclient/trunk/bin && perl ADMIN_area_code_populate.pl

#Update the server IP with the latest IP address (the ViciDial default IP is 10.10.10.15)

perl /usr/src/astguiclient/trunk/bin/ADMIN_update_server_ip.pl --old-server_ip=10.10.10.15 #Say 'Yes' to all
ViciDial processes run inside screen sessions; there should be 9 of them running.
root@vici01:~# screen -ls
root@vici01:~# screen -ls

There are screens on:

 2240.ASTVDremote (03/21/2019 02:16:03 AM) (Detached)

 2237.ASTVDauto (03/21/2019 02:16:03 AM) (Detached)

 2234.ASTlisten (03/21/2019 02:16:02 AM) (Detached)

 2231.ASTsend (03/21/2019 02:16:02 AM) (Detached)

 2228.ASTupdate (03/21/2019 02:16:02 AM) (Detached)

 2025.ASTconf3way (03/21/2019 02:15:02 AM) (Detached)

 2019.ASTVDadapt (03/21/2019 02:15:02 AM) (Detached)

 1826.asterisk (03/21/2019 02:14:51 AM) (Detached)

 1819.astshell20190321021448 (03/21/2019 02:14:49 AM) (Detached)

9 Sockets in /var/run/screen/S-root.
All set. Now you can configure the web interface and logins.
Vicidial Admin login :
http://VICIDIAL_SERVER_IP/vicidial/admin.php
user: 6666
Pass: 1234
Continue on to the initial setup:
#Add a secure password for admin and SIP
#Give super-admin access to user 6666
Users -> 6666 -> change all 0 to 1 in Interface Options.
For WebRTC, run the script below:
chmod +x vicidial-enable-webrtc.sh
./vicidial-enable-webrtc.sh

#Next steps
1. Create Campaign
2. Create SIP Trunk
3. Create Dialplan
4. Upload Leads
5. Register users to a softphone
6. Create agents/users
Note: if WebRTC is enabled, you no longer need a softphone.
And Enjoy…
Note: if you are building a server for more than 30 agents, I recommend bare-metal servers over a VPS.
Please let me know if you have any issues.

ViciDial CentOS 7 Installation Script

OS Version: CentOS Linux release 7.9.2009 (Core)

In all these years I have never seen a quick installation script for ViciDial, so I decided to create one.
There are two scripts in the repo:

vicidial-install-centos7.sh - full ViciDial installation.
vicidial-enable-webrtc.sh - WebRTC with WebPhone configuration.

There are a few prerequisites to complete before running the ViciDial installation scripts.

yum check-update
yum update -y
yum install epel-release -y
yum update -y
yum groupinstall 'Development Tools' -y
yum install git -y
yum install kernel* -y

#Disable SELINUX
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   
reboot

********Reboot is Necessary********

Now you can download the repo and start the ViciDial installation.

git clone https://github.com/jaganthoutam/vicidial-install-scripts.git
cd vicidial-install-scripts
chmod +x vicidial-install-centos7.sh
./vicidial-install-centos7.sh

If you want to run ViciDial with WebRTC and the WebPhone, run the script below:

chmod +x vicidial-enable-webrtc.sh
./vicidial-enable-webrtc.sh

Gitlab & Runner Install with Private CA SSL

This installation method is used on an AWS EKS cluster to install GitLab and GitLab Kubernetes executors.

Tech stack used in this installation:

  • EKS cluster (2 nodes)
  • Controller EC2 instance (to manage the EKS cluster)
  • Helm (GitLab installation)
  • SSL certs (self-signed / SSL provider / private CA)

EKS Cluster:

Creating the EKS cluster is not part of this discussion; please follow the EKS cluster creation doc.

Controller EC2 Instance:

Create an EC2 instance with your preferred AMI; in this case I am using the Amazon Linux AMI. (Make sure the EKS cluster and the controller instance are in the same VPC.) To manage EKS you need kubectl installed on the EC2 instance, and you also need to import the kubeconfig from the cluster. Let's see how to do that.

We will also use Helm to install GitLab.

Install Kubectl:

https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
yum install bash-completion
kubectl version --client

Install Kubectl bash completion:

yum install bash-completion
type _init_completion
source /usr/share/bash-completion/bash_completion
type _init_completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl

Get the EKS cluster list and import the kubeconfig (replace --name with your cluster name):

aws eks update-kubeconfig --name <NAME OF THE EKS CLUSTER >
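
To double-check which clusters are visible and that the imported kubeconfig works (assuming the AWS CLI is already configured with credentials and a region):

aws eks list-clusters          # list clusters visible to your credentials
kubectl get nodes              # confirm kubectl can reach the imported cluster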

Install Helm:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
cp /usr/local/bin/helm /usr/bin/

Install Helm Auto completion:

helm completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
source <(helm completion bash)

Now the EC2 instance is ready for the GitLab installation. Before installing GitLab in EKS, let's create the TLS and generic secrets for GitLab and the GitLab Runner.

You can use any other SSL provider (Let's Encrypt, DigiCert, Comodo, ...). Here I am using self-signed certificates; a quick way to generate them is sketched below.
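
A minimal openssl sketch for generating a self-signed certificate and key for gitlab.gitlabtesting.com (the -addext flag requires OpenSSL 1.1.1 or newer):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout gitlab.gitlabtesting.com.key \
  -out gitlab.gitlabtesting.com.crt \
  -subj "/CN=gitlab.gitlabtesting.com" \
  -addext "subjectAltName=DNS:gitlab.gitlabtesting.com,DNS:*.gitlabtesting.com"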

Create TLS Secret for Gitlab’s Helm chart Global Values:

kubectl create secret tls gitlab-self-signed --cert=gitlab.gitlabtesting.com.crt --key=gitlab.gitlabtesting.com.key

Here we created a secret named gitlab-self-signed with the cert and key. This is the cleaner way to mount the SSL certificate on the ingress.

Create SSL Generic cert Secret:

This will be used for SSL communication between the GitLab server and the GitLab Runner. (IMPORTANT: make sure the filename you mount matches the domain.) In this case my domain name is gitlab.gitlabtesting.com.

kubectl create secret generic gitlabsr-runner-certs-secret-3 --from-file=gitlab.gitlabtesting.com.crt=gitlab.gitlabtesting.com.crt

Create a service account (this will be used by gitlab-runner to perform actions):

vim gitlab-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: kube-system
kubectl apply -f gitlab-serviceaccount.yaml

Now that everything is ready, let's create a values.yaml for the GitLab chart values.

An example file looks like the one below.

certmanager-issuer:
  email: [email protected]
certmanager:
  install: false
gitlab:
  sidekiq:
    resources:
      requests:
        cpu: 50m
        memory: 650M
  webservice:
    ingress:
      tls:
        secretName: gitlab-self-signed #TLS secret we created above
    resources:
      requests:
        memory: 1.5G
gitlab-runner:
  install: false
  runners:
    privileged: true
global:
  hosts:
    domain: gitlabtesting.com
  ingress:
    tls:
      enabled: true
registry:
  enabled: false
  install: false
  ingress:
    tls:
      secretName: gitlab-self-signed #TLS secret we created above
Add the GitLab Helm repo:

helm repo add gitlab https://charts.gitlab.io/

Install GitLab with Helm using the values file we created above:

helm install gitlab gitlab/gitlab -f values.yaml

After about 5 minutes, all the pods should be up. You can check with the command below, and also fetch the root password for the GitLab login:

kubectl get po


#Get Root password:

kubectl get secret gitlab-gitlab-initial-root-password -ojsonpath='{.data.password}' | base64 --decode ; echo

The GitLab installation is now complete. You can access GitLab at https://gitlab.gitlabtesting.com
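
Note that the values file above sets gitlab-runner: install: false, so the runner still has to be installed separately. A minimal sketch using the gitlab-runner chart and the certificate secret created earlier (gitlabUrl, runnerRegistrationToken, and certsSecretName are standard chart values, but verify them against your chart version; the registration token comes from the GitLab admin UI):

helm install gitlab-runner gitlab/gitlab-runner \
  --set gitlabUrl=https://gitlab.gitlabtesting.com/ \
  --set runnerRegistrationToken=<registration_token> \
  --set certsSecretName=gitlabsr-runner-certs-secret-3 \
  --set rbac.create=true \
  --set runners.privileged=true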

To be continued…

Kubernetes pod DNS issues with kube-flannel

kubectl -n kube-system logs coredns-6fdfb45d56-2rsxc


.:53
[INFO] plugin/reload: Running configuration MD5 = 8b19e11d5b2a72fb8e63383b064116a1
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:59204->192.168.0.115:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:51845->192.168.0.116:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:49404->192.168.0.115:53: i/o timeout

Debugging DNS Resolution

kubectl exec -ti dnsutils -- nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'
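
If the dnsutils test pod is not already running, it can be created from the example manifest in the Kubernetes documentation (assuming the node can reach k8s.io) and the lookup retried:

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -ti dnsutils -- nslookup kubernetes.default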

How to solve it?

# Allow forwarded pod traffic through the host firewall
iptables -P FORWARD ACCEPT
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
# Switch to the legacy iptables backend, which flannel/kube-proxy rules expect here
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
# Restart the container runtime and kubelet so the rules are recreated
systemctl restart docker
systemctl restart kubelet

Apply this on the nodes and the master, then check whether DNS resolution works and the CoreDNS logs are clean.

Docker Swarm cluster with Kong and Konga (CentOS 8)

We are using kong-konga-compose to deploy a Kong cluster with Konga.

Preparation: execute the commands below on all nodes.

 systemctl stop firewalld
 systemctl disable firewalld
 systemctl status firewalld
 sed -i s/^SELINUX=.*$/SELINUX=permissive/ /etc/selinux/config
 setenforce 0
 yum update -y
 yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
 sudo curl  https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
 sudo yum makecache
 sudo dnf -y install docker-ce
 sudo dnf -y install  git
 sudo systemctl enable --now docker
 sudo curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
 sudo chmod +x /usr/local/bin/docker-compose && ln -sv /usr/local/bin/docker-compose /usr/bin/docker-compose
 sudo docker-compose --version
 sudo docker --version

On node01:

docker swarm init --advertise-addr MASTERNODEIP


OUTPUT:
  
docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377

On node02:

# Run the join command printed on the master node.
  
  docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377
  

On node01:

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
m55wcdrkq0ckmtovuxwsjvgl1 *   master01            Ready               Active              Leader              19.03.8
e9igg0l9tru83ygoys5qcpjv2     node01              Ready               Active                                  19.03.8
  

git clone https://github.com/jaganthoutam/kong-konga-compose.git
  
cd kong-konga-compose

docker stack deploy --compose-file=docker-compose-swarm.yaml kong

#Check Services
  
docker service ls
ID                  NAME                  MODE                REPLICAS            IMAGE                             PORTS
ahucq8qru2xx        kong_kong             replicated          1/1                 kong:1.4.3                        *:8000-8001->8000-8001/tcp, *:8443->8443/tcp
bhf0tdd36isg        kong_kong-database    replicated          1/1                 postgres:9.6.11-alpine
tij6peru7tb8        kong_kong-migration   replicated          0/1                 kong:1.4.3
n0gaj0l6jyac        kong_konga            replicated          1/1                 pantsel/konga:latest              *:1337->1337/tcp
83q1eybkhvvy        kong_konga-database   replicated          1/1                 mongo:4.1.5
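
Once the replicas are up, a quick sanity check from one of the swarm nodes against Kong's proxy and admin ports and the Konga UI (ports as published in the service list above):

curl -i http://localhost:8001/status   # Kong admin API
curl -i http://localhost:8000          # Kong proxy (no routes yet, so expect a 404 "no Route matched")
curl -i http://localhost:1337          # Konga UI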