Posted on

Vicidial install on Ubuntu 18.04

Updated: Oct-18-2021

Note: the steps below only cover a standalone server installation on Ubuntu 18.04.

I am using a DigitalOcean VPS; the installation should be similar on AWS EC2 instances.

Make sure TCP ports 80, 443, 8088, and 8089 and UDP ports 10000-20000 are open in your firewall.
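A sketch for generating those rules, assuming ufw (Ubuntu's default firewall); on EC2 you would open the same ports in the instance's security group instead:

```shell
# Emit the ufw rules for the ports ViciDial needs, so they can be
# reviewed before being applied.
vicidial_fw_rules() {
  for p in 80/tcp 443/tcp 8088/tcp 8089/tcp 10000:20000/udp; do
    echo "ufw allow $p"
  done
}
vicidial_fw_rules              # print the rules for review
# vicidial_fw_rules | sudo sh  # then apply them
```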

git clone https://github.com/jaganthoutam/vicidial-install-scripts.git
cd vicidial-install-scripts
chmod +x vicidial-install-ubuntu18.sh
./vicidial-install-ubuntu18.sh

While installing, enter the appropriate details when prompted:

#Go back to the root directory of the ViciDial source
cd ..
perl install.pl

#Follow the setup prompts with appropriate values

#Configuration example


#Populate ISO country codes

cd /usr/src/astguiclient/trunk/bin
perl ADMIN_area_code_populate.pl

#Update the server IP with the latest IP address (the ViciDial default IP is 10.10.10.15)

perl /usr/src/astguiclient/trunk/bin/ADMIN_update_server_ip.pl --old-server_ip=10.10.10.15 #Say 'Yes' to all
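Before running the updater you need the server's new IP. A hedged one-liner to find it (Linux-specific; on a VPS with one interface the first address is usually the right one, but verify it):

```shell
# Grab the host's first IP address (double-check it is the one you want).
NEW_IP=$(hostname -I | awk '{print $1}')
echo "New server IP: $NEW_IP"
```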
ViciDial processes run inside screen sessions; there should be 9 of them running:

screen -ls

There are screens on:

 2240.ASTVDremote (03/21/2019 02:16:03 AM) (Detached)

 2237.ASTVDauto (03/21/2019 02:16:03 AM) (Detached)

 2234.ASTlisten (03/21/2019 02:16:02 AM) (Detached)

 2231.ASTsend (03/21/2019 02:16:02 AM) (Detached)

 2228.ASTupdate (03/21/2019 02:16:02 AM) (Detached)

 2025.ASTconf3way (03/21/2019 02:15:02 AM) (Detached)

 2019.ASTVDadapt (03/21/2019 02:15:02 AM) (Detached)

 1826.asterisk (03/21/2019 02:14:51 AM) (Detached)

 1819.astshell20190321021448 (03/21/2019 02:14:49 AM) (Detached)

9 Sockets in /var/run/screen/S-root.
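A small health check (a sketch, not part of the installer) that counts the detached screen sessions and warns when fewer than the expected 9 are up:

```shell
# Count detached screen sessions and warn when fewer than expected.
expected=9
running=$(screen -ls 2>/dev/null | grep -c '(Detached)')
if [ "$running" -lt "$expected" ]; then
  echo "WARNING: only $running of $expected ViciDial screens are running"
else
  echo "All $running ViciDial screens are up"
fi
```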
All set. Now you can configure the web interface and logins.

ViciDial admin login:
http://VICIDIAL_SERVER_IP/vicidial/admin.php
user: 6666
pass: 1234

Continue on to the initial setup:
#Set a secure password for the admin and SIP accounts
#Give super admin access to the 6666 user
Users → 6666 → change all 0 values to 1 in Interface Options.
For WebRTC, run the script below:
chmod +x vicidial-enable-webrtc.sh
./vicidial-enable-webrtc.sh

#Next steps
1. Create Campaign
2. Create SIP Trunk
3. Create Dialplan
4. Upload Leads
5. Register users to a softphone
6. Create Agents/users
Note: if WebRTC is enabled, you no longer need a softphone.

And enjoy…

Note: if you are building a server for more than 30 agents, I recommend bare-metal servers over a VPS.
Please let me know if you have any issues.

ViciDial CentOS 7 Installation Script

OS Version: CentOS Linux release 7.9.2009 (Core)

For many years I never came across a quick installation script for ViciDial, so I decided to create one.
There are two scripts in the repo:

vicidial-install-centos7.sh: the full ViciDial installation.
vicidial-enable-webrtc.sh: WebRTC with WebPhone configuration.

There are a few prerequisites before running the ViciDial installation scripts.

yum check-update
yum update -y
yum install epel-release -y
yum update -y
yum groupinstall 'Development Tools' -y
yum install git -y
yum install kernel* -y

#Disable SELINUX
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   
reboot

********Reboot is Necessary********
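Before rebooting it is worth confirming the substitution took; the same sed expression can be sanity-checked on a sample line, and the live file inspected:

```shell
# Sanity-check the sed expression on a sample line; it should print
# "SELINUX=disabled".
echo 'SELINUX=enforcing' | sed 's/SELINUX=enforcing/SELINUX=disabled/g'
# Then confirm the real file was changed (after the reboot, `getenforce`
# should print "Disabled"):
# grep '^SELINUX=' /etc/selinux/config
```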

Now you can download the repo and start the ViciDial installation.

git clone https://github.com/jaganthoutam/vicidial-install-scripts.git
cd vicidial-install-scripts
chmod +x vicidial-install-centos7.sh
./vicidial-install-centos7.sh

If you want to run ViciDial with WebRTC and the WebPhone, run the script below.

chmod +x vicidial-enable-webrtc.sh
./vicidial-enable-webrtc.sh

Gitlab & Runner Install with Private CA SSL

This installation method is used on an AWS EKS cluster to install GitLab and GitLab Kubernetes executors.

Tech stack used in this installation:

  • EKS cluster (2 nodes)
  • Controller EC2 instance (to manage the EKS cluster)
  • Helm (GitLab installation)
  • SSL certs (self-signed / SSL provider / private CA)

EKS Cluster:

Creating the EKS cluster is not part of this discussion; please follow the EKS cluster creation doc.

Controller EC2 Instance:

Create an EC2 instance with your preferred AMI; in this case I am using the Amazon Linux AMI. (Make sure the EKS cluster and the controller are in the same VPC.) In order to manage EKS you need kubectl installed on the EC2 instance, and you also need to import the kubeconfig from the cluster. Let's see how to do that.

We will also be using Helm to install GitLab.

Install Kubectl:

https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
yum install bash-completion
kubectl version --client
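The export above only lasts for the current shell. To keep kubectl on your PATH across logins (as the AWS guide also suggests), append it to your profile:

```shell
# Persist the PATH change for future shells.
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
grep 'HOME/bin' ~/.bashrc
```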

Install Kubectl bash completion:

yum install bash-completion
type _init_completion
source /usr/share/bash-completion/bash_completion
type _init_completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl

Get EKS Cluster list and Import kubeconfig:
(replace the --name value with your cluster name)

aws eks update-kubeconfig --name <NAME OF THE EKS CLUSTER >

Install Helm:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
cp /usr/local/bin/helm /usr/bin/

Install Helm Auto completion:

helm completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
source <(helm completion bash)

Now the EC2 instance is ready for the GitLab installation. Before installing GitLab on EKS, let's create the TLS and generic secrets for GitLab and gitlab-runner.

You can use any other SSL provider (Let's Encrypt, DigiCert, Comodo, …). Here I am using self-signed certificates; you can generate them with this link.
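For completeness, a minimal sketch of generating that self-signed pair with openssl (gitlab.gitlabtesting.com is the example domain used throughout this post; a production setup would also want subjectAltName entries):

```shell
# Generate a self-signed cert/key pair for the example domain.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout gitlab.gitlabtesting.com.key \
  -out gitlab.gitlabtesting.com.crt \
  -subj "/CN=gitlab.gitlabtesting.com"
# The cert's subject should show the domain:
openssl x509 -noout -subject -in gitlab.gitlabtesting.com.crt
```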

Create TLS Secret for Gitlab’s Helm chart Global Values:

kubectl create secret tls gitlab-self-signed --cert=gitlab.gitlabtesting.com.crt --key=gitlab.gitlabtesting.com.key

Here we created a secret named gitlab-self-signed from the cert and key. This is the cleaner way of mounting the SSL certificate into the Ingress.

Create SSL Generic cert Secret:

This will be used for communication between the GitLab server and gitlab-runner via SSL. (IMPORTANT: make sure the filename you mount matches the domain.) In this case my domain name is gitlab.gitlabtesting.com.

kubectl create secret generic gitlabsr-runner-certs-secret-3 --from-file=gitlab.gitlabtesting.com.crt=gitlab.gitlabtesting.com.crt

Create a service account (this will be used by gitlab-runner to perform actions):

vim gitlab-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: kube-system
kubectl apply -f gitlab-serviceaccount.yaml

Now that everything is ready, let's create a values.yaml with the GitLab values.

An example file is shown below.

certmanager-issuer:
  email: [email protected]
certmanager:
  install: false
gitlab:
  sidekiq:
    resources:
      requests:
        cpu: 50m
        memory: 650M
  webservice:
    ingress:
      tls:
        secretName: gitlab-self-signed #TLS secret we created above
    resources:
      requests:
        memory: 1.5G
gitlab-runner:
  install: false
  runners:
    privileged: true
global:
  hosts:
    domain: gitlabtesting.com
  ingress:
    tls:
      enabled: true
registry:
  enabled: false
  install: false
  ingress:
    tls:
      secretName: gitlab-self-signed #TLS secret we created above

Add the GitLab Helm repo:

helm repo add gitlab https://charts.gitlab.io/

Install GitLab with Helm using the values file we created above:

helm install gitlab gitlab/gitlab -f values.yaml

After about 5 minutes all the pods will be up. You can check with the command below, and also fetch the root password for the GitLab login:

kubectl get po


#Get Root password:

kubectl get secret gitlab-gitlab-initial-root-password -ojsonpath='{.data.password}' | base64 --decode ; echo

The GitLab installation is now complete. You can access GitLab at https://gitlab.gitlabtesting.com

To be continued…


Kubernetes pod DNS issues with kube-flannel

kubectl -n kube-system logs coredns-6fdfb45d56-2rsxc


.:53
[INFO] plugin/reload: Running configuration MD5 = 8b19e11d5b2a72fb8e63383b064116a1
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:59204->192.168.0.115:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:51845->192.168.0.116:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:49404->192.168.0.115:53: i/o timeout

Debugging DNS resolution (using a dnsutils pod, as in the Kubernetes DNS debugging docs):

kubectl exec -ti dnsutils -- nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'

How to solve it? The usual cause is iptables running in nft mode, which flannel does not handle well; switching the node to iptables-legacy and resetting the default chain policies fixes it:

iptables -P FORWARD ACCEPT
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
systemctl restart docker
systemctl restart kubelet
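After restarting, rather than checking once, a small retry helper can confirm DNS has recovered (a sketch; the dnsutils pod is the one from the debugging step above):

```shell
# retry <attempts> <delay-seconds> <command...>: re-run a command until it
# succeeds or the attempts run out.
retry() {
  n=$1; d=$2; shift 2
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" && return 0
    sleep "$d"
    i=$((i + 1))
  done
  return 1
}
# Usage after restarting docker/kubelet:
# retry 5 5 kubectl exec -ti dnsutils -- nslookup kubernetes.default
```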

Apply this on all nodes and the master, then check whether DNS resolution works.


Docker Swarm Cluster Kong and Konga Cluster (Centos/8)

We are using kong-konga-compose to deploy a Kong cluster with Konga.

Preparation: execute the commands below on all nodes.

 systemctl stop firewalld
 systemctl disable firewalld
 systemctl status firewalld
 sed -i s/^SELINUX=.*$/SELINUX=permissive/ /etc/selinux/config
 setenforce 0
 yum update -y
 yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
 sudo curl  https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
 sudo yum makecache
 sudo dnf -y install docker-ce
 sudo dnf -y install  git
 sudo systemctl enable --now docker
 sudo curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
 sudo chmod +x /usr/local/bin/docker-compose && ln -sv /usr/local/bin/docker-compose /usr/bin/docker-compose
 sudo docker-compose --version
 sudo docker --version

On node01:

docker swarm init --advertise-addr MASTERNODEIP


OUTPUT:
  
docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377

On node02:

# The command you find in MASTER NODE.
  
  docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377
  

On node01:

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
m55wcdrkq0ckmtovuxwsjvgl1 *   master01            Ready               Active              Leader              19.03.8
e9igg0l9tru83ygoys5qcpjv2     node01              Ready               Active                                  19.03.8
  

git clone https://github.com/jaganthoutam/kong-konga-compose.git
  
cd kong-konga-compose

docker stack deploy --compose-file=docker-compose-swarm.yaml kong

#Check Services
  
docker service ls
ID                  NAME                  MODE                REPLICAS            IMAGE                             PORTS
ahucq8qru2xx        kong_kong             replicated          1/1                 kong:1.4.3                        *:8000-8001->8000-8001/tcp, *:8443->8443/tcp
bhf0tdd36isg        kong_kong-database    replicated          1/1                 postgres:9.6.11-alpine
tij6peru7tb8        kong_kong-migration   replicated          0/1                 kong:1.4.3
n0gaj0l6jyac        kong_konga            replicated          1/1                 pantsel/konga:latest              *:1337->1337/tcp
83q1eybkhvvy        kong_konga-database   replicated          1/1                 mongo:4.1.5   

ElasticSearch Filebeat custom index

Custom Template and Index pattern setup.

    setup.ilm.enabled: false               #Set ilm to False 
    setup.template.name: "k8s-dev"         #Create Custom Template
    setup.template.pattern: "k8s-dev-*"    #Create Custom Template pattern
    setup.template.settings:
      index.number_of_shards: 1      #Set number_of_shards to 1 ONLY if you have a one-node ES cluster
      index.number_of_replicas: 0    #Set number_of_replicas to 0 ONLY if you have a one-node ES cluster
    output.elasticsearch:
       hosts: ['192.168.1.142:9200']
       index: "k8s-dev-%{+yyyy.MM.dd}" #Set k8s-dev-2020.01.01 as Index name
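With that pattern, each day gets its own index. A quick sketch to compute today's index name and (commented out) query Elasticsearch for it; 192.168.1.142:9200 is the ES host used in this post:

```shell
# Compute today's index name as produced by "k8s-dev-%{+yyyy.MM.dd}".
idx="k8s-dev-$(date +%Y.%m.%d)"
echo "$idx"
# curl -s "http://192.168.1.142:9200/_cat/indices/${idx}?v"
```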

filebeat-kubernetes.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    setup.ilm.enabled: false
    setup.template.name: "k8s-dev"
    setup.template.pattern: "k8s-dev-*"
    setup.template.settings:
      index.number_of_shards: 1
      index.number_of_replicas: 0

    output.elasticsearch:
       hosts: ['192.168.1.142:9200']
       index: "k8s-dev-%{+yyyy.MM.dd}"

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "192.168.1.142"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

Index will appear in kibana:

Create index pattern in Kibana:

Finally:…


Clone all git repos from Organisation

With the command below you can clone all of an organisation's git repos at once (replace USERNAME:TOKEN_HERE with your GitHub credentials and ottonova with your organisation name):

for i in `curl -u USERNAME:TOKEN_HERE -s "https://api.github.com/orgs/ottonova/repos?per_page=200" |grep ssh_url | cut -d ':' -f 2-3|tr -d '",'`; do git clone $i; done
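The grep/cut pipeline in that one-liner can be wrapped in a function and sanity-checked against a sample line of the API's JSON (the ssh_url field name comes from the GitHub REST API response):

```shell
# Pull the ssh_url values out of the GitHub API's JSON response.
extract_ssh_urls() {
  grep '"ssh_url"' | cut -d '"' -f 4
}
# Sanity check on a sample line from the API response:
echo '    "ssh_url": "git@github.com:ottonova/repo1.git",' | extract_ssh_urls
```

Note that the API caps per_page at 100, so organisations with more repos than that need to walk the page parameter as well.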


Mac cleanup

#user cache file
echo "cleaning user cache file from ~/Library/Caches"
rm -rf ~/Library/Caches/*
echo "done cleaning from ~/Library/Caches"
#user logs
echo "cleaning user log file from ~/Library/logs"
rm -rf ~/Library/logs/*
echo "done cleaning from ~/Library/logs"
#user preference logs (rm left commented out on purpose: wiping preferences resets app settings)
echo "cleaning user preference logs"
#rm -rf ~/Library/Preferences/*
echo "done cleaning from ~/Library/Preferences"
#system caches
echo "cleaning system caches"
sudo rm -rf /Library/Caches/*
echo "done cleaning system cache"
#system logs
echo "cleaning system logs from /Library/logs"
sudo rm -rf /Library/logs/*
echo "done cleaning from /Library/logs"
echo "cleaning system logs from /var/log"
sudo rm -rf /var/log/*
echo "done cleaning from /var/log"
echo "cleaning from /private/var/folders"
sudo rm -rf /private/var/folders/*
echo "done cleaning from /private/var/folders"
#ios photo caches
echo "cleaning ios photo caches"
rm -rf ~/Pictures/iPhoto\ Library/iPod\ Photo\ Cache/*
echo "done cleaning from ~/Pictures/iPhoto Library/iPod Photo Cache"
#application caches
echo "cleaning application caches"
for x in $(ls ~/Library/Containers/) 
do 
    echo "cleaning ~/Library/Containers/$x/Data/Library/Caches/"
    rm -rf ~/Library/Containers/$x/Data/Library/Caches/*
    echo "done cleaning ~/Library/Containers/$x/Data/Library/Caches"
done
echo "done cleaning for application caches"
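Before running the deletions, a read-only sketch to see how much space each target actually uses (paths as in the script above):

```shell
# Report disk usage for a directory, but only if it exists (read-only).
dir_usage() {
  [ -d "$1" ] && du -sh "$1" 2>/dev/null
}
for d in ~/Library/Caches ~/Library/logs /Library/Caches /Library/logs /var/log; do
  dir_usage "$d"
done
```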

Docker alias

alias dm='docker-machine'
alias dmx='docker-machine ssh'
alias dk='docker'
alias dki='docker images'
alias dks='docker service'
alias dkrm='docker rm'
alias dkl='docker logs'
alias dklf='docker logs -f'
alias dkflush='docker rm `docker ps --no-trunc -aq`'
alias dkflush2='docker rmi $(docker images --filter "dangling=true" -q --no-trunc)'
alias dkt='docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"'
alias dkps="docker ps --format '{{.ID}} ~ {{.Names}} ~ {{.Status}} ~ {{.Image}}'"


kubectl commands from slack

How cool is it to run kubectl commands from a Slack channel… 🙂
This is not fully developed yet, but it comes in handy for dev and staging environments.
Let's begin.

Requirements:
1. Create a new Slack bot.
2. Create a Slack channel (not private) and get the channel ID.
Use https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 to get the channel ID.
3. Add the Slack bot to the channel you created.
4. Then use the file below to create the K8s deployment.

---
#create service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubebot-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubebot-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
#  ClusterRole reference. NOTE: cluster-admin is very powerful
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubebot-user
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubebot
  labels:
    component: kubebot
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kubebot
  template:
    metadata:
      labels:
        component: kubebot
    spec:
      serviceAccountName: kubebot-user
      containers:
      - name: kubebot
        image: harbur/kubebot:0.1.0
        imagePullPolicy: Always
        env:
        # Create a secret with your slack bot token and reference it here
        - name: KUBEBOT_SLACK_TOKEN
          value: TOKENID_THAT_WAS_CREATED
        # Alternatively, use this instead if you don't need to put channel ids in a secret; use a space as a separator
        # You get this from  https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 
        - name: KUBEBOT_SLACK_CHANNELS_IDS
          value: 'AABBCCDD'          
        # Specify slack admins that kubebot should listen to; use a space as a separator
        - name: KUBEBOT_SLACK_ADMINS_NICKNAMES
          value: 'jag test someone'
        # Specify valid kubectl commands that kubebot should support; use a space as a separator
        - name: KUBEBOT_SLACK_VALID_COMMANDS
          value: "get describe logs explain"
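Rather than inlining the token in the Deployment, it can live in a Secret (a hedged sketch; the secret name kubebot-slack is hypothetical, and KUBEBOT_SLACK_TOKEN would then be set via valueFrom/secretKeyRef):

```shell
# Create the secret (commented out here; run it against your cluster):
# kubectl create secret generic kubebot-slack \
#   --from-literal=token=TOKENID_THAT_WAS_CREATED
# Secrets store their data base64-encoded; the same encoding by hand:
tok='TOKENID_THAT_WAS_CREATED'
enc=$(printf '%s' "$tok" | base64)
echo "$enc"
```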

Let me know if this helps.
