Kops AWS infra Automation


This example project helps you create a kops cluster across multiple AZs, limited to a single region.

Assume that you have the AWS CLI installed and an IAM user configured.

The IAM user to create the Kubernetes cluster must have the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess


  1. Terraform (note: you need to install version 0.11.7)
  2. kops (we are using kops 1.8.1 for now)

For Mac

brew update && brew install kops


curl -Lo kops
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

For Linux

wget -O kops
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

  3. Install kubectl

For Mac

curl -LO
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

For Ubuntu

curl -LO
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Getting started

Replace with your public zone name

vim example/
variable "domain_name" {
  default = ""
}

Edit cluster details: node_asg_desired, instance_key_name, etc.

vim example/

**** Edit the module according to your infra name ****

module "staging" {
  source                    = "../module"
  kubernetes_version        = "1.8.11"
  sg_allow_ssh              = "${}"
  sg_allow_http_s           = "${}"
  cluster_name              = "staging"
  cluster_fqdn              = "staging.${}"
  route53_zone_id           = "${}"
  kops_s3_bucket_arn        = "${aws_s3_bucket.kops.arn}"
  kops_s3_bucket_id         = "${}"
  vpc_id                    = "${}"
  instance_key_name         = "${var.key_name}"
  node_asg_desired          = 3
  node_asg_min              = 3
  node_asg_max              = 3
  master_instance_type      = "t2.medium"
  node_instance_type        = "m4.xlarge"
  internet_gateway_id       = "${}"
  public_subnet_cidr_blocks = ["${local.staging_public_subnet_cidr_blocks}"]
  kops_dns_mode             = "private"
}

If you want to force a single master (useful when a master per AZ is not required, or when running in a region with only 2 AZs):

vim module/ 

**** Set force_single_master to true if you want a single master ****

variable "force_single_master" {
   default = true
}

All good now. You can run terraform plan to see if you get any errors. If everything is clean, just run "terraform apply" to build the cluster.

cd example
terraform plan

(The output will look something like this:)
  + module.staging.null_resource.delete_tf_files
      id:                                                 <computed>

Plan: 6 to add, 0 to change, 1 to destroy.


MASTER_ELB_CLUSTER1=$(terraform state show module.staging.aws_elb.master | grep dns_name | cut -f2 -d= | xargs)
kubectl config set-cluster --insecure-skip-tls-verify=true --server=https://$MASTER_ELB_CLUSTER1
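The `grep | cut | xargs` pipeline above just extracts and trims the `dns_name` value from the `terraform state show` output. A self-contained sketch of the same parsing, using a hypothetical sample line:

```shell
# Hypothetical line as it might appear in `terraform state show` output
sample='dns_name = internal-master-123.us-east-1.elb.amazonaws.com'

# Same pipeline as above: keep the dns_name line, take the part after '=',
# and let xargs trim the surrounding whitespace
elb=$(printf '%s\n' "$sample" | grep dns_name | cut -f2 -d= | xargs)
echo "$elb"
```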

And then test:

kubectl cluster-info
Kubernetes master is running at
KubeDNS is running at

kubectl get nodes
NAME    STATUS    ROLES     AGE       VERSION
        Ready     master    9m        v1.8.11
        Ready     node      3m        v1.8.11
        Ready     node      27s       v1.8.11
        Ready     node      2m        v1.8.11

Credits: Original code is taken from here.



Setting up the Salt-Master

Salt servers come in two types, master and minion. The master server hosts all of the policies and configurations and pushes them to the various minions. The minions are the infrastructure that you want managed. All of the pushed information is communicated via ZeroMQ; this communication is also encrypted, and minions must be authenticated on the master before receiving any commands or configurations.

Installing on Ubuntu

I will be showing you how to install Salt on Ubuntu; however, if you want to install Salt on other distributions you can find instructions and a bootstrap script at

Installing Python Software Properties

Saltstack maintains a PPA (Personal Package Archive) that can be added as an apt repository. On my systems before I could add a PPA Repository I had to install the python-software-properties package.

root@saltmaster:~# apt-get --yes -q install python-software-properties

Adding the SaltStack PPA Repository

root@saltmaster:~# add-apt-repository ppa:saltstack/salt
You are about to add the following PPA to your system:
 Salt, the remote execution and configuration management tool.
 More info:
Press [ENTER] to continue or ctrl-c to cancel adding it

Make sure that you press [ENTER] otherwise the repository will not be added.

Update Apt’s Package Indexes

After adding the repository make sure that you update Apt’s package index.

root@saltmaster:~# apt-get --yes -q update

Install The Salt-Master package

root@saltmaster:~# apt-get --yes -q install salt-master

Configuring The Salt Master

Now that Salt has been installed, we will configure the master server. Unlike many other tools the configuration of SaltStack is pretty simple. This article is going to show a very simple “get you up and running” configuration. I will make sure to cover more advanced configurations in later articles.

In order to configure the salt master we will need to edit the /etc/salt/master configuration file.

root@saltmaster:~# vi /etc/salt/master

Changing the bind interface

Salt is not necessarily push only; the salt minions can also send requests to the salt master. To ensure that this works, we need to tell salt which network interface to listen on.


# The address of the interface to bind to
#interface: 0.0.0.0

Replace with:

# The address of the interface to bind to
interface: youripaddress

Setting the states file_roots directory

All of salt’s policies or rather salt “states” need to live somewhere. The file_roots directory is the location on disk for these states. For this article we will place everything into /salt/states/base.


#file_roots:
#  base:
#    - /srv/salt

Replace with:

file_roots:
  base:
    - /salt/states/base

Not all states are the same; sometimes you may want a package configured one way in development and another way in production. While we won't cover it in this article, you can do this by using salt's "environments" configuration.

Each salt master must have a base environment; this houses the top.sls file, which defines which salt states apply to specific minions. The base environment is also used in general for states that apply to all systems.

For example, I love the screen command and want it installed on every machine I manage. To do this I add the screen state into the base environment.
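A state that simple might look like this (a minimal sketch; the path assumes the file_roots layout above, i.e. /salt/states/base/screen/init.sls):

```yaml
# /salt/states/base/screen/init.sls
screen:
  pkg:
    - installed
```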

To add additional environments, simply append them to the file_roots configuration.

Adding the development environment:

file_roots:
  base:
    - /salt/states/base
  dev:
    - /salt/states/dev

Setting the pillar_roots

While this article is not going to cover pillars (I will add more articles for salt don’t worry) I highly suggest configuring the pillar_roots directories as well. I have found that pillars are extremely useful for reusing state configuration and reducing the amount of unique state configurations.


#pillar_roots:
#  base:
#    - /srv/pillar

Replace with:

pillar_roots:
  base:
    - /salt/pillars/base

Pillars also understand environments; the method for adding additional environments is the same as it was for file_roots.
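For example, adding a dev environment to pillar_roots mirrors the file_roots change (a sketch, assuming a /salt/pillars/dev directory):

```yaml
pillar_roots:
  base:
    - /salt/pillars/base
  dev:
    - /salt/pillars/dev
```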

Restart the salt-master service

That’s all of the editing that we need to perform for a basic salt installation. For the settings to take effect we will need to restart the salt-master service.

root@saltmaster:~# service salt-master restart
 salt-master stop/waiting
 salt-master start/running, process 1036

Creating the salt states and pillars directories

Before we move on to the salt minion’s installation we should create the file_roots and pillar_roots directories that we specified in /etc/salt/master.

root@saltmaster:~# mkdir -p /salt/states/base /salt/pillars/base

Setting up the Salt-Minion

Now that the salt master is setup and configured we will need to install the salt-minion package on all of the systems we want salt to manage for us. Theoretically once these minions have been connected to the salt master, you could get away with never logging into these systems again.

Installing on Ubuntu

The below process can be repeated on as many minions as needed.

Installing Python Software Properties

root@saltminion:~# apt-get --yes -q install python-software-properties

Adding the SaltStack PPA Repository

root@saltminion:~# add-apt-repository ppa:saltstack/salt
You are about to add the following PPA to your system:
 Salt, the remote execution and configuration management tool.
 More info:
Press [ENTER] to continue or ctrl-c to cancel adding it

Make sure that you press [ENTER] otherwise the repository will not be added.

Update Apt’s Package Indexes

After adding the repository make sure that you update Apt’s package index.

root@saltminion:~# apt-get --yes -q update

Install The Salt-Minion package

root@saltminion:~# apt-get --yes -q install salt-minion

Configuring the Salt-Minion

Configuring the salt minion is even easier than the salt master. In simple implementations like the one we are performing today all we need to do is set the salt master IP address.

root@saltminion:~# vi /etc/salt/minion

Changing the Salt-Master target IP


#master: salt

Replace with:

master: yourmasterip



By default the salt-minion package will try to resolve the "salt" hostname. A simple trick is to set the "salt" hostname to resolve to your salt-master's IP in the /etc/hosts file, and allow the salt-master to push a corrected /etc/salt/minion configuration file. This trick lets you set up a salt minion server without having to edit the minion configuration file.
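The trick might look like this on the minion (a sketch; 192.0.2.10 is a placeholder for your salt-master's IP):

```
# /etc/hosts on the minion
192.0.2.10    salt
```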

Restarting the salt-minion service

In order for the configuration changes to take effect, we must restart the salt-minion service.

root@saltminion:~# service salt-minion restart
salt-minion stop/waiting
salt-minion start/running, process 834

Accepting the Minion's key on the Salt-Master

Once the salt-minion service is restarted, the minion will start trying to communicate with the master. Before that can happen, we must accept the minion's key on the master.

On the salt master, list the salt keys

We can see which keys are pending acceptance by running the salt-key command.

root@saltmaster:~# salt-key -L
Accepted Keys:
Unaccepted Keys:
saltminion
Rejected Keys:

Accept the saltminion's key

We can accept the saltminion's key in two ways: via the minion's specific name, or by accepting all pending keys.

Accept by name:

root@saltmaster:~# salt-key -a saltminion
The following keys are going to be accepted:
Unaccepted Keys:
saltminion
Proceed? [n/Y] Y
Key for minion saltminion accepted.

Accept all keys:

root@saltmaster:~# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
saltminion
Proceed? [n/Y] Y
Key for minion saltminion accepted.

Installing and Configuring nginx with SaltStack

While the above information gets you started with Salt, it doesn’t explain how to use Salt to install a package. The below steps will outline how to install a package and deploy configuration files using Salt.

Creating the nginx state

SaltStack has policies just like any other configuration automation tool; however, in Salt they are referred to as "states". You can think of these as the desired states of the items being configured.

Creating the nginx state directory and file

Each state in salt needs a sub-directory in the respective environment. Because we are going to use this state to install and configure nginx I will name our state nginx and I am placing it within our base environment.

root@saltmaster:~# mkdir /salt/states/base/nginx

Once the directory is created we will need to create the “init.sls” file.

root@saltmaster:~# vi /salt/states/base/nginx/init.sls

Specifying the nginx state

Now that we have the Salt State file open, we can start adding the desired state configuration. The Salt State files by default utilize the YAML format. By using YAML these files are very easy to read and easier to write.

Managing the nginx package and service

The following configuration will install the nginx package and ensure the nginx service is running. It will also watch the nginx package and the nginx.conf file for updates; if either of those items changes, the nginx service will be restarted automatically the next time salt runs against the minions.

Add the following to init.sls:

nginx:
  pkg:
    - installed
  service:
    - running
    - watch:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf

The configuration is dead simple, but just for clarity I will comment each line to explain how this works.

nginx: ## This is the name of the package and service
  pkg: ## Tells salt this is a package
    - installed ## Tells salt to install this package
  service: ## Tells salt this is also a service
    - running ## Tells salt to ensure the service is running
    - watch: ## Tells salt to watch the following items
      - pkg: nginx ## If the package nginx gets updated, restart the service
      - file: /etc/nginx/nginx.conf ## If the file nginx.conf gets updated, restart the service

With configuration this simple, a Jr. Sysadmin can install nginx on 100 nodes in less than 5 minutes.

Managing the nginx.conf file

Salt can do more than just install a package and make sure a service is running. Salt can also be used to deploy configuration files. Using our nginx example we will also configure salt to deploy our nginx.conf file for us.

The below configuration, when added to the init.sls, tells salt to deploy an nginx.conf file to the minion using the /salt/states/base/nginx/nginx.conf file as a template.

Append the following to the same init.sls:

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://nginx/nginx.conf
    - user: root
    - group: root
    - mode: 644

Again the configuration is dead simple, but let us break this one down as well.

/etc/nginx/nginx.conf: ## Name of the file
  file: ## Tells salt this is a file
    - managed ## Tells salt to manage this file
    - source: salt://nginx/nginx.conf ## Tells salt where it can find a local copy on the master
    - user: root ## Tells salt to ensure the owner of the file is root
    - group: root ## Tells salt to ensure the group of the file is root
    - mode: 644 ## Tells salt to ensure the permissions of the file are 644

After appending the nginx.conf configuration into the Salt State file you can now save and quit the file.

Before continuing, make sure you place your nginx.conf file into /salt/states/base/nginx/, as Salt will not deploy a file it cannot find. It is also worth noting that if the nginx.conf on the minion differs from the nginx.conf on the salt-master, Salt will overwrite the file automatically on its next run. This means the nginx.conf on the master is now your master copy.

Creating the top.sls file

The top.sls file is the Salt State configuration file; it defines which states should be in effect on specific minions. The top.sls file is, by convention, usually in the base environment.

To add our nginx state to our salt-minion we will perform the following steps.

Create the top.sls file

root@saltmaster:~# vi /salt/states/base/top.sls

Append the following:

base:
  'saltminion*':
    - nginx

The configuration, much like the Salt State files is very simple. Let’s break down the configuration a bit more though.

base: ## Tells salt what environment the following lines are for
  'saltminion*': ## Tells salt to apply the following to any hosts matching a hostname of saltminion*
    - nginx ## Tells salt to apply the nginx state to these hosts

That’s it, we are done configuring salt stack.

Apply The Salt States

Unlike other configuration management tools, by default SaltStack does not automatically deploy state configurations. This can be enabled, but it is not the default.

To apply our nginx configuration, run the following command:

root@saltmaster:~# salt '*' state.highstate
 State: - file
 Name: /etc/nginx/nginx.conf
 Function: managed
 Result: True
 Comment: File /etc/nginx/nginx.conf is in the correct state

 State: - pkg
 Name: nginx
 Function: installed
 Result: True
 Comment: The following packages were installed/updated: nginx.
 Changes: nginx-full: { new : 1.1.19-1ubuntu0.2, old : }
          httpd: { new : 1, old : }
          nginx-common: { new : 1.1.19-1ubuntu0.2, old : }
          nginx: { new : 1.1.19-1ubuntu0.2, old : }

 State: - service
 Name: nginx
 Function: running
 Result: True
 Comment: Started Service nginx
 Changes: nginx: True

That's it: nginx is installed and configured. While this might have seemed like a lot of work just to install nginx, you can expand your salt configuration to php, varnish, mysql client/server, nfs, and plenty of other packages and services. At the end of the day, SaltStack can save sysadmins valuable time.


VirtualBox Disk resize


I run CentOS 7 in VirtualBox, and I finally enlarged my /dev/mapper/centos-root partition. gparted doesn't work for me because I do not have a desktop environment on the CentOS 7 VM.

Power off your CentOS virtual machine and go to the directory of your *.vdi image. If you don't know where it is, look in the VirtualBox Manager GUI: VirtualBox -> Settings -> Storage -> *.vdi -> Location. Mine, for example, is located under ~/VirtualBox VMs/CentOS7/CentOS.vdi. Back up your image in case anything goes wrong:

$ cp CentOS7.vdi CentOS7.backup.vdi   

#Resize your virtual storage size, e.g. 200 GB

$ VBoxManage modifyhd CentOS7.vdi --resize 204800 
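The --resize argument is in megabytes, so a quick sanity check for 200 GB:

```shell
gb=200
mb=$((gb * 1024))   # VBoxManage modifyhd expects megabytes
echo "$mb"
```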

#Power on your CentOS virtual machine, and check with below command.

 $ sudo fdisk -l

 Device Boot      Start         End      Blocks   Id  System
   /dev/sda1   *        2048     1026047      512000   83  Linux
   /dev/sda2         1026048   209715199   104344576   8e  Linux LVM

Use fdisk utility to delete/create partitions

$ sudo fdisk /dev/sda    #You are in the fdisk utility interactive mode, issue following commands: (mostly just follow the default recommendation)

d - delete a partition

2 - select a partition to delete (/dev/sda2 here)

n - create a new partition

p - make it a primary partition

2 - make it on the same partition number as we deleted

<return> - set the starting block (by default)

<return> - set the ending block (by default)

w - write the partition table and exit the fdisk interactive mode, then reboot your CentOS machine

$ sudo reboot       
#Resize the physical volume and verify the new size

$ sudo pvresize /dev/sda2

$ sudo pvscan

Take a look at your logical volume mapping to see which volume you want to enlarge; in my case, /dev/mapper/centos-root. Then resize the file system; the -r option takes care of resizing the file system along with the logical volume:

$ sudo lvextend -r -l +100%FREE /dev/mapper/centos-root

Here you go… You did it…


Percona XtraDB Cluster


The cluster consists of nodes. The recommended configuration is at least 3 nodes, but you can run it with 2 nodes as well. Each node is a regular MySQL / Percona Server setup. The point is that you can convert your existing MySQL / Percona Server into a node and roll out a cluster using it as a base, or the reverse: you can detach a node from the cluster and use it as a regular standalone server. Each node contains a full copy of the data.

Installation Steps

Debian and Ubuntu packages from Percona are signed with a key. Before using the repository, you should add the key to apt

apt-key adv --keyserver --recv-keys 1C4CBDCDCD2EFD2A

Create a dedicated Percona repository file /etc/apt/sources.list.d/percona.list (trusty):

deb trusty main
apt-get update
apt-get install percona-xtradb-cluster-56 percona-xtradb-cluster-galera-3.x

You should see something like this if the installation is successful:

* Starting MySQL (Percona XtraDB Cluster) database server mysqld     [ OK]

Now, edit the my.cnf file with the below template (node1):

# Disabling symbolic-links is recommended to prevent assorted security risks
# Path to Galera library
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
# In order for Galera to work correctly binlog format should be ROW
# MyISAM storage engine has only experimental support
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
# Node #1 address 
# SST method 
# Cluster name
# Authentication for SST method
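A my.cnf sketch matching those comments, in the shape Percona's documentation suggests (the IPs, cluster name, and SST credentials are placeholders you must replace with your own):

```
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=192.168.70.61
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cretPass"
```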

Now you can simply bootstrap the first node (this initializes the cluster):

/etc/init.d/mysql bootstrap-pxc
service mysql bootstrap-pxc

Check the status with the below commands:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
SHOW GLOBAL STATUS LIKE 'wsrep_connected';
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';

Find Public IP


myip="$(dig +short myip.opendns.com @resolver1.opendns.com)"
echo "My WAN/Public IP address: ${myip}"




Apache Performance Tweak – ubuntu

vim /etc/apache2/mods-available/mpm_prefork.conf

<IfModule mpm_prefork_module>
    StartServers             50
    MinSpareServers          25
    MaxSpareServers          100
    MaxRequestWorkers        500
    MaxConnectionsPerChild   0
    ServerLimit              500
</IfModule>

service apache2 restart
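A rough way to sanity-check MaxRequestWorkers against memory is to divide the RAM you can give Apache by the average per-child RSS. Both figures below are assumptions for illustration; measure your own with ps or top:

```shell
avail_mb=15000    # RAM budget for Apache, assumed
per_child_mb=30   # average prefork child RSS, assumed
workers=$((avail_mb / per_child_mb))
echo "$workers"
```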

Media parameter in SIP

m=audio 12548 RTP/AVP 0 8 101

It's a field from the SDP protocol describing the parameters of the media ("m" is for "media"). The media type is "audio" (as opposed to video, for example). 12548 is the port used for the media stream. "RTP/AVP" means "RTP Audio/Video Profile" and refers to one of the RTP profiles; the payload types offered are 0, 8 and 101. 0 is PCMU at 8000 Hz, 8 is PCMA at 8000 Hz, and 101 is the payload type used for sending DTMF digits.
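For context, the m= line usually sits in a full SDP body alongside a= attributes that spell out the same payload-type mappings (a hypothetical minimal example; the addresses are placeholders):

```
v=0
o=- 0 0 IN IP4 198.51.100.1
s=call
c=IN IP4 198.51.100.1
t=0 0
m=audio 12548 RTP/AVP 0 8 101
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-16
```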


Install OpenSIPs


OpenSIPS is a multi-functional, multi-purpose signaling SIP server used by carriers, telecoms or ITSPs for solutions like Class4/5 Residential Platforms, Trunking / Wholesale, Enterprise / Virtual PBX Solutions, Session Border Controllers, Application Servers, Front-End Load Balancers, IMS Platforms, Call Centers, and many others…

Platform: Ubuntu 14.04+

Let’s Begin…

apt-get install build-essential openssl bison flex
apt-get install perl libdbi-perl libdbd-mysql-perl libdbd-pg-perl libfrontier-rpc-perl libterm-readline-gnu-perl libberkeleydb-perl ncurses-dev
apt-get install mysql-server libmysqlclient-dev
tar -xvf opensips-2.3.0.tar.gz
cd opensips-2.3.0/
make all
make install 
mkdir /var/run/opensips
cd packaging/debian/
cp opensips.default /etc/default/opensips
cp opensips.init /etc/init.d/opensips
chmod +x /etc/init.d/opensips
useradd opensips
update-rc.d opensips defaults 99
vim /etc/default/opensips
Here you need to set 'RUN_OpenSIPS' to 'yes'. You can also change the user, group, and name that you wish to use for the OpenSIPS service, and raise the shared memory to a minimum of 128 MB, which is recommended for the OpenSIPS server.
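After editing, /etc/default/opensips might look roughly like this (a sketch; the variable names follow the stock opensips.default file as I recall it, so treat them as assumptions and check your own copy):

```
RUN_OPENSIPS=yes
USER=opensips
GROUP=opensips
# Shared memory in MB; 128 is the recommended minimum
S_MEMORY=128
```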


We also need to update the daemon location in the OpenSIPS startup script, change its state from 'off' to 'on', and then close the file after making the changes.
