Kubernetes cluster in VirtualBox (Ubuntu 16.04)

First, let's install Docker:

#Remove any older versions
sudo apt-get remove docker docker-engine docker.io
sudo apt autoremove

sudo apt-get update

#Add Docker’s official GPG key:
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 
#Verify that you now have the key with the fingerprint
sudo apt-key fingerprint 0EBFCD88

# Add x86_64 / amd64 stable repo
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

#Install Docker-ce now.
sudo apt-get install docker-ce -y
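
A quick check that the Docker engine installed correctly and the service is running:

sudo docker --version
sudo systemctl status docker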

Next, add Google's apt signing key for the Kubernetes packages:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Add the Kubernetes deb repository for Ubuntu 16.04 (xenial):

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list 
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Update apt and install Docker, kubelet, kubeadm, and kubectl:

apt-get update
apt-get install ebtables ethtool docker.io apt-transport-https curl
apt-get install -y kubelet kubeadm kubectl

Starting with Kubernetes v1.8.0, the Kubelet will fail to start up if the nodes have swap memory enabled. Discussion around why swap is not supported can be found in this issue.

Before performing an installation, you must disable swap memory on your nodes. If you want to run with swap memory enabled, you can override the Kubelet configuration in the plan file.

If you are performing an upgrade and you have swap enabled, you will have to decide whether you want to disable swap on all your nodes. If not, you must override the kubelet configuration to allow swap.
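
If you choose to disable swap (the default expectation for kubeadm), the standard commands on Ubuntu are:

# Turn swap off immediately
sudo swapoff -a
# Comment out swap entries in /etc/fstab so swap stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab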

Override Kubelet Configuration

If you want to run your cluster nodes with swap memory enabled, you can override the Kubelet configuration in the plan file:

cluster:
  # ...
  kubelet:
    option_overrides:
      fail-swap-on: false

Enable bridge-nf-call tables

vim /etc/ufw/sysctl.conf  
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
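
To apply these settings without a reboot, you can also load the br_netfilter module and set the equivalent sysctl keys directly (dot notation):

sudo modprobe br_netfilter
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=1
sudo sysctl -w net.bridge.bridge-nf-call-arptables=1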

Create a token on the master with "kubeadm token create". Tokens expire after some time, so be ready to create a new one when needed.
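
If the master has not been initialized yet, a typical sequence looks like this (a sketch, assuming the master IP used below; --print-join-command needs a reasonably recent kubeadm):

# On the master: initialize the control plane (run once)
sudo kubeadm init --apiserver-advertise-address=192.168.1.30

# Print a fresh join command whenever the previous token has expired
kubeadm token create --print-join-command

Each worker node then joins with the command kubeadm prints, for example: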

kubeadm join --token 7be225.9524040d34451e07 192.168.1.30:6443 --discovery-token-ca-cert-hash sha256:ade14df7b994c8eb0572677e094d3ba835bec37b33a5c2cadabf6e5e3417a522
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
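
Once the workers have joined, a quick check from a machine with a working kubeconfig:

kubectl get nodes
kubectl get pods --all-namespaces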

The cluster is now ready for deployments. SSH to the master and deploy your microservices.

Kops AWS infra Automation

 

This example project will help you create a kops cluster spanning multiple AZs, limited to a single region.

This assumes you have the AWS CLI installed and an IAM user configured.

The IAM user to create the Kubernetes cluster must have the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess

Prerequisites:

  1. Install Terraform (version 0.11.7 is required): https://www.terraform.io/downloads.html
  2. Install kops (we are using kops 1.8.1 for now): https://github.com/kubernetes/kops

For Mac

brew update && brew install kops

OR from GITHUB

curl -Lo kops https://github.com/kubernetes/kops/releases/download/1.8.1/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

For Linux

wget -O kops https://github.com/kubernetes/kops/releases/download/1.8.1/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

  3. Install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

For Mac

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

For Ubuntu

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
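
A quick sanity check that both binaries are installed and on the PATH:

kops version
kubectl version --client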

Getting started

Replace domain_name with your public zone name:

vim example/variables.tf
variable "domain_name" {
  default = "k8s.thoutam.com"
}

Edit the cluster details: node_asg_desired, instance_key_name, etc.

vim example/kops_clusters.tf

**** Edit the module according to your infra name ****

module "staging" {
  source                    = "../module"
  kubernetes_version        = "1.8.11"
  sg_allow_ssh              = "${aws_security_group.allow_ssh.id}"
  sg_allow_http_s           = "${aws_security_group.allow_http.id}"
  cluster_name              = "staging"
  cluster_fqdn              = "staging.${aws_route53_zone.k8s_zone.name}"
  route53_zone_id           = "${aws_route53_zone.k8s_zone.id}"
  kops_s3_bucket_arn        = "${aws_s3_bucket.kops.arn}"
  kops_s3_bucket_id         = "${aws_s3_bucket.kops.id}"
  vpc_id                    = "${aws_vpc.main_vpc.id}"
  instance_key_name         = "${var.key_name}"
  node_asg_desired          = 3
  node_asg_min              = 3
  node_asg_max              = 3
  master_instance_type      = "t2.medium"
  node_instance_type        = "m4.xlarge"
  internet_gateway_id       = "${aws_internet_gateway.public.id}"
  public_subnet_cidr_blocks = ["${local.staging_public_subnet_cidr_blocks}"]
  kops_dns_mode             = "private"
}

If you want to force a single master (useful when a master per AZ is not required, or when running in a region with only 2 AZs):

vim module/variables.tf 

**** force_single_master should be true if you want single master ****

variable "force_single_master" {
   default = true
  }

All good now. You can run terraform plan to see if you get any errors; if everything is clean, just run "terraform apply" to build the cluster.

cd example
terraform plan

(Output something like below)
  ......
  ......
  
  + module.staging.null_resource.delete_tf_files
      id:                                                 <computed>


Plan: 6 to add, 0 to change, 1 to destroy.

------------------------------------------------------------------------
  
  ......
  ......

MASTER_ELB_CLUSTER1=$(terraform state show module.staging.aws_elb.master | grep dns_name | cut -f2 -d= | xargs)
kubectl config set-cluster staging.k8s.thoutam.com --insecure-skip-tls-verify=true --server=https://$MASTER_ELB_CLUSTER1

And then test:

kubectl cluster-info
Kubernetes master is running at https://staging-master-999999999.eu-west-1.elb.amazonaws.com
KubeDNS is running at https://staging-master-999999999.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns/proxy

kubectl get nodes
NAME                                          STATUS    ROLES     AGE       VERSION
ip-172-20-25-99.eu-west-1.compute.internal    Ready     master    9m        v1.8.11
ip-172-20-26-11.eu-west-1.compute.internal    Ready     node      3m        v1.8.11
ip-172-20-26-209.eu-west-1.compute.internal   Ready     node      27s       v1.8.11
ip-172-20-27-107.eu-west-1.compute.internal   Ready     node      2m        v1.8.11

Credits: Original code is taken from here.

SaltStack

Setting up the Salt-Master

Salt servers come in two types: master and minion. The master server hosts all of the policies and configurations and pushes them to the various minions. The minions are the infrastructure that you want managed. All pushed information is communicated via ZeroMQ; this communication is encrypted, and minions must be authenticated on the master before receiving any commands or configurations.

Installing on Ubuntu

I will be showing you how to install Salt on Ubuntu; however if you want to install Salt on other distributions you can find instructions and a bootstrap script at docs.saltstack.com.

Installing Python Software Properties

Saltstack maintains a PPA (Personal Package Archive) that can be added as an apt repository. On my systems, before I could add a PPA repository, I had to install the python-software-properties package.

root@saltmaster:~# apt-get --yes -q install python-software-properties

Adding the SaltStack PPA Repository

root@saltmaster:~# add-apt-repository ppa:saltstack/salt
You are about to add the following PPA to your system:
 Salt, the remote execution and configuration management tool.
 More info: https://launchpad.net/~saltstack/+archive/salt
Press [ENTER] to continue or ctrl-c to cancel adding it

Make sure that you press [ENTER] otherwise the repository will not be added.

Update Apt’s Package Indexes

After adding the repository make sure that you update Apt’s package index.

root@saltmaster:~# apt-get --yes -q update

Install The Salt-Master package

root@saltmaster:~# apt-get --yes -q install salt-master

Configuring The Salt Master

Now that Salt has been installed, we will configure the master server. Unlike many other tools the configuration of SaltStack is pretty simple. This article is going to show a very simple “get you up and running” configuration. I will make sure to cover more advanced configurations in later articles.

In order to configure the salt master we will need to edit the /etc/salt/master configuration file.

root@saltmaster:~# vi /etc/salt/master

Changing the bind interface

Salt is not necessarily push only; the salt minions can also send requests to the salt master. To make sure this works, we need to tell salt which network interface to listen on.

Find:

# The address of the interface to bind to
#interface: 0.0.0.0

Replace with:

# The address of the interface to bind to
interface: youripaddress

Example:

# The address of the interface to bind to
interface: 192.168.100.102

Setting the states file_roots directory

All of salt’s policies or rather salt “states” need to live somewhere. The file_roots directory is the location on disk for these states. For this article we will place everything into /salt/states/base.

Find:

#file_roots:
#  base:
#    - /srv/salt

Replace with:

file_roots:
  base:
    - /salt/states/base

Not all states are the same; sometimes you may want a package configured one way in development and another way in production. While we won't be covering it yet in this article, you can do this by using salt's "environments" configuration.

Each salt master must have a base environment; it houses the top.sls file, which defines which salt states apply to specific minions. The base environment is also used in general for states that apply to all systems.

For example, I love the screen command and want it installed on every machine I manage. To do this I add the screen state into the base environment.

To add additional environments, simply append them to the file_roots configuration.

Adding the development environment:

file_roots:
  base:
    - /salt/states/base
  development:
    - /salt/states/dev

Setting the pillar_roots

While this article is not going to cover pillars (I will add more salt articles, don't worry), I highly suggest configuring the pillar_roots directories as well. I have found that pillars are extremely useful for reusing state configuration and reducing the amount of unique state configurations.

Find:

#pillar_roots:
#  base:
#    - /srv/pillar

Replace:

pillar_roots:
  base:
    - /salt/pillars/base

Pillars also understand environments; the method for adding additional environments is the same as it was for file_roots.
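
For example, adding a development pillar environment alongside base would look like this (the directory path is just an illustration):

pillar_roots:
  base:
    - /salt/pillars/base
  development:
    - /salt/pillars/dev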

Restart the salt-master service

That’s all of the editing that we need to perform for a basic salt installation. For the settings to take effect we will need to restart the salt-master service.

root@saltmaster:~# service salt-master restart
 salt-master stop/waiting
 salt-master start/running, process 1036

Creating the salt states and pillars directories

Before we move on to the salt minion’s installation we should create the file_roots and pillar_roots directories that we specified in /etc/salt/master.

root@saltmaster:~# mkdir -p /salt/states/base /salt/pillars/base

Setting up the Salt-Minion

Now that the salt master is set up and configured, we need to install the salt-minion package on all of the systems we want salt to manage for us. Theoretically, once these minions have been connected to the salt master, you could get away with never logging into these systems again.

Installing on Ubuntu

The below process can be repeated on as many minions as needed.

Installing Python Software Properties

root@saltminion:~# apt-get --yes -q install python-software-properties

Adding the SaltStack PPA Repository

root@saltminion:~# add-apt-repository ppa:saltstack/salt
You are about to add the following PPA to your system:
 Salt, the remote execution and configuration management tool.
 More info: https://launchpad.net/~saltstack/+archive/salt
Press [ENTER] to continue or ctrl-c to cancel adding it

Make sure that you press [ENTER] otherwise the repository will not be added.

Update Apt’s Package Indexes

After adding the repository make sure that you update Apt’s package index.

root@saltminion:~# apt-get --yes -q update

Install The Salt-Minion package

root@saltminion:~# apt-get --yes -q install salt-minion

Configuring the Salt-Minion

Configuring the salt minion is even easier than the salt master. In simple implementations like the one we are performing today all we need to do is set the salt master IP address.

root@saltminion:~# vi /etc/salt/minion

Changing the Salt-Master target IP

Find:

#master: salt

Replace with:

master: yourmasterip

Example:

master: 192.168.100.102

By default the salt-minion package will try to resolve the "salt" hostname. A simple trick is to set the "salt" hostname to resolve to your salt-master's IP in the /etc/hosts file and let the salt-master push a corrected /etc/salt/minion configuration file. This trick lets you set up a salt minion without having to edit the minion configuration file.

Restarting the salt-minion service

In order for the configuration changes to take effect, we must restart the salt-minion service.

root@saltminion:~# service salt-minion restart
salt-minion stop/waiting
salt-minion start/running, process 834

Accepting the Minions key on the Salt-Master

Once the salt-minion service is restarted, the minion will start trying to communicate with the master. Before that can happen, we must accept the minion's key on the master.

On the salt master, list the salt keys

We can see what keys are pending acceptance by running the salt-key command.

root@saltmaster:~# salt-key -L
Accepted Keys:
Unaccepted Keys:
saltminion
Rejected Keys:

Accept the saltminion’s key

We can accept the saltminion's key in two ways: by its specific name, or by accepting all pending keys.

Accept by name:
root@saltmaster:~# salt-key -a saltminion
The following keys are going to be accepted:
Unaccepted Keys:
saltminion
Proceed? [n/Y] Y
Key for minion saltminion accepted.

Accept all keys:
root@saltmaster:~# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
saltminion
Proceed? [n/Y] Y
Key for minion saltminion accepted.
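
With the key accepted, you can verify that the master can reach the minion; the output should look roughly like this:

root@saltmaster:~# salt '*' test.ping
saltminion:
    True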

Installing and Configuring nginx with SaltStack

While the above information gets you started with Salt, it doesn’t explain how to use Salt to install a package. The below steps will outline how to install a package and deploy configuration files using Salt.

Creating the nginx state

SaltStack has policies just like any other configuration automation tool; in Salt they are referred to as "states". You can think of these as the desired states of the items being configured.

Creating the nginx state directory and file

Each state in salt needs a sub-directory in the respective environment. Because we are going to use this state to install and configure nginx I will name our state nginx and I am placing it within our base environment.

root@saltmaster:~# mkdir /salt/states/base/nginx

Once the directory is created we will need to create the “init.sls” file.

root@saltmaster:~# vi /salt/states/base/nginx/init.sls

Specifying the nginx state

Now that we have the Salt State file open, we can start adding the desired state configuration. The Salt State files by default utilize the YAML format. By using YAML these files are very easy to read and easier to write.

Managing the nginx package and service

The following configuration will install the nginx package and ensure the nginx service is running. It also watches the nginx package and the nginx.conf file for updates; if either of them changes, the nginx service will be automatically restarted the next time salt runs against the minions.

Add the following to init.sls:

nginx:
  pkg:
    - installed
  service:
    - running
    - watch:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf

The configuration is dead simple, but just for clarity I will comment each line to explain how this works.

nginx: ## This is the name of the package and service
  pkg: ## Tells salt this is a package
    - installed ## Tells salt to install this package
  service: ## Tells salt this is also a service
    - running ## Tells salt to ensure the service is running
    - watch: ## Tells salt to watch the following items
      - pkg: nginx ## If the package nginx gets updated, restart the service
      - file: /etc/nginx/nginx.conf ## If the file nginx.conf gets updated, restart the service

With configuration this simple, a Jr. Sysadmin can install nginx on 100 nodes in less than 5 minutes.

Managing the nginx.conf file

Salt can do more than just install a package and make sure a service is running. Salt can also be used to deploy configuration files. Using our nginx example we will also configure salt to deploy our nginx.conf file for us.

The configuration below, when added to init.sls, tells salt to deploy an nginx.conf file to the minion using /salt/states/base/nginx/nginx.conf on the master as the source.

Append the following to the same init.sls:

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://nginx/nginx.conf
    - user: root
    - group: root
    - mode: 644

Again the configuration is dead simple, but let us break this one down as well.

/etc/nginx/nginx.conf: ## Name of the file
  file: ## Tells salt this is a file
    - managed ## Tells salt to manage this file
    - source: salt://nginx/nginx.conf ## Tells salt where it can find a local copy on the master
    - user: root ## Tells salt to ensure the owner of the file is root
    - group: root ## Tells salt to ensure the group of the file is root
    - mode: 644 ## Tells salt to ensure the permissions of the file are 644

After appending the nginx.conf configuration into the Salt State file you can now save and quit the file.

Before continuing, make sure you place your nginx.conf file into /salt/states/base/nginx/; if Salt cannot find the file, it will not deploy it. It is also worth noting that if the nginx.conf on the minion differs from the nginx.conf on the salt-master, Salt will overwrite the minion's copy automatically on its next run. This means the nginx.conf on the master is now your master copy.

Creating the top.sls file

The top.sls file is the Salt state configuration file; it defines which states should be in effect on specific minions. By convention the top.sls file lives in the base environment.

To add our nginx state to our salt-minion we will perform the following steps.

Create the top.sls file

root@saltmaster:~# vi /salt/states/base/top.sls

Append the following:

base:
  'saltminion*':
    - nginx

The configuration, much like the Salt State files, is very simple. Let's break it down a bit more though.

base: ## Tells salt what environment the following lines are for
  'saltminion*': ## Tells salt to apply the following to any hosts matching a hostname of saltminion*
    - nginx ## Tells salt to apply the nginx state to these hosts

That’s it, we are done configuring salt stack.

Apply The Salt States

Unlike some other configuration management tools, SaltStack does not automatically apply the state configurations by default. It can be configured to do so, but that is not the out-of-the-box behavior.

To apply our nginx configuration run the following command

root@saltmaster:~# salt '*' state.highstate
saltminion:
----------
 State: - file
 Name: /etc/nginx/nginx.conf
 Function: managed
 Result: True
 Comment: File /etc/nginx/nginx.conf is in the correct state
 Changes: 
----------
 State: - pkg
 Name: nginx
 Function: installed
 Result: True
 Comment: The following packages were installed/updated: nginx.
 Changes: nginx-full: { new : 1.1.19-1ubuntu0.2
old : 
}
 httpd: { new : 1
old : 
}
 nginx-common: { new : 1.1.19-1ubuntu0.2
old : 
}
 nginx: { new : 1.1.19-1ubuntu0.2
old : 
}

----------
 State: - service
 Name: nginx
 Function: running
 Result: True
 Comment: Started Service nginx
 Changes: nginx: True

That's it, nginx is installed and configured. While this might have seemed like a lot of work just to install nginx, the same approach scales to PHP, Varnish, MySQL client/server, NFS, and plenty of other packages and services. At the end of the day SaltStack can save sysadmins valuable time.

VirtualBox Disk resize

I run CentOS 7 in VirtualBox and finally enlarged my /dev/mapper/centos-root partition; gparted doesn't work for me because I do not have a desktop on the CentOS 7 VM.

Power off your CentOS virtual machine and go to the directory containing your *.vdi image. If you don't know where it is, check the VirtualBox Manager GUI: VirtualBox -> Settings -> Storage -> *.vdi -> Location. Mine is located under ~/VirtualBox VMs/CentOS7/CentOS7.vdi. Back up your image just in case anything goes wrong:

$ cp CentOS7.vdi CentOS7.backup.vdi   

#Resize your virtual storage size, e.g. 200 GB

$ VBoxManage modifyhd CentOS7.vdi --resize 204800 
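
Optionally, confirm the new capacity before powering the VM back on (standard VBoxManage command):

$ VBoxManage showmediuminfo CentOS7.vdi | grep -i capacity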

#Power on your CentOS virtual machine, and check with below command.

 
$ sudo fdisk -l

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   209715199   104344576   8e  Linux LVM

Use fdisk utility to delete/create partitions

$ sudo fdisk /dev/sda

You are now in fdisk's interactive mode; issue the following commands (mostly just accept the defaults):

d - delete a partition

2 - select a partition to delete (/dev/sda2 here)

n - create a new partition

p - make it a primary partition

2 - use the same partition number as the one we deleted

<return> - accept the default starting block

<return> - accept the default ending block

w - write the partition table and exit the fdisk interactive mode

Reboot your CentOS machine

$ sudo reboot       
#Resize the physical volume and verify the new size

$ sudo pvresize /dev/sda2

$ sudo pvscan

Take a look at your logical volumes to see which volume you want to enlarge; in my case it is /dev/mapper/centos-root. Resize it with the -r option so the filesystem is resized for you as well:

$ lvextend -r -l +100%FREE /dev/mapper/centos-root
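
Finally, confirm that the root filesystem actually picked up the extra space:

$ df -h /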

Here you go… You did it…

Percona XtraDB Cluster

The cluster consists of nodes. The recommended configuration is at least 3 nodes, but you can run it with 2 nodes as well. Each node is a regular MySQL / Percona Server setup; you can convert an existing MySQL / Percona Server into a node and build the cluster using it as a base, or detach a node from the cluster and use it as a regular standalone server. Each node contains a full copy of the data.

Installation Steps

Debian and Ubuntu packages from Percona are signed with a key. Before using the repository, you should add the key to apt

apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A

Create a dedicated Percona repository file /etc/apt/sources.list.d/percona.list (trusty):

deb http://repo.percona.com/apt trusty main
apt-get update
apt-get install percona-xtradb-cluster-56 percona-xtradb-cluster-galera-3.x

You should see something like this if the installation is successful:

* Starting MySQL (Percona XtraDB Cluster) database server mysqld     [ OK]

Now, edit the my.cnf file with the template below (node 1):

[mysqld]
datadir=/var/lib/mysql
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.X.X.1,10.X.X.2,10.X.X.3
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address 
wsrep_node_address=10.X.X.1
# SST method 
wsrep_sst_method=xtrabackup-v2
wsrep_node_name=node1
# Cluster name
wsrep_cluster_name=db_cluster
# Authentication for SST method
wsrep_sst_auth="billinguser:billingpass"
slow_query_log=1
slow_query_log_file=/var/log/mysqld-slow.log
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Now you can simply bootstrap (start the first node that will initiate the cluster):

/etc/init.d/mysql bootstrap-pxc
      or
service mysql bootstrap-pxc
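
A hedged sketch of the remaining steps: on the bootstrapped node, create the SST user referenced by wsrep_sst_auth, then start MySQL normally on the other nodes (after adjusting wsrep_node_address and wsrep_node_name in their my.cnf) so they sync from the cluster via SST.

# On node 1 (bootstrapped): create the SST user used by xtrabackup-v2
mysql -u root -p -e "CREATE USER 'billinguser'@'localhost' IDENTIFIED BY 'billingpass'; GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'billinguser'@'localhost'; FLUSH PRIVILEGES;"

# On node 2 and node 3: start MySQL normally; each node performs SST from the running cluster
service mysql start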

Check the cluster status with the commands below:

SHOW GLOBAL STATUS LIKE 'wsrep_%';
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
SHOW GLOBAL STATUS LIKE 'wsrep_ready';
SHOW GLOBAL STATUS LIKE 'wsrep_connected';
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';

Find Public IP

myip="$(dig +short myip.opendns.com @resolver1.opendns.com)"
echo "My WAN/Public IP address: ${myip}"

More…

curl ifconfig.me
curl icanhazip.com
curl ipecho.net/plain
curl ifconfig.co

Apache Performance Tweak – Ubuntu

vim /etc/apache2/mods-available/mpm_prefork.conf
<IfModule mpm_prefork_module>
    StartServers              50
    MinSpareServers           25
    MaxSpareServers          100
    MaxRequestWorkers        500
    MaxConnectionsPerChild     0
    ServerLimit              500
</IfModule>
service apache2 restart
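
Before restarting, it is worth confirming that the prefork MPM is actually the one in use and that the configuration parses cleanly (standard Apache commands on Ubuntu):

apache2ctl -V | grep -i mpm     # should report the prefork MPM
apache2ctl configtest           # expect "Syntax OK" before restarting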

Media parameter in SIP

m=audio 12548 RTP/AVP 0 8 101

This is a field from the SDP protocol describing the parameters of the media ("m" stands for "media"). The media type here is "audio" (as opposed to video, for example). 12548 is the port used for the media stream. "RTP/AVP" means "RTP Audio/Video Profile", and the trailing numbers are the offered payload types: 0 is PCMU at 8000 Hz, 8 is PCMA at 8000 Hz, and 101 is the payload type used for sending DTMF digits.
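
For context, here is a sketch of how that m-line typically appears alongside its rtpmap attributes in a full SDP body (values are illustrative):

m=audio 12548 RTP/AVP 0 8 101
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-16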

Install OpenSIPs

OpenSIPS is a multi-functional, multi-purpose signaling SIP server used by carriers, telecoms or ITSPs for solutions like Class4/5 Residential Platforms, Trunking / Wholesale, Enterprise / Virtual PBX Solutions, Session Border Controllers, Application Servers, Front-End Load Balancers, IMS Platforms, Call Centers, and many others…

Platform: Ubuntu 14.04+

Let’s Begin…

apt-get install build-essential openssl bison flex
apt-get install perl libdbi-perl libdbd-mysql-perl libdbd-pg-perl libfrontier-rpc-perl libterm-readline-gnu-perl libberkeleydb-perl ncurses-dev
apt-get install mysql-server libmysqlclient-dev
wget http://opensips.org/pub/opensips/latest/opensips-2.3.0.tar.gz
tar -xvf opensips-2.3.0.tar.gz
cd opensips-2.3.0/
make all
make install 
mkdir /var/run/opensips
cd packaging/debian/
cp opensips.default /etc/default/opensips
cp opensips.init /etc/init.d/opensips
chmod +x /etc/init.d/opensips
useradd opensips
update-rc.d opensips defaults 99
vim /etc/default/opensips
Here you need to set 'RUN_OpenSIPS' to 'yes'. You can also change the user, group, and name that you wish to use for the OpenSIPS service, and increase the shared memory to at least 128 MB, which is recommended for the OpenSIPS server.
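
For illustration only, the relevant entries in /etc/default/opensips end up looking something like this (exact variable names can differ between OpenSIPS versions, so treat these as assumptions):

# illustrative values; check the variable names shipped with your opensips package
RUN_OPENSIPS=yes
USER=opensips
GROUP=opensips
S_MEMORY=128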

 

We also need to update the daemon path for OpenSIPS in its startup script (/etc/init.d/opensips), change its state from 'off' to 'on', and then save and close the file after making the changes.
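
After those edits, the service can be started and checked in the usual way (a minimal sketch; OpenSIPS logs to syslog by default):

service opensips start
service opensips status
tail -f /var/log/syslog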