Kubernetes multiline logs for Elasticsearch (Kibana)

If you're having issues with multiline logs in Kubernetes, here is a solution for you.

The Kubernetes autodiscover provider watches for Kubernetes pods to start, update, and stop. These fields are available on every event:

  • host
  • port
  • kubernetes.container.id
  • kubernetes.container.image
  • kubernetes.container.name
  • kubernetes.labels
  • kubernetes.namespace
  • kubernetes.node.name
  • kubernetes.pod.name

If the include_annotations option is added to the provider config, the annotations listed there are added to the event.
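For example (a minimal sketch; the annotation name is illustrative, not from the manifest below), enabling it looks like:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      # Copy this pod annotation onto every event from the pod:
      include_annotations: ['prometheus.io/scrape']
```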

Using the YAML manifest below you can ship all Kubernetes logs to Elasticsearch (hosted or in-house).

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    # To enable hints-based autodiscover instead, remove `filebeat.config.inputs`
    # and set `hints.enabled: true` on the provider below.
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: default
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline.pattern: '^[[:space:]]'
                  multiline.negate: false
                  multiline.match: after
    processors:
      - add_cloud_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      # Use https:// in the host if it is hosted ES.
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9243}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
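With `multiline.pattern: '^[[:space:]]'`, `negate: false`, and `match: after`, any line that begins with whitespace is appended to the line before it — the classic stack-trace shape. For a hypothetical Java exception the grouping looks like this:

```
2018-09-01 10:00:01 ERROR Unhandled exception      <- starts a new event
    at com.example.Foo.bar(Foo.java:12)            <- leading space: appended
    at com.example.Main.main(Main.java:3)          <- leading space: appended
```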

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.path: "/var/lib/docker/containers"
      symlinks: true
      containers.ids:
        - "*"
      multiline.pattern: '^DEBUG'
      multiline.negate: true
      multiline.match: 'after'
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "elasticsearch"    # use https:// if it is hosted ES
        - name: ELASTICSEARCH_PORT
          value: "9243"
        - name: ELASTICSEARCH_USERNAME
          value: "elasticsearch"
        - name: ELASTICSEARCH_PASSWORD
          value: "changeme"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlogpods
          mountPath: /var/log/pods
          readOnly: true
        - name: varlogcontainers
          mountPath: /var/log/containers
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlogpods
        hostPath:
          path: /var/log/pods
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # The data folder stores a registry of read status for all files,
      # so we don't resend everything when a Filebeat pod restarts.
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

GitLab setup for Kubernetes

GitLab is the first single application built from the ground up for all stages of the DevOps lifecycle, letting Product, Development, QA, Security, and Operations teams work concurrently on the same project. GitLab enables teams to collaborate from a single conversation instead of managing multiple threads across different tools, and provides a single data store, one user interface, and one permission model across the DevOps lifecycle — significantly reducing cycle time so teams can focus on building great software quickly.

Here is a quick tutorial that will teach you how to deploy GitLab in a Kubernetes environment.

Requirements:

1. Working Kubernetes Cluster

2. Storage Class: We will use it for stateful deployment.

Here is the Git repository I used to build GitLab.

# Clone the gitlab_k8s repo
git clone https://github.com/jaganthoutam/gitlab_k8s.git
cd gitlab_k8s

gitlab-pvc.yaml contains the PV and PVC volumes for postgres, gitlab, and redis.

Here I am using my NFS server (nfs01.thoutam.loc)…

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-gitlab
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/gitlab"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab-post
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-gitlab-post
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/gitlab-post"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab-redis
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-gitlab-redis
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/gitlab-redis"
---

You can use other storage classes based on your cloud provider.
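For example, on a cloud provider you could replace each NFS PV/PVC pair with a single dynamically provisioned claim (a sketch; the storageClassName `gp2` is an AWS-flavoured assumption — substitute whatever class your cluster offers):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab
spec:
  accessModes:
    - ReadWriteOnce
  # Let the cloud StorageClass provision the backing volume:
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi
```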

gitlab-rc.yml contains the GitLab Deployment config:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab
spec:
  replicas: 1
#  selector:
#    name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
      - name: gitlab
        image: jaganthoutam/gitlab:11.1.4
        env:
        - name: TZ
          value: Asia/Kolkata
        - name: GITLAB_TIMEZONE
          value: Kolkata

        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: long-and-random-alpha-numeric-string  #CHANGE ME
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: long-and-random-alpha-numeric-string #CHANGE ME
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: long-and-random-alpha-numeric-string #CHANGE ME


        - name: GITLAB_ROOT_PASSWORD
          value: password               #CHANGE ME
        - name: GITLAB_ROOT_EMAIL
          value: [email protected]        #CHANGE ME

        - name: GITLAB_HOST
          value: gitlab.lb.thoutam.loc  #CHANGE ME
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SSH_PORT
          value: "22"

        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"

        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: "01:00"

        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: postgresql
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production

        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"

        - name: SMTP_ENABLED
          value: "false"
        - name: SMTP_DOMAIN
          value: www.example.com
        - name: SMTP_HOST
          value: smtp.gmail.com
        - name: SMTP_PORT
          value: "587"
        - name: SMTP_USER
          value: [email protected]
        - name: SMTP_PASS
          value: password
        - name: SMTP_STARTTLS
          value: "true"
        - name: SMTP_AUTHENTICATION
          value: login

        - name: IMAP_ENABLED
          value: "false"
        - name: IMAP_HOST
          value: imap.gmail.com
        - name: IMAP_PORT
          value: "993"
        - name: IMAP_USER
          value: [email protected]
        - name: IMAP_PASS
          value: password
        - name: IMAP_SSL
          value: "true"
        - name: IMAP_STARTTLS
          value: "false"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 22
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
#NFS PVC name identifier
          claimName: nfs-gitlab

gitlab-svc.yml contains the GitLab Service. I used the default ports 80 and 22.

apiVersion: v1
kind: Service
metadata:
  name: gitlab
  labels:
    name: gitlab
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: ssh
      port: 22
      targetPort: ssh
  selector:
    name: gitlab

postgresql-rc.yml contains the Postgres ReplicationController config:

apiVersion: v1
kind: ReplicationController
metadata:
  name: postgresql
spec:
  replicas: 1
  selector:
    name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
      - name: postgresql
        image: jaganthoutam/postgresql:10
        env:
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: DB_EXTENSION
          value: pg_trgm
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
# NFS PVC identifier
          claimName: nfs-gitlab-post

postgresql-svc.yml contains the Postgres Service config:

apiVersion: v1
kind: Service
metadata:
  name: postgresql
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql

redis-rc.yml contains the Redis ReplicationController config:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: jaganthoutam/redis
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
#NFS PVC identifier
          claimName: nfs-gitlab-redis

redis-svc.yml contains the Redis Service config:

apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis

Change the configuration according to your needs and apply it with kubectl.

kubectl apply -f .

# Check whether your pods are running:
kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
gitlab-589cb45ff4-hch2g     1/1       Running   1          1d
postgres-55f6bcbb99-4x48g   1/1       Running   3          1d
postgresql-v2svn            1/1       Running   4          1d
redis-7r486                 1/1       Running   2          1d

Let me know if this helps you.


Jenkins (copy_reference_file.log Permission denied) issue in K8s

When you try to run the default Jenkins image (or jenkinsci/jenkins) with a persistent volume (NFS) mounted at /var/jenkins_home, it will currently fail:

kubectl logs jenkins-7786b4f8b6-cqf6q
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?

The above error can be fixed by adding a securityContext to the Jenkins Deployment:

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: jenkins
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000 
        runAsUser: 0
      containers:
        - name: jenkins
          image: jaganthoutam/jenkins:2.0
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          volumeMounts:
            - name: nfs-jenkins
              mountPath: "/var/jenkins_home"
      volumes:
      - name: nfs-jenkins
        persistentVolumeClaim:
          claimName: nfs-jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-jenkins
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-jenkins
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/jenkins"

Some useful git aliases

Here are some useful git aliases (these match the ones shipped with oh-my-zsh's git plugin)…

g=git
ga='git add'
gaa='git add --all'
gap='git apply'
gapa='git add --patch'
gau='git add --update'
gb='git branch'
gba='git branch -a'
gbd='git branch -d'
gbda='git branch --no-color --merged | command grep -vE "^(\*|\s*(master|develop|dev)\s*$)" | command xargs -n 1 git branch -d'
gbl='git blame -b -w'
gbnm='git branch --no-merged'
gbr='git branch --remote'
gbs='git bisect'
gbsb='git bisect bad'
gbsg='git bisect good'
gbsr='git bisect reset'
gbss='git bisect start'
gc='git commit -v'
'gc!'='git commit -v --amend'
gca='git commit -v -a'
'gca!'='git commit -v -a --amend'
gcam='git commit -a -m'
'gcan!'='git commit -v -a --no-edit --amend'
'gcans!'='git commit -v -a -s --no-edit --amend'
gcb='git checkout -b'
gcd='git checkout develop'
gcf='git config --list'
gcl='git clone --recursive'
gclean='git clean -fd'
gcm='git checkout master'
gcmsg='git commit -m'
'gcn!'='git commit -v --no-edit --amend'
gco='git checkout'
gcount='git shortlog -sn'
gcp='git cherry-pick'
gcpa='git cherry-pick --abort'
gcpc='git cherry-pick --continue'
gcs='git commit -S'
gcsm='git commit -s -m'
gd='git diff'
gdca='git diff --cached'
gdct='git describe --tags `git rev-list --tags --max-count=1`'
gdcw='git diff --cached --word-diff'
gdt='git diff-tree --no-commit-id --name-only -r'
gdw='git diff --word-diff'
gf='git fetch'
gfa='git fetch --all --prune'
gfo='git fetch origin'
gg='git gui citool'
gga='git gui citool --amend'
ggpull='git pull origin $(git_current_branch)'
ggpush='git push origin $(git_current_branch)'
ggsup='git branch --set-upstream-to=origin/$(git_current_branch)'
ghh='git help'
gignore='git update-index --assume-unchanged'
gignored='git ls-files -v | grep "^[[:lower:]]"'
git-svn-dcommit-push='git svn dcommit && git push github master:svntrunk'
gk='\gitk --all --branches'
gke='\gitk --all $(git log -g --pretty=%h)'
gl='git pull'
glg='git log --stat'
glgg='git log --graph'
glgga='git log --graph --decorate --all'
glgm='git log --graph --max-count=10'
glgp='git log --stat -p'
glo='git log --oneline --decorate'
glod='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset'\'
glods='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset'\'' --date=short'
glog='git log --oneline --decorate --graph'
gloga='git log --oneline --decorate --graph --all'
glol='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset'\'
glola='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset'\'' --all'
glp=_git_log_prettily
glum='git pull upstream master'
gm='git merge'
gma='git merge --abort'
gmom='git merge origin/master'
gmt='git mergetool --no-prompt'
gmtvim='git mergetool --no-prompt --tool=vimdiff'
gmum='git merge upstream/master'
gp='git push'
gpd='git push --dry-run'
gpoat='git push origin --all && git push origin --tags'
gpristine='git reset --hard && git clean -dfx'
gpsup='git push --set-upstream origin $(git_current_branch)'
gpu='git push upstream'
gpv='git push -v'
gr='git remote'
gra='git remote add'
grb='git rebase'
grba='git rebase --abort'
grbc='git rebase --continue'
grbd='git rebase develop'
grbi='git rebase -i'
grbm='git rebase master'
grbs='git rebase --skip'
grep='grep  --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'
grh='git reset'
grhh='git reset --hard'
grmv='git remote rename'
grrm='git remote remove'
grset='git remote set-url'
grt='cd $(git rev-parse --show-toplevel || echo ".")'
gru='git reset --'
grup='git remote update'
grv='git remote -v'
gsb='git status -sb'
gsd='git svn dcommit'
gsi='git submodule init'
gsps='git show --pretty=short --show-signature'
gsr='git svn rebase'
gss='git status -s'
gst='git status'
gsta='git stash save'
gstaa='git stash apply'
gstc='git stash clear'
gstd='git stash drop'
gstl='git stash list'
gstp='git stash pop'
gsts='git stash show --text'
gsu='git submodule update'
gts='git tag -s'
gtv='git tag | sort -V'
gunignore='git update-index --no-assume-unchanged'
gunwip='git log -n 1 | grep -q -c "\-\-wip\-\-" && git reset HEAD~1'
gup='git pull --rebase'
gupv='git pull --rebase -v'
gwch='git whatchanged -p --abbrev-commit --pretty=medium'
gwip='git add -A; git rm $(git ls-files --deleted) 2> /dev/null; git commit --no-verify -m "--wip-- [skip ci]"'
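Several of the aliases above (`ggpull`, `ggpush`, `ggsup`, `gpsup`) rely on a `git_current_branch` helper that oh-my-zsh provides. If you use these aliases outside oh-my-zsh, a minimal stand-in (a sketch, not the original implementation) is:

```shell
# Minimal stand-in for oh-my-zsh's git_current_branch helper:
# prints the current branch name, or nothing on a detached HEAD.
git_current_branch() {
  git symbolic-ref --short HEAD 2>/dev/null
}
```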

 


kubernetes cluster in VirtualBox(Ubuntu 16.04)

First, let's install Docker:

# Remove any older versions
sudo apt-get remove docker docker-engine docker.io
sudo apt autoremove

sudo apt-get update

#Add Docker’s official GPG key:
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 
#Verify that you now have the key with the fingerprint
sudo apt-key fingerprint 0EBFCD88

# Add the x86_64 / amd64 stable repo
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

#Install Docker-ce now.
sudo apt-get install docker-ce -y

Next, add Google's apt signing key for the Kubernetes packages:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Add the Kubernetes deb repository for Ubuntu 16.04 (xenial):

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list 
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Update apt and install Docker, kubelet, kubeadm, and kubectl:

apt-get update
apt-get install ebtables ethtool docker.io apt-transport-https curl
apt-get install -y kubelet kubeadm kubectl

Starting with Kubernetes v1.8.0, the kubelet will fail to start if the node has swap memory enabled. Discussion around why swap is not supported can be found in this issue.

Before performing an installation, you must disable swap memory on your nodes. If you want to run with swap memory enabled, you can override the Kubelet configuration in the plan file.

If you are performing an upgrade and you have swap enabled, you will have to decide whether you want to disable swap on all your nodes. If not, you must override the kubelet configuration to allow swap.

Override Kubelet Configuration

If you want to run your cluster nodes with swap memory enabled, you can override the Kubelet configuration in the plan file:

cluster:
  # ...
  kubelet:
    option_overrides:
      fail-swap-on: false
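To actually disable swap on a node, turn it off for the running system and comment out the swap entry in /etc/fstab so it stays off after a reboot. A minimal sketch — the destructive command is shown as a comment, and the fstab edit is demonstrated on a scratch copy so you can preview it before applying it as root to the real file:

```shell
# On a real node (as root):  swapoff -a
# Preview the /etc/fstab edit on a scratch copy first:
printf '%s\n' 'UUID=1234-5678 none swap sw 0 0' \
              '/dev/sda1 / ext4 defaults 0 1' > /tmp/fstab.demo
# Comment out any line containing a swap mount:
sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

If the result looks right, apply the same sed edit to /etc/fstab itself.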

Enable bridge-nf-call tables

vim /etc/ufw/sysctl.conf  
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1

Create the token on the master with "kubeadm token create" (tokens expire after some time, so be ready to create a new one), then join the node:

kubeadm join --token 7be225.9524040d34451e07 192.168.1.30:6443 --discovery-token-ca-cert-hash sha256:ade14df7b994c8eb0572677e094d3ba835bec37b33a5c2cadabf6e5e3417a522
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The cluster is ready for deployments now… SSH to the master and deploy your micro-services…


Kops AWS infra Automation

 

This example project will help you create a kops cluster across multiple AZs, limited to a single region.

Assume that you have the AWS CLI installed and an IAM user configured.

The IAM user that creates the Kubernetes cluster must have the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess

Pre-requirements:

  1. Terraform (note: you need version 0.11.7) https://www.terraform.io/downloads.html
  2. kops (we are using kops 1.8.1 for now) https://github.com/kubernetes/kops

For Mac

brew update && brew install kops

OR from GITHUB

curl -Lo kops https://github.com/kubernetes/kops/releases/download/1.8.1/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

For Linux

wget -O kops https://github.com/kubernetes/kops/releases/download/1.8.1/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
  3. Install kubectl https://kubernetes.io/docs/tasks/tools/install-kubectl/

For Mac

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

For Ubuntu

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Getting started

Replace with your public zone name

vim example/variables.tf
variable "domain_name" {
  default = "k8s.thoutam.com"
}

Edit cluster details. node_asg_desired,instance_key_name etc..

vim example/kops_clusters.tf

**** Edit the module according to your infra name ****

module "staging" {
  source                    = "../module"
  kubernetes_version        = "1.8.11"
  sg_allow_ssh              = "${aws_security_group.allow_ssh.id}"
  sg_allow_http_s           = "${aws_security_group.allow_http.id}"
  cluster_name              = "staging"
  cluster_fqdn              = "staging.${aws_route53_zone.k8s_zone.name}"
  route53_zone_id           = "${aws_route53_zone.k8s_zone.id}"
  kops_s3_bucket_arn        = "${aws_s3_bucket.kops.arn}"
  kops_s3_bucket_id         = "${aws_s3_bucket.kops.id}"
  vpc_id                    = "${aws_vpc.main_vpc.id}"
  instance_key_name         = "${var.key_name}"
  node_asg_desired          = 3
  node_asg_min              = 3
  node_asg_max              = 3
  master_instance_type      = "t2.medium"
  node_instance_type        = "m4.xlarge"
  internet_gateway_id       = "${aws_internet_gateway.public.id}"
  public_subnet_cidr_blocks = ["${local.staging_public_subnet_cidr_blocks}"]
  kops_dns_mode             = "private"
}

If you want to force a single master (useful when a master per AZ is not required, or when running in a region with only 2 AZs):

vim module/variables.tf 

**** Set force_single_master to true if you want a single master ****

variable "force_single_master" {
   default = true
  }

All good now. Run "terraform plan" to see if you get any errors; if everything is clean, run "terraform apply" to build the cluster.

cd example
terraform plan

(Output something like below)
  ......
  ......
  
  + module.staging.null_resource.delete_tf_files
      id:                                                 <computed>


Plan: 6 to add, 0 to change, 1 to destroy.

------------------------------------------------------------------------
  
  ......
  ......

Once the cluster is up, point kubectl at the master ELB:

MASTER_ELB_CLUSTER1=$(terraform state show module.staging.aws_elb.master | grep dns_name | cut -f2 -d= | xargs)
kubectl config set-cluster staging.k8s.thoutam.com --insecure-skip-tls-verify=true --server=https://$MASTER_ELB_CLUSTER1

And then test:

kubectl cluster-info
Kubernetes master is running at https://staging-master-999999999.eu-west-1.elb.amazonaws.com
KubeDNS is running at https://staging-master-999999999.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns/proxy

kubectl get nodes
NAME                                          STATUS    ROLES     AGE       VERSION
ip-172-20-25-99.eu-west-1.compute.internal    Ready     master    9m        v1.8.11
ip-172-20-26-11.eu-west-1.compute.internal    Ready     node      3m        v1.8.11
ip-172-20-26-209.eu-west-1.compute.internal   Ready     node      27s       v1.8.11
ip-172-20-27-107.eu-west-1.compute.internal   Ready     node      2m        v1.8.11

Credits: Original code is taken from here.


SaltStack

Setting up the Salt-Master

Salt servers come in two types: master and minion. The master is the server that hosts all of the policies and configurations and pushes them to the various minions. The minions are the infrastructure that you want managed. All of this is communicated via ZeroMQ; the communication is encrypted, and minions must be authenticated on the master before receiving any commands or configurations.

Installing on Ubuntu

I will be showing you how to install Salt on Ubuntu; however if you want to install Salt on other distributions you can find instructions and a bootstrap script at docs.saltstack.com.

Installing Python Software Properties

Saltstack maintains a PPA (Personal Package Archive) that can be added as an apt repository. On my systems before I could add a PPA Repository I had to install the python-software-properties package.

apt-get --yes -q install python-software-properties

Adding the SaltStack PPA Repository

add-apt-repository ppa:saltstack/salt
You are about to add the following PPA to your system:
 Salt, the remote execution and configuration management tool.
 More info: https://launchpad.net/~saltstack/+archive/salt
Press [ENTER] to continue or ctrl-c to cancel adding it

Make sure that you press [ENTER] otherwise the repository will not be added.

Update Apt’s Package Indexes

After adding the repository make sure that you update Apt’s package index.

apt-get --yes -q update

Install The Salt-Master package

apt-get --yes -q install salt-master

Configuring The Salt Master

Now that Salt has been installed, we will configure the master server. Unlike many other tools the configuration of SaltStack is pretty simple. This article is going to show a very simple “get you up and running” configuration. I will make sure to cover more advanced configurations in later articles.

In order to configure the salt master we will need to edit the /etc/salt/master configuration file.

vi /etc/salt/master

Changing the bind interface

Salt is not necessarily push only, the salt minions can also send requests to the salt master. In order to ensure that this happens we will need to tell salt which network interface to listen to.

Find:

# The address of the interface to bind to
#interface: 0.0.0.0

Replace with:

# The address of the interface to bind to
interface: youripaddress

Example:

# The address of the interface to bind to
interface: 192.168.100.102

Setting the states file_roots directory

All of salt’s policies or rather salt “states” need to live somewhere. The file_roots directory is the location on disk for these states. For this article we will place everything into /salt/states/base.

Find:

#file_roots:
#  base:
#    - /srv/salt

Replace with:

file_roots:
  base:
    - /salt/states/base

Not all states are the same, sometimes you may want a package to be configured one way in development and another in production. While we won’t be covering it yet in this article you can do this by using salt’s “environments” configuration.

Each salt master must have a base environment, this is used to house the top.sls file which defines which salt states apply to specific minions. The base environment is also used in general for states that would apply to all systems.

For example, I love the screen command and want it installed on every machine I manage. To do this I add the screen state into the base environment.
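As a sketch of that idea, a minimal screen state could live at /salt/states/base/screen/init.sls (the path and state name here are illustrative, following the same syntax used for the nginx state later in this article):

```yaml
# /salt/states/base/screen/init.sls -- hypothetical example state
screen:
  pkg:
    - installed
```

Because it sits in the base environment, this state can be assigned to every minion from the top.sls file.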

To add additional environments, simply append them to the file_roots configuration.

Adding the development environment:

file_roots:
  base:
    - /salt/states/base
  development:
    - /salt/states/dev

Setting the pillar_roots

While this article is not going to cover pillars (I will add more articles for salt don’t worry) I highly suggest configuring the pillar_roots directories as well. I have found that pillars are extremely useful for reusing state configuration and reducing the amount of unique state configurations.

Find:

#pillar_roots:
#  base:
#    - /srv/pillar

Replace:

pillar_roots:
  base:
    - /salt/pillars/base

Pillars also understand environments; the method for adding additional environments is the same as it was for file_roots.
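For a quick taste of what a pillar looks like, a hypothetical pillar file at /salt/pillars/base/nginx.sls might hold values that states can reuse (the key names here are invented for illustration, not from this article):

```yaml
# /salt/pillars/base/nginx.sls -- hypothetical pillar data
nginx:
  worker_processes: 4
```

A templated state file could then reference it with Jinja, e.g. `{{ pillar['nginx']['worker_processes'] }}`, letting one state serve many differently-configured minions.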

Restart the salt-master service

That’s all of the editing that we need to perform for a basic salt installation. For the settings to take effect we will need to restart the salt-master service.

root@saltmaster:~# service salt-master restart
 salt-master stop/waiting
 salt-master start/running, process 1036

Creating the salt states and pillars directories

Before we move on to the salt minion’s installation we should create the file_roots and pillar_roots directories that we specified in /etc/salt/master.

root@saltmaster:~# mkdir -p /salt/states/base /salt/pillars/base

Setting up the Salt-Minion

Now that the salt master is set up and configured, we will need to install the salt-minion package on all of the systems we want salt to manage for us. Theoretically, once these minions have been connected to the salt master, you could get away with never logging into these systems again.

Installing on Ubuntu

The below process can be repeated on as many minions as needed.

Installing Python Software Properties

root@saltminion:~# apt-get --yes -q install python-software-properties

Adding the SaltStack PPA Repository

root@saltminion:~# add-apt-repository ppa:saltstack/salt
You are about to add the following PPA to your system:
 Salt, the remote execution and configuration management tool.
 More info: https://launchpad.net/~saltstack/+archive/salt
Press [ENTER] to continue or ctrl-c to cancel adding it

Make sure that you press [ENTER] otherwise the repository will not be added.

Update Apt’s Package Indexes

After adding the repository make sure that you update Apt’s package index.

root@saltminion:~# apt-get --yes -q update

Install The Salt-Minion package

root@saltminion:~# apt-get --yes -q install salt-minion

Configuring the Salt-Minion

Configuring the salt minion is even easier than the salt master. In simple implementations like the one we are performing today all we need to do is set the salt master IP address.

root@saltminion:~# vi /etc/salt/minion

Changing the Salt-Master target IP

Find:

#master: salt

Replace with:

master: yourmasterip

Example:

master: 192.168.100.102

By default the salt-minion package will try to resolve the “salt” hostname. A simple trick is to set the “salt” hostname to resolve to your salt-master’s IP in the /etc/hosts file and allow the salt-master to push a corrected /etc/salt/minion configuration file. This trick lets you set up a salt minion server without having to edit the minion configuration file.
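For example, the /etc/hosts entry for that trick would look something like this (using the master IP from the earlier example):

```text
# /etc/hosts on the minion
192.168.100.102    salt
```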

Restarting the salt-minion service

In order for the configuration changes to take effect, we must restart the salt-minion service.

root@saltminion:~# service salt-minion restart
salt-minion stop/waiting
salt-minion start/running, process 834

Accepting the Minions key on the Salt-Master

Once the salt-minion service is restarted, the minion will start trying to communicate with the master. Before that can happen, we must accept the minion’s key on the master.

On the salt master, list the salt keys

We can see what keys are pending acceptance by running the salt-key command.

root@saltmaster:~# salt-key -L
Accepted Keys:
Unaccepted Keys:
saltminion
Rejected Keys:

Accept the saltminion’s key

There are two ways to accept the saltminion’s key: by the minion’s specific name, or by accepting all pending keys.

Accept by name:
root@saltmaster:~# salt-key -a saltminion
The following keys are going to be accepted:
Unaccepted Keys:
saltminion
Proceed? [n/Y] Y
Key for minion saltminion accepted.

Accept all keys:
root@saltmaster:~# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
saltminion
Proceed? [n/Y] Y
Key for minion saltminion accepted.
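Once the key is accepted, you can verify that the master can actually talk to the minion with salt’s test.ping function (the output below is what a healthy connection typically looks like, not captured from this exact setup):

```shell
root@saltmaster:~# salt 'saltminion' test.ping
saltminion:
    True
```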

Installing and Configuring nginx with SaltStack

While the above information gets you started with Salt, it doesn’t explain how to use Salt to install a package. The below steps will outline how to install a package and deploy configuration files using Salt.

Creating the nginx state

SaltStack has policies just like any other configuration automation tools, however in Salt they are referred to as “states”. You can think of these as the desired states of the items being configured.

Creating the nginx state directory and file

Each state in salt needs a sub-directory in the respective environment. Because we are going to use this state to install and configure nginx, I will name our state nginx and place it within our base environment.

root@saltmaster:~# mkdir /salt/states/base/nginx

Once the directory is created we will need to create the “init.sls” file.

root@saltmaster:~# vi /salt/states/base/nginx/init.sls

Specifying the nginx state

Now that we have the Salt State file open, we can start adding the desired state configuration. Salt State files use the YAML format by default, which makes them very easy to read and write.

Managing the nginx package and service

The following configuration will install the nginx package and ensure the nginx service is running. It will also watch the nginx package and the nginx.conf file for updates; if either of these items changes, the nginx service will be automatically restarted the next time salt runs against the minions.

Add the following to init.sls:

nginx:
  pkg:
    - installed
  service:
    - running
    - watch:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf

The configuration is dead simple, but just for clarity I will comment each line to explain how this works.

nginx: ## This is the name of the package and service
  pkg: ## Tells salt this is a package
    - installed ## Tells salt to install this package
  service: ## Tells salt this is also a service
    - running ## Tells salt to ensure the service is running
    - watch: ## Tells salt to watch the following items
      - pkg: nginx ## If the package nginx gets updated, restart the service
      - file: /etc/nginx/nginx.conf ## If the file nginx.conf gets updated, restart the service

With configuration this simple, a Jr. Sysadmin can install nginx on 100 nodes in less than 5 minutes.

Managing the nginx.conf file

Salt can do more than just install a package and make sure a service is running. Salt can also be used to deploy configuration files. Using our nginx example we will also configure salt to deploy our nginx.conf file for us.

The below configuration, when added to the init.sls, will tell salt to deploy an nginx.conf file to the minion using the /salt/states/base/nginx/nginx.conf file as the source.

Append the following to the same init.sls:

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://nginx/nginx.conf
    - user: root
    - group: root
    - mode: 644

Again the configuration is dead simple, but let us break this one down as well.

/etc/nginx/nginx.conf: ## Name of the file
  file: ## Tells salt this is a file
    - managed ## Tells salt to manage this file
    - source: salt://nginx/nginx.conf ## Tells salt where it can find a local copy on the master
    - user: root ## Tells salt to ensure the owner of the file is root
    - group: root ## Tells salt to ensure the group of the file is root
    - mode: 644 ## Tells salt to ensure the permissions of the file are 644

After appending the nginx.conf configuration into the Salt State file you can now save and quit the file.

Before continuing, make sure that you place your nginx.conf file into /salt/states/base/nginx/; if Salt cannot find the file then it will not deploy it. It is also worth noting that if the nginx.conf on the minion differs from the nginx.conf on the salt-master, then Salt will overwrite the file automatically on its next run. This means that the nginx.conf on the master is now your master copy.

Creating the top.sls file

The top.sls file is the Salt State configuration file; it defines which States should be in effect on specific minions. By convention, the top.sls file usually lives in the base environment.

To add our nginx state to our salt-minion we will perform the following steps.

Create the top.sls file

root@saltmaster:~# vi /salt/states/base/top.sls

Append the following:

base:
  'saltminion*':
    - nginx

The configuration, much like the Salt State files, is very simple. Let’s break down the configuration a bit more though.

base: ## Tells salt what environment the following lines are for
  'saltminion*': ## Tells salt to apply the following to any hosts matching a hostname of saltminion*
    - nginx ## Tells salt to apply the nginx state to these hosts

That’s it, we are done configuring salt stack.

Apply The Salt States

Unlike some other configuration management tools, SaltStack does not automatically apply the state configurations by default, though it can be configured to do so.
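If you do want periodic, automatic runs, Salt ships a scheduler. A sketch of a minion-side /etc/salt/minion entry follows (the job name and 60-minute interval are arbitrary choices for illustration):

```yaml
# /etc/salt/minion -- hypothetical scheduler entry:
# run state.highstate on this minion every 60 minutes
schedule:
  highstate:
    function: state.highstate
    minutes: 60
```

Restart the salt-minion service after adding this for it to take effect.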

To apply our nginx configuration, run the following command:

root@saltmaster:~# salt '*' state.highstate
saltminion:
----------
 State: - file
 Name: /etc/nginx/nginx.conf
 Function: managed
 Result: True
 Comment: File /etc/nginx/nginx.conf is in the correct state
 Changes: 
----------
 State: - pkg
 Name: nginx
 Function: installed
 Result: True
 Comment: The following packages were installed/updated: nginx.
 Changes: nginx-full: { new : 1.1.19-1ubuntu0.2
old : 
}
 httpd: { new : 1
old : 
}
 nginx-common: { new : 1.1.19-1ubuntu0.2
old : 
}
 nginx: { new : 1.1.19-1ubuntu0.2
old : 
}

----------
 State: - service
 Name: nginx
 Function: running
 Result: True
 Comment: Started Service nginx
 Changes: nginx: True

That’s it, nginx is installed and configured. While this might have seemed like a lot of work just to install nginx, the real payoff comes when you expand your salt configuration to php, varnish, mysql client/server, nfs and plenty of other packages and services. At the end of the day SaltStack can save SysAdmins valuable time.


VirtualBox Disk resize


I run CentOS 7 in a VirtualBox VM without a desktop, so gparted was not an option; here is how I finally enlarged my /dev/mapper/centos-root partition.

Power off your CentOS virtual machine and go to the directory of your *.vdi image. If you don’t know where it is, look in the VirtualBox Manager GUI: VirtualBox -> Settings -> Storage -> *.vdi -> Location. For example, mine is located under ~/VirtualBox VMs/CentOS7/CentOS.vdi. Back up your image just in case anything goes wrong:

$ cp CentOS7.vdi CentOS7.backup.vdi   

#Resize your virtual storage size, e.g. 200 GB

$ VBoxManage modifyhd CentOS7.vdi --resize 204800 

#Power on your CentOS virtual machine, and check with the below command.

 $ sudo fdisk -l

 Device Boot      Start         End      Blocks   Id  System
   /dev/sda1   *        2048     1026047      512000   83  Linux
   /dev/sda2         1026048   209715199   104344576   8e  Linux LVM

Use fdisk utility to delete/create partitions

$ sudo fdisk /dev/sda    #You are in the fdisk utility interactive mode, issue following commands: (mostly just follow the default recommendation)

d - delete a partition

2 - select a partition to delete (/dev/sda2 here)

n - create a new partition

p - make it a primary partition

2 - make it on the same partition number as we deleted

<return> - set the starting block (by default)

<return> - set the ending block (by default)

w - write the partition table and leave the fdisk interactive mode, then reboot your CentOS machine
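For reference only, those same keystrokes can be scripted by piping them into fdisk. This is a sketch and it is destructive; double-check the device name and that the defaults match your layout before trying it:

```shell
# d,2 deletes /dev/sda2; n,p,2 recreates it; the two blank lines accept the
# default start/end blocks; w writes the table. DESTRUCTIVE -- verify first.
$ printf 'd\n2\nn\np\n2\n\n\nw\n' | sudo fdisk /dev/sda
```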

$ sudo reboot       
#Resize the physical volume and verify the new size

$ sudo pvresize /dev/sda2

$ sudo pvscan

Take a look at your logical volume mapping to see which volume you want to enlarge; in my case, /dev/mapper/centos-root. Resize the file system by adding the -r option, which will take care of the resizing for you:

$ sudo lvextend -r -l +100%FREE /dev/mapper/centos-root
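After the lvextend completes, you can confirm the new sizes with the standard LVM and filesystem tools (the exact numbers will depend on your VM):

```shell
$ sudo pvs     # physical volume should show the enlarged size
$ sudo lvs     # centos-root should show the reclaimed space
$ df -h /      # the mounted filesystem should reflect the new size
```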

Here you go… You did it…


Percona XtraDB Cluster


The cluster consists of nodes. The recommended configuration is at least 3 nodes, but you can run it with 2 nodes as well. Each node is a regular MySQL / Percona Server setup. The point is that you can convert your existing MySQL / Percona Server into a node and roll out a cluster using it as a base. Or the other way around: you can detach a node from the cluster and use it as just a regular server. Each node contains a full copy of the data.

Installation Steps

Debian and Ubuntu packages from Percona are signed with a key. Before using the repository, you should add the key to apt:

apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A

Create a dedicated Percona repository file /etc/apt/sources.list.d/percona.list (trusty):

deb http://repo.percona.com/apt trusty main
apt-get update
apt-get install percona-xtradb-cluster-56 percona-xtradb-cluster-galera-3.x

You should see something like this if the installation was successful:

* Starting MySQL (Percona XtraDB Cluster) database server mysqld     [ OK]

Now, edit the my.cnf file on node1 with the below template:

[mysqld]
datadir=/var/lib/mysql
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.X.X.1,10.X.X.2,10.X.X.3
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address 
wsrep_node_address=10.X.X.1
# SST method 
wsrep_sst_method=xtrabackup-v2
wsrep_node_name=node1
# Cluster name
wsrep_cluster_name=db_cluster
# Authentication for SST method
wsrep_sst_auth="billinguser:billingpass"
slow_query_log=1
slow_query_log_file=/var/log/mysqld-slow.log
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Now you can simply bootstrap (start the first node that will initiate the cluster):

/etc/init.d/mysql bootstrap-pxc
      or
service mysql bootstrap-pxc
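The config above authenticates SST as billinguser, so that account has to exist on the bootstrapped node before the other nodes try to join. A sketch of the grants that xtrabackup-based SST typically needs (run in the mysql client on node1):

```sql
CREATE USER 'billinguser'@'localhost' IDENTIFIED BY 'billingpass';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'billinguser'@'localhost';
FLUSH PRIVILEGES;
```

Once that is in place, the remaining nodes join the cluster with a normal `service mysql start`, each with its own wsrep_node_address and wsrep_node_name in my.cnf.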

Check the status with the below commands:

SHOW GLOBAL STATUS LIKE 'wsrep_%';
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
SHOW GLOBAL STATUS LIKE 'wsrep_ready';
SHOW GLOBAL STATUS LIKE 'wsrep_connected';
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';

Find Public IP


myip="$(dig +short myip.opendns.com @resolver1.opendns.com)"
echo "My WAN/Public IP address: ${myip}"

More…

curl ifconfig.me
curl icanhazip.com
curl ipecho.net/plain
curl ifconfig.co