kubectl commands from slack

How cool is it to run kubectl commands from a Slack channel… 🙂
This is not fully developed yet, but it comes in handy with dev and staging environments.
Let’s Begin.

Requirements:
1. Create a new Slack bot.
2. Create a Slack channel (not private) and get the channel ID. Use https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 to get the channel ID.
3. Add the Slack bot to the channel you created.
4. Then use the file below to create the K8s Deployment.

---
# Create service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubebot-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubebot-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
# ClusterRole reference. NOTE: cluster-admin is very powerful; consider a more restricted role
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubebot-user
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubebot
  labels:
    component: kubebot
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: kubebot
    spec:
      serviceAccountName: kubebot-user
      containers:
      - name: kubebot
        image: harbur/kubebot:0.1.0
        imagePullPolicy: Always
        env:
        # Create a secret with your slack bot token and reference it here
        - name: KUBEBOT_SLACK_TOKEN
          value: TOKEN_ID_THAT_WAS_CREATED
        # Alternatively, use this instead if you don't need to put channel ids in a secret; use a space as a separator
        # You get this from  https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 
        - name: KUBEBOT_SLACK_CHANNELS_IDS
          value: 'AABBCCDD'          
        # Specify slack admins that kubebot should listen to; use a space as a separator
        - name: KUBEBOT_SLACK_ADMINS_NICKNAMES
          value: 'jag test someone'
        # Specify valid kubectl commands that kubebot should support; use a space as a separator
        - name: KUBEBOT_SLACK_VALID_COMMANDS
          value: "get describe logs explain"

Let me know if this helps.

List triggers in PostgreSQL

This will return all the details you want to know:

select * from information_schema.triggers

or, if you want to filter to a specific table and sort the results, you can try

SELECT event_object_table
      ,trigger_name
      ,event_manipulation
      ,action_statement
      ,action_timing
FROM  information_schema.triggers
WHERE event_object_table = 'tableName'
ORDER BY event_object_table
     ,event_manipulation

The following returns the tables that have triggers:

select relname as table_with_trigger
from pg_class
where pg_class.oid in (
        select tgrelid
        from pg_trigger
        )
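
If you also want to skip PostgreSQL's internal constraint triggers, a variant of the same query (a sketch; pg_trigger.tgisinternal marks internal triggers) is:

select c.relname as table_name,
       t.tgname  as trigger_name
from pg_trigger t
join pg_class c on c.oid = t.tgrelid
where not t.tgisinternal;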

Kubernetes multiline logs for Elasticsearch (Kibana)

If you're having issues with multiline logs from Kubernetes, here is the solution for you.

The Kubernetes autodiscover provider watches for Kubernetes pods to start, update, and stop. These are the available fields on every event:

  • host
  • port
  • kubernetes.container.id
  • kubernetes.container.image
  • kubernetes.container.name
  • kubernetes.labels
  • kubernetes.namespace
  • kubernetes.node.name
  • kubernetes.pod.name

If the include_annotations setting is added to the provider config, the annotations listed there are added to the event.
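
For example, a provider config with include_annotations might look like this (the annotation name here is just an illustration):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      include_annotations: ["app.kubernetes.io/version"]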

Using the YAML file below, you can ship all K8s logs to Elasticsearch (hosted or in-house).

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #          hints.enabled: true
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: default
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline.pattern: '^[[:space:]]'
                  multiline.negate: false
                  multiline.match: after
    processors:
      - add_cloud_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      # Use https:// in the host if it is hosted ES
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9243}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.path: "/var/lib/docker/containers"
      symlinks: true
      containers.ids:
      - "*"
      multiline.pattern: '^DEBUG'
      multiline.negate: true
      multiline.match: 'after'
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "elasticsearch"    # Use https:// if it is hosted ES
        - name: ELASTICSEARCH_PORT
          value: "9243"
        - name: ELASTICSEARCH_USERNAME
          value: "elasticsearch"
        - name: ELASTICSEARCH_PASSWORD
          value: "changeme"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlogpods
          mountPath: /var/log/pods
          readOnly: true
        - name: varlogcontainers
          mountPath: /var/log/containers
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlogpods
        hostPath:
          path: /var/log/pods
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # The data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

---
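
Save everything above as, say, filebeat-kubernetes.yaml (the file name is an assumption), then deploy and verify the DaemonSet:

kubectl apply -f filebeat-kubernetes.yaml
kubectl --namespace=kube-system get ds/filebeat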

GitLab setup for Kubernetes

GitLab is the first single application built from the ground up for all stages of the DevOps lifecycle, enabling Product, Development, QA, Security, and Operations teams to work concurrently on the same project. GitLab lets teams collaborate in a single conversation instead of managing multiple threads across different tools, and provides a single data store, one user interface, and one permission model across the DevOps lifecycle, significantly reducing cycle time so teams can focus on building great software quickly.

Here is a quick tutorial that shows how to deploy GitLab in a Kubernetes environment.

Requirements:

1. Working Kubernetes Cluster

2. Storage class: we will use it for the stateful deployments.

Here is the Git repository that I used to build GitLab:

#Clone the gitlab_k8s repo
git clone https://github.com/jaganthoutam/gitlab_k8s.git
cd gitlab_k8s

gitlab-pvc.yaml contains the PV and PVC volumes for postgres, gitlab, and redis. Here I am using my NFS server (nfs01.thoutam.loc)…

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-gitlab
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/gitlab"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab-post
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-gitlab-post
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/gitlab-post"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab-redis
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-gitlab-redis
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/gitlab-redis"
---

You can use other storage classes depending on your cloud provider.
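
For example, with dynamic provisioning you can drop the PersistentVolume objects entirely and just set storageClassName on the claim; a sketch (the class name is an assumption and must support ReadWriteMany if you keep that access mode):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-gitlab
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard   # assumption: replace with your provider's class
  resources:
    requests:
      storage: 20Gi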

gitlab-rc.yml contains the GitLab Deployment config:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab
spec:
  replicas: 1
#  selector:
#    name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
      - name: gitlab
        image: jaganthoutam/gitlab:11.1.4
        env:
        - name: TZ
          value: Asia/Kolkata
        - name: GITLAB_TIMEZONE
          value: Kolkata

        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: long-and-random-alpha-numeric-string  #CHANGE ME
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: long-and-random-alpha-numeric-string #CHANGE ME
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: long-and-random-alpha-numeric-string #CHANGE ME


        - name: GITLAB_ROOT_PASSWORD
          value: password               #CHANGE ME
        - name: GITLAB_ROOT_EMAIL
          value: [email protected]        #CHANGE ME

        - name: GITLAB_HOST
          value: gitlab.lb.thoutam.loc  #CHANGE ME
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SSH_PORT
          value: "22"

        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"

        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: "01:00"

        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: postgresql
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production

        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"

        - name: SMTP_ENABLED
          value: "false"
        - name: SMTP_DOMAIN
          value: www.example.com
        - name: SMTP_HOST
          value: smtp.gmail.com
        - name: SMTP_PORT
          value: "587"
        - name: SMTP_USER
          value: [email protected]
        - name: SMTP_PASS
          value: password
        - name: SMTP_STARTTLS
          value: "true"
        - name: SMTP_AUTHENTICATION
          value: login

        - name: IMAP_ENABLED
          value: "false"
        - name: IMAP_HOST
          value: imap.gmail.com
        - name: IMAP_PORT
          value: "993"
        - name: IMAP_USER
          value: [email protected]
        - name: IMAP_PASS
          value: password
        - name: IMAP_SSL
          value: "true"
        - name: IMAP_STARTTLS
          value: "false"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 22
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
#NFS PVC name identifier
          claimName: nfs-gitlab
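
Rather than baking the CHANGE ME values into the manifest, you could keep them in a Kubernetes Secret and reference them with valueFrom.secretKeyRef; a minimal sketch (the secret and key names are my own):

# Hypothetical: keep GitLab's key bases out of the manifest
kubectl create secret generic gitlab-secrets \
  --from-literal=db-key-base=$(openssl rand -hex 64) \
  --from-literal=secret-key-base=$(openssl rand -hex 64) \
  --from-literal=otp-key-base=$(openssl rand -hex 64)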

gitlab-svc.yml contains the GitLab Service. I used the default ports 80 and 22.

apiVersion: v1
kind: Service
metadata:
  name: gitlab
  labels:
    name: gitlab
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: ssh
      port: 22
      targetPort: ssh
  selector:
    name: gitlab

postgresql-rc.yml contains the Postgres ReplicationController config:

apiVersion: v1
kind: ReplicationController
metadata:
  name: postgresql
spec:
  replicas: 1
  selector:
    name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
      - name: postgresql
        image: jaganthoutam/postgresql:10
        env:
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: DB_EXTENSION
          value: pg_trgm
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
# NFS PVC identifier
          claimName: nfs-gitlab-post

postgresql-svc.yml contains the Postgres Service config:

apiVersion: v1
kind: Service
metadata:
  name: postgresql
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql

redis-rc.yml contains the Redis ReplicationController config:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: jaganthoutam/redis
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
#NFS PVC identifier
          claimName: nfs-gitlab-redis

redis-svc.yml contains the Redis Service config:

apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis

Change the configuration according to your needs and apply using kubectl.

kubectl apply -f .

# Check that your pods are running

root@k8smaster-01:~# kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
gitlab-589cb45ff4-hch2g     1/1       Running   1          1d
postgres-55f6bcbb99-4x48g   1/1       Running   3          1d
postgresql-v2svn            1/1       Running   4          1d
redis-7r486                 1/1       Running   2          1d

Let me know if this helps you.

Jenkins (copy_reference_file.log Permission denied) issue in K8s

Trying to run the default Jenkins image (or jenkinsci/jenkins) with a persistent volume (NFS) mounted at /var/jenkins_home will currently fail:

root@k8smaster-01:/etc/kubernetes# kubectl logs jenkins-7786b4f8b6-cqf6q

touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied

Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?

The above error can be fixed by adding a securityContext to the Jenkins Deployment.

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: jenkins
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000 
        runAsUser: 0
      containers:
        - name: jenkins
          image: jaganthoutam/jenkins:2.0
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          volumeMounts:
            - name: nfs-jenkins
              mountPath: "/var/jenkins_home"
      volumes:
      - name: nfs-jenkins
        persistentVolumeClaim:
          claimName: nfs-jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-jenkins
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-jenkins
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs01.thoutam.loc
    # Exported path of your NFS server
    path: "/mnt/jenkins"

Some useful git aliases

Here are some useful git aliases…

g=git
ga='git add'
gaa='git add --all'
gap='git apply'
gapa='git add --patch'
gau='git add --update'
gb='git branch'
gba='git branch -a'
gbd='git branch -d'
gbda='git branch --no-color --merged | command grep -vE "^(\*|\s*(master|develop|dev)\s*$)" | command xargs -n 1 git branch -d'
gbl='git blame -b -w'
gbnm='git branch --no-merged'
gbr='git branch --remote'
gbs='git bisect'
gbsb='git bisect bad'
gbsg='git bisect good'
gbsr='git bisect reset'
gbss='git bisect start'
gc='git commit -v'
'gc!'='git commit -v --amend'
gca='git commit -v -a'
'gca!'='git commit -v -a --amend'
gcam='git commit -a -m'
'gcan!'='git commit -v -a --no-edit --amend'
'gcans!'='git commit -v -a -s --no-edit --amend'
gcb='git checkout -b'
gcd='git checkout develop'
gcf='git config --list'
gcl='git clone --recursive'
gclean='git clean -fd'
gcm='git checkout master'
gcmsg='git commit -m'
'gcn!'='git commit -v --no-edit --amend'
gco='git checkout'
gcount='git shortlog -sn'
gcp='git cherry-pick'
gcpa='git cherry-pick --abort'
gcpc='git cherry-pick --continue'
gcs='git commit -S'
gcsm='git commit -s -m'
gd='git diff'
gdca='git diff --cached'
gdct='git describe --tags `git rev-list --tags --max-count=1`'
gdcw='git diff --cached --word-diff'
gdt='git diff-tree --no-commit-id --name-only -r'
gdw='git diff --word-diff'
gf='git fetch'
gfa='git fetch --all --prune'
gfo='git fetch origin'
gg='git gui citool'
gga='git gui citool --amend'
ggpull='git pull origin $(git_current_branch)'
ggpush='git push origin $(git_current_branch)'
ggsup='git branch --set-upstream-to=origin/$(git_current_branch)'
ghh='git help'
gignore='git update-index --assume-unchanged'
gignored='git ls-files -v | grep "^[[:lower:]]"'
git-svn-dcommit-push='git svn dcommit && git push github master:svntrunk'
gk='\gitk --all --branches'
gke='\gitk --all $(git log -g --pretty=%h)'
gl='git pull'
glg='git log --stat'
glgg='git log --graph'
glgga='git log --graph --decorate --all'
glgm='git log --graph --max-count=10'
glgp='git log --stat -p'
glo='git log --oneline --decorate'
glod='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset'\'
glods='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset'\'' --date=short'
glog='git log --oneline --decorate --graph'
gloga='git log --oneline --decorate --graph --all'
glol='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset'\'
glola='git log --graph --pretty='\''%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset'\'' --all'
glp=_git_log_prettily
glum='git pull upstream master'
gm='git merge'
gma='git merge --abort'
gmom='git merge origin/master'
gmt='git mergetool --no-prompt'
gmtvim='git mergetool --no-prompt --tool=vimdiff'
gmum='git merge upstream/master'
gp='git push'
gpd='git push --dry-run'
gpoat='git push origin --all && git push origin --tags'
gpristine='git reset --hard && git clean -dfx'
gpsup='git push --set-upstream origin $(git_current_branch)'
gpu='git push upstream'
gpv='git push -v'
gr='git remote'
gra='git remote add'
grb='git rebase'
grba='git rebase --abort'
grbc='git rebase --continue'
grbd='git rebase develop'
grbi='git rebase -i'
grbm='git rebase master'
grbs='git rebase --skip'
grep='grep  --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'
grh='git reset'
grhh='git reset --hard'
grmv='git remote rename'
grrm='git remote remove'
grset='git remote set-url'
grt='cd $(git rev-parse --show-toplevel || echo ".")'
gru='git reset --'
grup='git remote update'
grv='git remote -v'
gsb='git status -sb'
gsd='git svn dcommit'
gsi='git submodule init'
gsps='git show --pretty=short --show-signature'
gsr='git svn rebase'
gss='git status -s'
gst='git status'
gsta='git stash save'
gstaa='git stash apply'
gstc='git stash clear'
gstd='git stash drop'
gstl='git stash list'
gstp='git stash pop'
gsts='git stash show --text'
gsu='git submodule update'
gts='git tag -s'
gtv='git tag | sort -V'
gunignore='git update-index --no-assume-unchanged'
gunwip='git log -n 1 | grep -q -c "\-\-wip\-\-" && git reset HEAD~1'
gup='git pull --rebase'
gupv='git pull --rebase -v'
gwch='git whatchanged -p --abbrev-commit --pretty=medium'
gwip='git add -A; git rm $(git ls-files --deleted) 2> /dev/null; git commit --no-verify -m "--wip-- [skip ci]"'
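
Several of these aliases (ggpull, ggpush, gpsup) call a git_current_branch helper that oh-my-zsh defines; if you want them in plain bash, a minimal stand-in is:

# Minimal equivalent of oh-my-zsh's git_current_branch helper
git_current_branch() {
  git symbolic-ref --short HEAD 2>/dev/null
}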

 


Kubernetes cluster in VirtualBox (Ubuntu 16.04)


First, let's install Docker:

#Remove any older versions
sudo apt-get remove docker docker-engine docker.io
sudo apt autoremove

sudo apt-get update

#Add Docker’s official GPG key:
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 
#Verify that you now have the key with the fingerprint
sudo apt-key fingerprint 0EBFCD88

# Add x86_64 / amd64 stable repo
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

#Install Docker-ce now.
sudo apt-get install docker-ce -y
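
You can verify the Docker installation with the usual hello-world container:

sudo docker run hello-world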

Now add the apt signing key for the Kubernetes packages:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Add the Kubernetes deb repository for Ubuntu 16.04 (xenial):

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list 
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Update and install the prerequisites plus kubelet, kubeadm, and kubectl (skip docker.io if you already installed docker-ce above):

apt-get update
apt-get install ebtables ethtool docker.io apt-transport-https curl
apt-get install -y kubelet kubeadm kubectl
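
Optionally, pin the Kubernetes packages so a routine apt upgrade doesn't bump them unexpectedly:

sudo apt-mark hold kubelet kubeadm kubectl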

Starting with Kubernetes v1.8.0, the Kubelet will fail to start up if the nodes have swap memory enabled. Discussion around why swap is not supported can be found in this issue.

Before performing an installation, you must disable swap memory on your nodes. If you want to run with swap memory enabled, you can override the Kubelet configuration in the plan file.

If you are performing an upgrade and you have swap enabled, you will have to decide whether you want to disable swap on all your nodes. If not, you must override the kubelet configuration to allow swap.

Override Kubelet Configuration

If you want to run your cluster nodes with swap memory enabled, you can override the Kubelet configuration in the plan file:

cluster:
  # ...
  kubelet:
    option_overrides:
      fail-swap-on: false

Enable bridge-nf-call tables

vim /etc/ufw/sysctl.conf  
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
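
To apply these settings without a reboot (assuming the br_netfilter module is available):

sudo modprobe br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.bridge.bridge-nf-call-ip6tables=1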

Create the token with "kubeadm token create". Tokens expire after some time, so be ready to create a new one when joining nodes.
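
On the master, initialize the cluster first; kubeadm prints a matching join command when it finishes (the pod CIDR below is an assumption that suits flannel):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Then join the worker nodes and set up kubectl on the master: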

kubeadm join --token 7be225.9524040d34451e07 192.168.1.30:6443 --discovery-token-ca-cert-hash sha256:ade14df7b994c8eb0572677e094d3ba835bec37b33a5c2cadabf6e5e3417a522
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The cluster is ready for deployments now… SSH to the master and deploy your microservices.

Kops AWS infra Automation

 

This example project will help you create a kops cluster across multiple AZs, but limited to a single region.

This assumes you have the AWS CLI installed and an IAM user configured.

The IAM user that creates the Kubernetes cluster must have the following permissions (a sketch for attaching them follows the list):

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess
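
A sketch for creating such a user with the AWS CLI (the user name "kops" is an assumption):

aws iam create-user --user-name kops
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  aws iam attach-user-policy --user-name kops \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
aws iam create-access-key --user-name kops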

Pre-requirements:

  1. Terraform (note: you need version 0.11.7) https://www.terraform.io/downloads.html
  2. Install kops (we are using kops 1.8.1 for now) https://github.com/kubernetes/kops

For Mac

brew update && brew install kops

OR from GITHUB

curl -Lo kops https://github.com/kubernetes/kops/releases/download/1.8.1/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

For Linux

wget -O kops https://github.com/kubernetes/kops/releases/download/1.8.1/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

  3. Install kubectl https://kubernetes.io/docs/tasks/tools/install-kubectl/

For Mac

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

For Ubuntu

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Getting started

Replace the domain below with your public zone name:

vim example/variables.tf
variable "domain_name" {
  default = "k8s.thoutam.com"
}

Edit the cluster details: node_asg_desired, instance_key_name, etc.

vim example/kops_clusters.tf

**** Edit the module according to your infra name ****

module "staging" {
  source                    = "../module"
  kubernetes_version        = "1.8.11"
  sg_allow_ssh              = "${aws_security_group.allow_ssh.id}"
  sg_allow_http_s           = "${aws_security_group.allow_http.id}"
  cluster_name              = "staging"
  cluster_fqdn              = "staging.${aws_route53_zone.k8s_zone.name}"
  route53_zone_id           = "${aws_route53_zone.k8s_zone.id}"
  kops_s3_bucket_arn        = "${aws_s3_bucket.kops.arn}"
  kops_s3_bucket_id         = "${aws_s3_bucket.kops.id}"
  vpc_id                    = "${aws_vpc.main_vpc.id}"
  instance_key_name         = "${var.key_name}"
  node_asg_desired          = 3
  node_asg_min              = 3
  node_asg_max              = 3
  master_instance_type      = "t2.medium"
  node_instance_type        = "m4.xlarge"
  internet_gateway_id       = "${aws_internet_gateway.public.id}"
  public_subnet_cidr_blocks = ["${local.staging_public_subnet_cidr_blocks}"]
  kops_dns_mode             = "private"
}

If you want to force a single master (useful when a master per AZ is not required, or when running in a region with only 2 AZs):

vim module/variables.tf 

**** set force_single_master to true if you want a single master ****

variable "force_single_master" {
   default = true
  }

All good now. Run terraform plan to see if you get any errors. If everything is clean, just run "terraform apply" to build the cluster.

cd example
terraform plan

(Output something like below)
  ......
  ......
  
  + module.staging.null_resource.delete_tf_files
      id:                                                 <computed>


Plan: 6 to add, 0 to change, 1 to destroy.

------------------------------------------------------------------------
  
  ......
  ......
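
If the plan output looks right, apply it to build the cluster:

terraform apply

Once the apply finishes, point kubectl at the new master ELB: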

MASTER_ELB_CLUSTER1=$(terraform state show module.staging.aws_elb.master | grep dns_name | cut -f2 -d= | xargs)
kubectl config set-cluster staging.k8s.thoutam.com --insecure-skip-tls-verify=true --server=https://$MASTER_ELB_CLUSTER1

And then test:

kubectl cluster-info
Kubernetes master is running at https://staging-master-999999999.eu-west-1.elb.amazonaws.com
KubeDNS is running at https://staging-master-999999999.eu-west-1.elb.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns/proxy

kubectl get nodes
NAME                                          STATUS    ROLES     AGE       VERSION
ip-172-20-25-99.eu-west-1.compute.internal    Ready     master    9m        v1.8.11
ip-172-20-26-11.eu-west-1.compute.internal    Ready     node      3m        v1.8.11
ip-172-20-26-209.eu-west-1.compute.internal   Ready     node      27s       v1.8.11
ip-172-20-27-107.eu-west-1.compute.internal   Ready     node      2m        v1.8.11

Credits: Original code is taken from here.