Gitlab & Runner Install with Private CA SSL

This installation method is used on an AWS EKS cluster to install Gitlab and the Gitlab Kubernetes executors (runners).

Tech stack used in this installation:

  • EKS cluster (2 nodes)
  • Controller EC2 instance (to manage the EKS cluster)
  • Helm (for the Gitlab installation)
  • SSL certs (self-signed / SSL provider / private CA)

EKS Cluster:

Creating the EKS cluster is not part of this discussion; please follow this EKS cluster creation doc.

Controller EC2 Instance:

Create an EC2 instance with your preferred AMI; in this case I am using the Amazon Linux AMI. (Make sure the EKS cluster and the controller are in the same VPC.) In order to manage the EKS cluster you need kubectl installed on the EC2 instance, and you also need to import the kubeconfig from the cluster. Let's see how we can do that.

We will also be using Helm to install Gitlab.

Install Kubectl:

https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
yum install bash-completion
kubectl version --client

Install Kubectl bash completion:

yum install bash-completion
type _init_completion
source /usr/share/bash-completion/bash_completion
type _init_completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl

Get the EKS cluster list and import the kubeconfig (replace --name with your cluster name):

aws eks update-kubeconfig --name <NAME OF THE EKS CLUSTER >
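If you are not sure of the cluster name, list the clusters first and then confirm that kubectl can reach the cluster. A minimal sketch, assuming the AWS CLI is already configured with credentials and a region (the cluster name below is a placeholder):

# List EKS clusters visible to your credentials
aws eks list-clusters

# Import the kubeconfig and verify connectivity
aws eks update-kubeconfig --name my-eks-cluster
kubectl config current-context
kubectl get nodes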

Install Helm:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
cp /usr/local/bin/helm /usr/bin/

Install Helm Auto completion:

helm completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
source <(helm completion bash)

Now the EC2 instance is ready for the Gitlab installation. Before installing Gitlab on EKS, let's create the TLS and generic secrets for Gitlab and Gitlab-runner.

You can use any other SSL provider (Let's Encrypt, Digicert, Comodo, …). Here I am using self-signed certificates. You can generate self-signed certificates with this link.
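For reference, a matching self-signed certificate for this domain can be generated with openssl. A minimal sketch, assuming the gitlab.gitlabtesting.com domain used in this post (the -addext flag needs OpenSSL 1.1.1+):

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout gitlab.gitlabtesting.com.key \
  -out gitlab.gitlabtesting.com.crt \
  -subj "/CN=gitlab.gitlabtesting.com" \
  -addext "subjectAltName=DNS:gitlab.gitlabtesting.com"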

Create a TLS secret for the Gitlab Helm chart's global values:

kubectl create secret tls gitlab-self-signed --cert=gitlab.gitlabtesting.com.crt --key=gitlab.gitlabtesting.com.key

Here we created a secret named gitlab-self-signed with the cert and key. This is the preferred way of mounting the SSL certificate into the Ingress.

Create an SSL generic cert secret:

This will be used for communication between the Gitlab server and the Gitlab-runner via SSL. (IMPORTANT: make sure the filename you mount matches the domain.) In this case my domain name is gitlab.gitlabtesting.com.

kubectl create secret generic gitlabsr-runner-certs-secret-3 --from-file=gitlab.gitlabtesting.com.crt=gitlab.gitlabtesting.com.crt
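Before moving on, it is worth confirming that both secrets exist and contain the expected keys; for example:

# TLS secret used by the Ingress (should show tls.crt and tls.key)
kubectl describe secret gitlab-self-signed

# Generic secret used by the runner; the key name must match the domain
kubectl describe secret gitlabsr-runner-certs-secret-3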

Create a service account (this will be used by gitlab-runner to perform actions):

vim gitlab-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: kube-system

kubectl apply -f gitlab-serviceaccount.yaml
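After applying, you can verify that the service account exists and that the binding actually grants it cluster-admin; for example:

kubectl get serviceaccount gitlab -n kube-system
kubectl get clusterrolebinding gitlab-admin

# Should print "yes" if the binding is in place
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:gitlab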

Now that everything is ready, let's create a values.yaml for the Gitlab values.

An example file is shown below.

certmanager-issuer:
  email: [email protected]
certmanager:
  install: false
gitlab:
  sidekiq:
    resources:
      requests:
        cpu: 50m
        memory: 650M
  webservice:
    ingress:
      tls:
        secretName: gitlab-self-signed # TLS secret we created above
    resources:
      requests:
        memory: 1.5G
gitlab-runner:
  install: false
  runners:
    privileged: true
global:
  hosts:
    domain: gitlabtesting.com
  ingress:
    tls:
      enabled: true
registry:
  enabled: false
  install: false
  ingress:
    tls:
      secretName: gitlab-self-signed # TLS secret we created above

Add the Gitlab Helm repo:

helm repo add gitlab https://charts.gitlab.io/

Install Gitlab with Helm using the values file we created above:

helm install gitlab gitlab/gitlab -f values.yaml

After about 5 minutes all the pods will be up. You can check with the command below, and also get the root password for the Gitlab login:

kubectl get po


#Get Root password:

kubectl get secret gitlab-gitlab-initial-root-password -ojsonpath='{.data.password}' | base64 --decode ; echo

The Gitlab installation is now complete. You can access Gitlab at https://gitlab.gitlabtesting.com.
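If gitlab.gitlabtesting.com does not resolve publicly, point the hostname at the ingress load balancer the chart created. A rough sketch (the exact service name can differ between chart versions, and the /etc/hosts entry is only for quick testing):

# Find the external address of the NGINX ingress controller created by the chart
kubectl get svc | grep ingress

# Map the hostname to that address on the machine you browse from (or create a proper DNS record)
echo "<INGRESS_ADDRESS>  gitlab.gitlabtesting.com" | sudo tee -a /etc/hosts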

To be continued…


Kubernetes pod DNS issue with kube-flannel

kubectl -n kube-system logs coredns-6fdfb45d56-2rsxc


.:53
[INFO] plugin/reload: Running configuration MD5 = 8b19e11d5b2a72fb8e63383b064116a1
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:59204->192.168.0.115:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:51845->192.168.0.116:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:49404->192.168.0.115:53: i/o timeout

Debugging DNS Resolution

kubectl exec -ti dnsutils -- nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'
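The dnsutils pod used above is not present by default; it can be created from the upstream Kubernetes DNS-debugging example, e.g.:

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl get pods dnsutils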

How to solve it?

iptables -P FORWARD ACCEPT
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
systemctl restart docker
systemctl restart kubelet

Apply this on the nodes and the master, then check whether CoreDNS logging and DNS resolution are working again.
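Once the nodes are back, re-check CoreDNS and the in-cluster lookup to confirm the timeouts are gone; for example:

kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
kubectl exec -ti dnsutils -- nslookup kubernetes.default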


Docker Swarm Cluster Kong and Konga Cluster (Centos/8)

We are using kong-konga-compose to deploy a Kong cluster with Konga.

Preparation: execute the commands below on all nodes.

 systemctl stop firewalld
 systemctl disable firewalld
 systemctl status firewalld
 sed -i s/^SELINUX=.*$/SELINUX=permissive/ /etc/selinux/config
 setenforce 0
 yum update -y
 yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
 sudo curl  https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
 sudo yum makecache
 sudo dnf -y install docker-ce
 sudo dnf -y install  git
 sudo systemctl enable --now docker
 sudo curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
 sudo chmod +x /usr/local/bin/docker-compose && ln -sv /usr/local/bin/docker-compose /usr/bin/docker-compose
 sudo docker-compose --version
 sudo docker --version

On node01:

docker swarm init --advertise-addr MASTERNODEIP


OUTPUT:
  
docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377
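If you lose this output, the worker join command can be reprinted at any time from the manager node, for example:

docker swarm join-token worker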

On node02:

# Run the join command printed on the master node.
  
  docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377
  

On node01:

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
m55wcdrkq0ckmtovuxwsjvgl1 *   master01            Ready               Active              Leader              19.03.8
e9igg0l9tru83ygoys5qcpjv2     node01              Ready               Active                                  19.03.8
  

git clone https://github.com/jaganthoutam/kong-konga-compose.git
  
cd kong-konga-compose

docker stack deploy --compose-file=docker-compose-swarm.yaml kong

#Check Services
  
docker service ls
ID                  NAME                  MODE                REPLICAS            IMAGE                             PORTS
ahucq8qru2xx        kong_kong             replicated          1/1                 kong:1.4.3                        *:8000-8001->8000-8001/tcp, *:8443->8443/tcp
bhf0tdd36isg        kong_kong-database    replicated          1/1                 postgres:9.6.11-alpine
tij6peru7tb8        kong_kong-migration   replicated          0/1                 kong:1.4.3
n0gaj0l6jyac        kong_konga            replicated          1/1                 pantsel/konga:latest              *:1337->1337/tcp
83q1eybkhvvy        kong_konga-database   replicated          1/1                 mongo:4.1.5   
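Once all replicas are up, Kong's proxy and admin API answer on the published ports and Konga is reachable on port 1337; a quick smoke test, assuming you run it against the manager node:

# Kong admin API
curl -i http://MASTERNODEIP:8001/status

# Kong proxy (returns "no Route matched" until routes are configured)
curl -i http://MASTERNODEIP:8000

# Konga UI
curl -I http://MASTERNODEIP:1337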

Elasticsearch Filebeat custom index

Custom Template and Index pattern setup.

    setup.ilm.enabled: false               #Set ilm to False 
    setup.template.name: "k8s-dev"         #Create Custom Template
    setup.template.pattern: "k8s-dev-*"    #Create Custom Template pattern
    setup.template.settings:
      index.number_of_shards: 1    #Set number_of_shards 1, ONLY if you have ONE NODE ES
      index.number_of_replicas: 0    #Set number_of_replicas to 0, ONLY if you have ONE NODE ES
    output.elasticsearch:
       hosts: ['192.168.1.142:9200']
       index: "k8s-dev-%{+yyyy.MM.dd}" #Set k8s-dev-2020.01.01 as Index name

filebeat-kubernetes.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    setup.ilm.enabled: false
    setup.template.name: "k8s-dev"
    setup.template.pattern: "k8s-dev-*"
    setup.template.settings:
      index.number_of_shards: 1
      index.number_of_replicas: 0

    output.elasticsearch:
       hosts: ['192.168.1.142:9200']
       index: "k8s-dev-%{+yyyy.MM.dd}"

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "192.168.1.142"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
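Apply the manifest and confirm the DaemonSet is running and the custom index is being written; for example:

kubectl apply -f filebeat-kubernetes.yaml
kubectl -n kube-system get daemonset filebeat
kubectl -n kube-system get pods -l k8s-app=filebeat

# Check that the custom index shows up in Elasticsearch
curl 'http://192.168.1.142:9200/_cat/indices/k8s-dev-*?v'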

The index will appear in Kibana:

Create index pattern in Kibana:

Finally:…


Mac cleanup

#!/bin/bash
#user cache files
echo "cleaning user cache file from ~/Library/Caches"
rm -rf ~/Library/Caches/*
echo "done cleaning from ~/Library/Caches"
#user logs
echo "cleaning user log file from ~/Library/logs"
rm -rf ~/Library/logs/*
echo "done cleaning from ~/Library/logs"
#user preference log
echo "cleaning user preference logs"
#rm -rf ~/Library/Preferences/*
echo "done cleaning from /Library/Preferences"
#system caches
echo "cleaning system caches"
sudo rm -rf /Library/Caches/*
echo "done cleaning system cache"
#system logs
echo "cleaning system logs from /Library/logs"
sudo rm -rf /Library/logs/*
echo "done cleaning from /Library/logs"
echo "cleaning system logs from /var/log"
sudo rm -rf /var/log/*
echo "done cleaning from /var/log"
echo "cleaning from /private/var/folders"
sudo rm -rf /private/var/folders/*
echo "done cleaning from /private/var/folders"
#ios photo caches
echo "cleaning ios photo caches"
rm -rf ~/Pictures/iPhoto\ Library/iPod\ Photo\ Cache/*
echo "done cleaning from ~/Pictures/iPhoto Library/iPod Photo Cache"
#application caches
echo "cleaning application caches"
for x in $(ls ~/Library/Containers/) 
do 
    echo "cleaning ~/Library/Containers/$x/Data/Library/Caches/"
    rm -rf ~/Library/Containers/$x/Data/Library/Caches/*
    echo "done cleaning ~/Library/Containers/$x/Data/Library/Caches"
done
echo "done cleaning for application caches"

Docker alias

alias dm='docker-machine'
alias dmx='docker-machine ssh'
alias dk='docker'
alias dki='docker images'
alias dks='docker service'
alias dkrm='docker rm'
alias dkl='docker logs'
alias dklf='docker logs -f'
alias dkflush='docker rm `docker ps --no-trunc -aq`'
alias dkflush2='docker rmi $(docker images --filter "dangling=true" -q --no-trunc)'
alias dkt='docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"'
alias dkps="docker ps --format '{{.ID}} ~ {{.Names}} ~ {{.Status}} ~ {{.Image}}'"
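These aliases are meant to live in your shell profile; add the lines above to ~/.bashrc (or ~/.zshrc), reload the shell, and use them like this:

source ~/.bashrc

dkps               # compact docker ps output
dklf <container>   # follow the logs of a container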


kubectl commands from slack

How cool is it to run kubectl commands from a Slack channel… 🙂
This is not fully developed yet, but it comes in handy for dev and staging environments.
Let's begin.

Requirements:
1. Create a new Slack bot.
2. Create a Slack channel (not private) and get the channel ID. Use https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 to get the channel ID.
3. Add the Slack bot to the channel you created.
4. Then use the file below to create the K8s deployment.

---
# Create service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubebot-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubebot-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
#  ClusterRole reference. NOTE: cluster-admin is very powerful
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubebot-user
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubebot
  labels:
    component: kubebot
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kubebot
  template:
    metadata:
      labels:
        component: kubebot
    spec:
      serviceAccountName: kubebot-user
      containers:
      - name: kubebot
        image: harbur/kubebot:0.1.0
        imagePullPolicy: Always
        env:
        # Slack bot token (ideally stored in a Kubernetes secret and referenced here)
        - name: KUBEBOT_SLACK_TOKEN
          value: TOKENID_THAT_WAS_CREATED
        # Alternatively, use this instead if you don't need to put channel ids in a secret; use a space as a separator
        # You get this from  https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 
        - name: KUBEBOT_SLACK_CHANNELS_IDS
          value: 'AABBCCDD'          
        # Specify slack admins that kubebot should listen to; use a space as a separator
        - name: KUBEBOT_SLACK_ADMINS_NICKNAMES
          value: 'jag test someone'
        # Specify valid kubectl commands that kubebot should support; use a space as a separator
        - name: KUBEBOT_SLACK_VALID_COMMANDS
          value: "get describe logs explain"

Let me know if this helps.
