Kubernetes pod DNS issue with kube-flannel.

kubectl -n kube-system logs coredns-6fdfb45d56-2rsxc


.:53
[INFO] plugin/reload: Running configuration MD5 = 8b19e11d5b2a72fb8e63383b064116a1
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:59204->192.168.0.115:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:51845->192.168.0.116:53: i/o timeout
[ERROR] plugin/errors: 2 1898610492461102613.3835327825105568521. HINFO: read udp 10.244.2.28:49404->192.168.0.115:53: i/o timeout

Debugging DNS Resolution

kubectl exec -ti dnsutils -- nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'
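
The dnsutils pod used above is the test pod from the Kubernetes DNS-debugging guide; if it is not already running, a minimal sketch to create it (the manifest URL is the one published in the official docs):

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl get pods dnsutils   # wait for Running before repeating the nslookup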

How to Solve It?

# Open up the default chain policies (flannel pod-to-pod traffic needs FORWARD to be permissive)
iptables -P FORWARD ACCEPT
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
# Switch to the legacy iptables backend (the nftables backend can break kube-proxy/flannel rules)
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
systemctl restart docker
systemctl restart kubelet

Apply this on all nodes and the master, then check whether the CoreDNS logs are clean.
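
After the restarts, one way to verify the fix is to recreate the CoreDNS pods and repeat the lookup (a sketch; the deployment name and the k8s-app=kube-dns label are the kubeadm defaults):

# Recreate the CoreDNS pods so they come up on the repaired network
kubectl -n kube-system rollout restart deployment coredns
kubectl -n kube-system get pods -l k8s-app=kube-dns

# The lookup from the dnsutils pod should now resolve
kubectl exec -ti dnsutils -- nslookup kubernetes.default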


Docker Swarm Kong and Konga Cluster (CentOS 8)

We are using kong-konga-compose to deploy a Kong cluster together with Konga.

Preparation: execute the commands below on all nodes.

 systemctl stop firewalld
 systemctl disable firewalld
 systemctl status firewalld
 sed -i 's/^SELINUX=.*$/SELINUX=permissive/' /etc/selinux/config
 setenforce 0
 yum update -y
 yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
 sudo curl  https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
 sudo yum makecache
 sudo dnf -y install docker-ce
 sudo dnf -y install  git
 sudo systemctl enable --now docker
 sudo curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
 sudo chmod +x /usr/local/bin/docker-compose && ln -sv /usr/local/bin/docker-compose /usr/bin/docker-compose
 sudo docker-compose --version
 sudo docker --version

On node01:

docker swarm init --advertise-addr MASTERNODEIP


OUTPUT:
  
docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377

On node02:

# Run the join command that was printed on the master node.
  
  docker swarm join --token SWMTKN-1-1t1u0xijip6l33wdtt7jpq51blwx0hx3t54088xa4bxjy3yx42-90lf5b4nyyw4stbvcqyrde9sf MASTERNODEIP:2377
  

On node01:

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
m55wcdrkq0ckmtovuxwsjvgl1 *   master01            Ready               Active              Leader              19.03.8
e9igg0l9tru83ygoys5qcpjv2     node01              Ready               Active                                  19.03.8
  

git clone https://github.com/jaganthoutam/kong-konga-compose.git
  
cd kong-konga-compose

docker stack deploy --compose-file=docker-compose-swarm.yaml kong

#Check Services
  
docker service ls
ID                  NAME                  MODE                REPLICAS            IMAGE                             PORTS
ahucq8qru2xx        kong_kong             replicated          1/1                 kong:1.4.3                        *:8000-8001->8000-8001/tcp, *:8443->8443/tcp
bhf0tdd36isg        kong_kong-database    replicated          1/1                 postgres:9.6.11-alpine
tij6peru7tb8        kong_kong-migration   replicated          0/1                 kong:1.4.3
n0gaj0l6jyac        kong_konga            replicated          1/1                 pantsel/konga:latest              *:1337->1337/tcp
83q1eybkhvvy        kong_konga-database   replicated          1/1                 mongo:4.1.5   
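
Once all replicas are up, a quick smoke test from any swarm node confirms the published ports answer (a sketch; adjust the host if you test from outside the cluster):

curl -i http://localhost:8001/status   # Kong admin API
curl -i http://localhost:8000          # Kong proxy (a 404 "no route matched" reply is expected before any routes exist)
curl -i http://localhost:1337          # Konga UI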

Elasticsearch Filebeat custom index

Custom Template and Index pattern setup.

    setup.ilm.enabled: false               #Set ilm to False 
    setup.template.name: "k8s-dev"         #Create Custom Template
    setup.template.pattern: "k8s-dev-*"    #Create Custom Template pattern
    setup.template.settings:
      index.number_of_shards: 1    #Set number_of_shards 1, ONLY if you have ONE NODE ES
      index.number_of_replicas: 0  #Set number_of_replicas to 0, ONLY if you have ONE NODE ES
    output.elasticsearch:
       hosts: ['192.168.1.142:9200']
       index: "k8s-dev-%{+yyyy.MM.dd}" #Set k8s-dev-2020.01.01 as Index name

filebeat-kubernetes.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    setup.ilm.enabled: false
    setup.template.name: "k8s-dev"
    setup.template.pattern: "k8s-dev-*"
    setup.template.settings:
      index.number_of_shards: 1
      index.number_of_replicas: 0

    output.elasticsearch:
       hosts: ['192.168.1.142:9200']
       index: "k8s-dev-%{+yyyy.MM.dd}"

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "192.168.1.142"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
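
Deploy the manifest and confirm the DaemonSet pods come up on every node (a sketch; the file name matches the manifest above):

kubectl apply -f filebeat-kubernetes.yaml
kubectl -n kube-system rollout status daemonset/filebeat
kubectl -n kube-system logs -l k8s-app=filebeat --tail=20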

The new index will appear in Kibana:

Create an index pattern for it in Kibana:

Finally:…


Mac cleanup

#!/bin/bash
#user cache file
echo "cleaning user cache file from ~/Library/Caches"
rm -rf ~/Library/Caches/*
echo "done cleaning from ~/Library/Caches"
#user logs
echo "cleaning user log file from ~/Library/logs"
rm -rf ~/Library/logs/*
echo "done cleaning from ~/Library/logs"
#user preference logs (rm left commented out on purpose: removing Preferences resets app settings)
echo "cleaning user preference logs"
#rm -rf ~/Library/Preferences/*
echo "done cleaning from ~/Library/Preferences"
#system caches
echo "cleaning system caches"
sudo rm -rf /Library/Caches/*
echo "done cleaning system cache"
#system logs
echo "cleaning system logs from /Library/logs"
sudo rm -rf /Library/logs/*
echo "done cleaning from /Library/logs"
echo "cleaning system logs from /var/log"
sudo rm -rf /var/log/*
echo "done cleaning from /var/log"
echo "cleaning from /private/var/folders"
sudo rm -rf /private/var/folders/*
echo "done cleaning from /private/var/folders"
#ios photo caches
echo "cleaning ios photo caches"
rm -rf ~/Pictures/iPhoto\ Library/iPod\ Photo\ Cache/*
echo "done cleaning from ~/Pictures/iPhoto Library/iPod Photo Cache"
#application caches
echo "cleaning application caches"
for dir in ~/Library/Containers/*/
do
    echo "cleaning ${dir}Data/Library/Caches/"
    rm -rf "${dir}"Data/Library/Caches/*
    echo "done cleaning ${dir}Data/Library/Caches"
done
echo "done cleaning for application caches"

Docker aliases

alias dm='docker-machine'
alias dmx='docker-machine ssh'
alias dk='docker'
alias dki='docker images'
alias dks='docker service'
alias dkrm='docker rm'
alias dkl='docker logs'
alias dklf='docker logs -f'
alias dkflush='docker rm `docker ps --no-trunc -aq`'
alias dkflush2='docker rmi $(docker images --filter "dangling=true" -q --no-trunc)'
alias dkt='docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"'
alias dkps="docker ps --format '{{.ID}} ~ {{.Names}} ~ {{.Status}} ~ {{.Image}}'"
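
To make these persistent across sessions, append them to your shell startup file (a sketch assuming bash and ~/.bashrc; zsh users would use ~/.zshrc instead):

cat >> ~/.bashrc <<'EOF'
alias dk='docker'
alias dkps="docker ps --format '{{.ID}} ~ {{.Names}} ~ {{.Status}} ~ {{.Image}}'"
EOF
source ~/.bashrc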


kubectl commands from slack

How cool is it to run kubectl commands from a Slack channel… 🙂
This is not fully developed yet, but it comes in handy for dev and staging environments.
Let's begin.

Requirements:
1. Create a new Slack bot.
2. Create a Slack channel (not private) and get its channel ID.
https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 returns the channel ID (see the curl example after this list).
3. Add the Slack bot to the channel you created.
4. Then use the file below to create the K8s Deployment.
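
For step 2, the channel ID can also be grabbed from the shell (a sketch using the same channels.list endpoint; jq is assumed to be installed, and REPLACE_TOKEN stands in for your bot token):

# List channel IDs and names; copy the ID of the channel you created
curl -s "https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1" \
  | jq -r '.channels[] | "\(.id)  \(.name)"'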

---
# create service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubebot-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubebot-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
# Reference an existing ClusterRole. NOTE: cluster-admin is very powerful
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubebot-user
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubebot
  labels:
    component: kubebot
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kubebot
  template:
    metadata:
      labels:
        component: kubebot
    spec:
      serviceAccountName: kubebot-user
      containers:
      - name: kubebot
        image: harbur/kubebot:0.1.0
        imagePullPolicy: Always
        env:
        # Create a secret with your slack bot token and reference it here
        - name: KUBEBOT_SLACK_TOKEN
          value: TOKENID_THAT_WAS_CREATED
        # Alternatively, use this instead if you don't need to put channel ids in a secret; use a space as a separator
        # You get this from  https://slack.com/api/channels.list?token=REPLACE_TOKEN&pretty=1 
        - name: KUBEBOT_SLACK_CHANNELS_IDS
          value: 'AABBCCDD'          
        # Specify slack admins that kubebot should listen to; use a space as a separator
        - name: KUBEBOT_SLACK_ADMINS_NICKNAMES
          value: 'jag test someone'
        # Specify valid kubectl commands that kubebot should support; use a space as a separator
        - name: KUBEBOT_SLACK_VALID_COMMANDS
          value: "get describe logs explain"

Let me know if this helps.


trigger(s) associated with a view or a table in PostgreSQL

This will return all the details you want to know:

select * from information_schema.triggers

Or, if you want the results for a specific table only, sorted, you can try:

SELECT event_object_table
      ,trigger_name
      ,event_manipulation
      ,action_statement
      ,action_timing
FROM  information_schema.triggers
WHERE event_object_table = 'tableName'
ORDER BY event_object_table
     ,event_manipulation

The following will return the names of tables that have a trigger:

select relname as table_with_trigger
from pg_class
where pg_class.oid in (
        select tgrelid
        from pg_trigger
        )
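
If you prefer running the check from the shell, the same lookup works as a psql one-liner (a sketch; mydb and the connection flags are placeholders for your own database):

psql -d mydb -c "SELECT event_object_table, trigger_name, event_manipulation FROM information_schema.triggers WHERE event_object_table = 'tableName';"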