/root/.blog

k8s master and nodes on RHEL/CentOS 7

Configure the master node

Preparation

  1. Run the following commands to pass bridged IP traffic to iptables chains
[root@test-vm1 ~]# yum update -y
[root@test-vm1 ~]# modprobe br_netfilter

[root@test-vm1 ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@test-vm1 ~]# sysctl --system
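
You can quickly verify that the setting is active (optional sanity check):

[root@test-vm1 ~]# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1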

2a) Allow the necessary ports through the firewall when you're working in an untrusted environment or in production

firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=18080/tcp --permanent
firewall-cmd --zone=public --add-port=10254/tcp --permanent
firewall-cmd --reload
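
If the control plane components can't reach each other with firewalld enabled, you may also need the etcd and kubelet/scheduler/controller-manager ports that the upstream kubeadm documentation lists for the master; a sketch, adjust it to your own setup:

firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
firewall-cmd --zone=public --add-port=10250-10252/tcp --permanent
firewall-cmd --reload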

2b) If you're just testing this in a safe lab environment you can disable the firewall.

[root@test-vm1 ~]# systemctl stop firewalld && systemctl disable firewalld 
  3. Check whether SELinux is enabled with the following command
[root@test-vm1 ~]# sestatus
  4. If the current mode is enforcing, change it to permissive or disabled
[root@test-vm1 ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
[root@test-vm1 ~]# setenforce 0
  5. The kubelet refuses to start with swap enabled (it wants predictable performance), so disable swap now and comment out the swap entry in /etc/fstab so it stays off after a reboot
[root@test-vm1 ~]# swapoff -a
[root@test-vm1 ~]# vi /etc/fstab

#/dev/mapper/centos-swap swap                    swap    defaults        0 0
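
If you prefer not to edit the file by hand, a one-liner like this should comment out the swap entry for you (it keeps a backup as /etc/fstab.bak; double-check the result before rebooting):

[root@test-vm1 ~]# sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab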
  6. Add the Kubernetes repository to yum
[root@test-vm1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
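
A quick check that yum actually sees the new repository:

[root@test-vm1 ~]# yum repolist enabled | grep -i kubernetes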

Installation

  7. Install docker, kubeadm, kubelet and kubectl (plus the ebtables and ethtool dependencies)
[root@test-vm1 ~]# yum install -y ebtables ethtool docker kubelet kubeadm kubectl
  8. Start docker and enable it at boot
[root@test-vm1 ~]# systemctl start docker && systemctl enable docker
  9. Start kubelet and enable it at boot
[root@test-vm1 ~]# systemctl start kubelet && systemctl enable kubelet
  10. Initialize Kubernetes. Be aware that for some pod network implementations you might need to add a specific '--pod-network-cidr=' setting; please check https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network before continuing. We'll use the Weave pod network, which doesn't require this.
[root@test-vm1 ~]# kubeadm init
I0715 12:50:01.543998    1958 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0715 12:50:01.577212    1958 kernel_validator.go:81] Validating kernel version
I0715 12:50:01.577289    1958 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [test-vm1.home.lcl kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.221]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [test-vm1.home.lcl localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [test-vm1.home.lcl localhost] and IPs [192.168.1.221 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.502080 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node test-vm1.home.lcl as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node test-vm1.home.lcl as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "test-vm1.home.lcl" as an annotation
[bootstraptoken] using token: e8yb38.htt4pz8dmxq77jha
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.221:6443 --token e8yb38.hqq4pz9dmlq77jha --discovery-token-ca-cert-hash sha256:50b01f19d8060ba593a009d134912d62b95ca80fdbe76f3995c8ba6c4a92c705
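
Note that the bootstrap token in that join command expires after 24 hours by default. If you need the join command again later, kubeadm can print a fresh one (with a new token) for you:

[root@test-vm1 ~]# kubeadm token create --print-join-command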
  11. Create admin user
[root@test-vm1 ~]# groupadd -g 1000 k8sadm
[root@test-vm1 ~]# useradd -u 1000 -g k8sadm -G wheel k8sadm
[root@test-vm1 ~]# passwd k8sadm
Changing password for user k8sadm.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@test-vm1 ~]# su - k8sadm
[k8sadm@test-vm1 ~]$ mkdir -p $HOME/.kube
[k8sadm@test-vm1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8sadm@test-vm1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
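
Optionally, enable bash completion for kubectl for this user; a small sketch, assuming the bash-completion package is installed:

[k8sadm@test-vm1 ~]$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
[k8sadm@test-vm1 ~]$ source ~/.bashrc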
  12. Configure the pod network
[k8sadm@test-vm1 ~]$ kubectl get nodes
NAME                STATUS     ROLES     AGE       VERSION
test-vm1.home.lcl   NotReady   master    2m        v1.11.0
[k8sadm@test-vm1 ~]$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
[k8sadm@test-vm1 ~]$ kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
test-vm1.home.lcl   Ready     master    3m        v1.11.0
[k8sadm@test-vm1 ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-g7rg4                    1/1       Running   0          2h
kube-system   coredns-78fcdf6894-vr4xm                    1/1       Running   0          2h
kube-system   etcd-test-vm1.home.lcl                      1/1       Running   1          2h
kube-system   kube-apiserver-test-vm1.home.lcl            1/1       Running   1          2h
kube-system   kube-controller-manager-test-vm1.home.lcl   1/1       Running   1          2h
kube-system   kube-proxy-524ql                            1/1       Running   1          2h
kube-system   kube-scheduler-test-vm1.home.lcl            1/1       Running   1          2h
kube-system   weave-net-7qxpf                             2/2       Running   0          1m

Configure the worker nodes

  1. Repeat steps 1 to 6 on all worker nodes

  2. Install docker, kubeadm and kubelet

[root@test-vm2 ~]# yum install -y kubeadm docker kubelet
[root@test-vm3 ~]# yum install -y kubeadm docker kubelet
  3. Start docker and enable it at boot
[root@test-vm2 ~]# systemctl start docker && systemctl enable docker
[root@test-vm3 ~]# systemctl start docker && systemctl enable docker
  4. Start kubelet and enable it at boot
[root@test-vm2 ~]# systemctl start kubelet && systemctl enable kubelet
[root@test-vm3 ~]# systemctl start kubelet && systemctl enable kubelet
  5. Join the workers to the master

Use the join command kubeadm returned in step 10

[root@test-vm2 ~]# kubeadm join 192.168.1.221:6443 --token e8yb38.hqq4pz9dmlq77jha --discovery-token-ca-cert-hash sha256:50b01f19d8060ba593a009d134912d62b95ca80fdbe76f3995c8ba6c4a92c705
[root@test-vm3 ~]# kubeadm join 192.168.1.221:6443 --token e8yb38.hqq4pz9dmlq77jha --discovery-token-ca-cert-hash sha256:50b01f19d8060ba593a009d134912d62b95ca80fdbe76f3995c8ba6c4a92c705
  6. Verify the status

After a little while you will see

[k8sadm@test-vm1 ~]$ kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
test-vm1.home.lcl   Ready     master    26m       v1.11.1
test-vm2.home.lcl   Ready     <none>    1m        v1.11.1
test-vm3.home.lcl   Ready     <none>    1m        v1.11.1
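
The ROLES column shows <none> for the workers. That's purely cosmetic, but if you want it to read "worker" you can add the conventional role label yourself:

[k8sadm@test-vm1 ~]$ kubectl label node test-vm2.home.lcl node-role.kubernetes.io/worker=
[k8sadm@test-vm1 ~]$ kubectl label node test-vm3.home.lcl node-role.kubernetes.io/worker=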



Dashboard installation

  1. Install the Kubernetes dashboard
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
  2. Create a Cluster Admin service account

You can create a service account with the cluster-admin role, which will have access to all resources in the cluster.

[k8sadm@test-vm1 ~]$ kubectl create serviceaccount cluster-admin-dashboard-sa
[k8sadm@test-vm1 ~]$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
    --clusterrole=cluster-admin \
    --serviceaccount=default:cluster-admin-dashboard-sa
  3. Retrieve the token
[k8sadm@test-vm1 ~]$ kubectl get secret | grep cluster-admin-dashboard-sa
cluster-admin-dashboard-sa-token-mcvgc   kubernetes.io/service-account-token   3         53s

[k8sadm@test-vm1 ~]$ kubectl describe secret cluster-admin-dashboard-sa-token-mcvgc
Name:         cluster-admin-dashboard-sa-token-mcvgc
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=cluster-admin-dashboard-sa
              kubernetes.io/service-account.uid=f7f18f12-8a8a-21e8-9408-cadc25e1acf2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9OIJik2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXokIJNIjkFzaGJvYXJkLXNhLXRva2VuLW1jdmdjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXoijoijLWFjY291bnQudWlkIjoiZjdmOThmODItOGE4YS0xMWU4LTk0MDgtY2FkYzI1ZTEwY2YyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2x1c3Rlci1hZG1pbi1kYXNoYm9hcmQtc2EifQ.wKDF-perH1pjDYYzhMQXX_dFtntk4jrhAO1MN0wmhYVrMRxeklOVB7jGYuFd6D3oWHMKZlLSioh6W7Acf1rQgEvQthevTlaiJFEmK3TYXAoluf5HJ-DLywYVEt4cuoijijGRSiTuDKjZ6J_hUhRNcT6bsUnN4GQwrPqM72n32cUNk-meOuXC2JSsyzU3qs0VN2_EpLQyVCjwr1DSpYtuwNSzSx7SgtTP2zK-y14pBfu31og7lH8Onkgf6y2eXEOqsOdUTgEt-6TQ2cHqYhlM5y1OZhx8OIJiiG5If0YPIx4MbYPWRKjHQpO_h_wMXJUGGuTDmxw
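
With the token in hand you still need a way to reach the dashboard. The simplest option for this dashboard version is kubectl proxy; a sketch (the proxy URL below is the one documented for a kube-system deployment, verify it against the version you installed):

[k8sadm@test-vm1 ~]$ kubectl proxy

Then browse to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ on the master (or through an SSH tunnel) and sign in by pasting the token from the previous step.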

sources:

  1. https://docs.giantswarm.io/guides/install-kubernetes-dashboard/

udev rules for ASM disks

Make sure you have the sg3_utils package installed.

# yum install -y sg3_utils

After the LUNs have been added to the server, run:

# rescan-scsi-bus.sh

This will generate a lot of output and will tell you whether it found new disks.
If you've received the WWIDs from your SAN administrator you can skip the next step; if not, we'll have to figure out which disks were added using:

# dmesg

Note the new disks for further reference (if you asked for two LUNs with different sizes, you can recognize the two disks by their sizes). I'm noting:

[1808189.173460] sd 0:0:0:9: [sdak] 209715200 512-byte logical blocks: (107 GB/100 GiB)
[1808189.213339] sd 0:0:0:10: [sdal] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)

I will assume you have multipath set up. If you are blacklisting all LUNs by default, you will also need to modify your multipath configuration; I will not cover that here.

Now run:

# multipath -ll
...
mpathk (36006016056a04000e9113c6d9189e811) dm-21 DGC     ,VRAID
size=50G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:0:10 sdal 66:80  active ready running
| `- 1:0:0:10 sdap 66:144 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 0:0:1:10 sdan 66:112 active ready running
  `- 1:0:1:10 sdar 66:176 active ready running
mpathj (36006016056a04000ea81ef4f9189e811) dm-20 DGC     ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:1:9  sdam 66:96  active ready running
| `- 1:0:1:9  sdaq 66:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 0:0:0:9  sdak 66:64  active ready running
  `- 1:0:0:9  sdao 66:128 active ready running
...

I'm only showing the mpath devices I need. What matters now are the WWIDs:

  • 36006016056a04000e9113c6d9189e811
  • 36006016056a04000ea81ef4f9189e811
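
If you want to double-check which WWID belongs to which device, you can run by hand the same scsi_id call that the udev rule below uses; for example, mpathj should print the same WWID that multipath -ll showed:

# /usr/lib/udev/scsi_id -g -u -d /dev/mapper/mpathj
36006016056a04000ea81ef4f9189e811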

Now we'll edit /etc/udev/rules.d/99-oracle-asmdevices.rules

# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

and add

#100G mpathj asm-data-example
KERNEL=="dm-*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $tempnode", RESULT=="36006016056a04000ea81ef4f9189e811", SYMLINK+="asm-data-example", OWNER="oracle", GROUP="dba", MODE="0660"
 
#50G mpathk asm-fra-example
KERNEL=="dm-*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $tempnode", RESULT=="36006016056a04000e9113c6d9189e811", SYMLINK+="asm-fra-example", OWNER="oracle", GROUP="dba", MODE="0660"

Now, very important, you won't succeed without this:

# partprobe /dev/mapper/mpathk
# partprobe /dev/mapper/mpathj

The last step is to reload the udev configuration

# udevadm control --reload-rules
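
Reloading the rules alone doesn't re-evaluate devices that already exist, so if the symlinks don't show up you may also need to trigger a change event for them:

# udevadm trigger --type=devices --action=change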

Verify our new devices are created:

# ls -lrt /dev/asm*example

lrwxrwxrwx. 1 root root 5 Jul 17 10:38 /dev/asm-fra-example -> dm-21
lrwxrwxrwx. 1 root root 5 Jul 17 10:38 /dev/asm-data-example -> dm-20

Split a large file into smaller files

Split

# split -b300M bigfile.zip bigfile.zip.
# ls -al
total 3110156
drwxr-xr-x  2 root root       4096 Sep  6 21:02 .
drwx------ 19 root root       4096 Sep  6 21:01 ..
-rw-r--r--  1 root root 1592381288 Sep  6 21:01 bigfile.zip
-rw-r--r--  1 root root  314572800 Sep  6 21:01 bigfile.zip.aa
-rw-r--r--  1 root root  314572800 Sep  6 21:01 bigfile.zip.ab
-rw-r--r--  1 root root  314572800 Sep  6 21:01 bigfile.zip.ac
-rw-r--r--  1 root root  314572800 Sep  6 21:01 bigfile.zip.ad
-rw-r--r--  1 root root  314572800 Sep  6 21:01 bigfile.zip.ae
-rw-r--r--  1 root root   19517288 Sep  6 21:02 bigfile.zip.af

Combine

# cat bigfile.zip.aa bigfile.zip.ab bigfile.zip.ac bigfile.zip.ad \
bigfile.zip.ae bigfile.zip.af > bigfile.zip
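
To make sure nothing got corrupted along the way, compare a checksum of the original with the recombined file. A glob also works for the concatenation, since the shell sorts the parts alphabetically:

# sha256sum bigfile.zip                # on the source, before splitting
# cat bigfile.zip.* > bigfile.zip      # on the target, recombine the parts
# sha256sum bigfile.zip                # the two checksums must match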

Encrypt or Decrypt files

encrypt.sh

#!/bin/bash
# Encrypt a file with openssl (AES-256-CBC, base64 output); writes <file>.enc

infile=$1
outfile=${infile}.enc

if [ -f "${infile}" ]; then
    # refuse to overwrite an existing encrypted file
    if [ -f "${outfile}" ]; then
        echo "target file ${outfile} already exists"
        exit 1
    fi

    printf "Enter encryption password: "
    read -r pass

    if [ -z "${pass}" ]; then
        echo "No password provided, using default: biscuit"
        pass=biscuit
    fi

    # quoting protects file names and passwords that contain spaces
    openssl enc -base64 -e -aes-256-cbc -nosalt -pass pass:"${pass}" < "${infile}" > "${outfile}"
fi

decrypt.sh

#!/bin/bash
# Decrypt a file produced by encrypt.sh; strips the .enc suffix for the output name

infile=$1
outfile=$(echo "${infile}" | sed 's/\.enc$//')

if [ -f "${infile}" ]; then
    # refuse to overwrite an existing file
    if [ -f "${outfile}" ]; then
        echo "target file ${outfile} already exists"
        exit 1
    fi

    printf "Enter decryption password: "
    read -r pass

    if [ -z "${pass}" ]; then
        echo "No password provided, using default: biscuit"
        pass=biscuit
    fi

    openssl enc -base64 -d -aes-256-cbc -nosalt -pass pass:"${pass}" < "${infile}" > "${outfile}"
fi
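
A quick usage example (hypothetical file name; both scripts refuse to overwrite an existing target file):

# chmod +x encrypt.sh decrypt.sh
# ./encrypt.sh report.txt          # writes report.txt.enc
# ./decrypt.sh report.txt.enc      # restores report.txt (move the original out of the way first)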