DevOps/Kubernetes/Installation

Installation
As of April 2019, Kubernetes can be installed in more than 40 different ways: in particular, using your Linux distribution's packages or the upstream Kubernetes release. It is also possible to use a managed Kubernetes solution offered by a cloud computing provider, such as /EKS/ from AWS, Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP), or GKE On-Prem, as well as CI/CD tools like Jenkins X and GitLab that support integration with different Kubernetes cloud providers.

=== Install Kubernetes on Debian/Ubuntu using upstream ===


 * Our first step is to download and add the GPG keys for the Docker and Kubernetes installs. Back at the terminal, issue the following commands:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
 * Add the Docker Repository on all your servers:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
 * Add the Kubernetes repository in your apt source.list on all your servers.

sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
 * Install Docker, /kubeadm/, /kubelet/, and /kubectl/ on all your servers, and hold them at these versions.

Initialize your master
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
 * Enable net.bridge.bridge-nf-call-iptables on all your nodes.


sudo kubeadm init --pod-network-cidr=10.244.0.0/16
 * On only the Kube Master server, initialize the cluster and configure kubectl (the CIDR above matches the default flannel pod network).

When this completes, you'll be presented with the exact command you need to join the nodes to the master. In case you make any mistake and want to undo your changes, you can use the kubeadm reset command.


 * Before you join a node, you need to issue the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
 * Install the flannel networking plugin in the cluster by running this command on the Kube Master server.

sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash
 * The kubeadm init command that you ran on the master should output a kubeadm join command containing a token and hash. You will need to copy that command from the master and run it on both worker nodes with sudo.

kubectl get nodes
 * Now you are ready to verify that the cluster is up and running. On the Kube Master server, check the list of nodes.

NAME                      STATUS   ROLES    AGE   VERSION
wboyd1c.mylabserver.com   Ready    master   54m   v1.12.2
wboyd2c.mylabserver.com   Ready    <none>   49m   v1.12.2
wboyd3c.mylabserver.com   Ready    <none>   49m   v1.12.2

Containers and Pods
Pods are the smallest and most basic building blocks of the Kubernetes model. A pod consists of one or more containers, storage resources, and a unique IP address in the Kubernetes cluster network.

In order to run containers, Kubernetes schedules pods to run on servers in the cluster. When a pod is scheduled the server will run the containers that are part of that pod.

Create a simple pod running an nginx container; for more configuration options, check the official Kubernetes Pod documentation. Pod names must be lowercase DNS-compliant, so the pod is named nginx here to match the commands below:

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF
 * Create a basic Pod definition with your container image and create the Pod:

kubectl get pods
 * Get a list of pods and verify that your new nginx pod is in the Running state:

kubectl describe pod nginx
 * Get more information about your nginx pod:

kubectl delete pod nginx
 * Delete the pod:
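The pod above runs a single container, but a pod can bundle several containers that share the same storage and network. A minimal sketch (the names, images, and the emptyDir volume are all illustrative, not from the original):

```yaml
# Hypothetical two-container pod: nginx serves files that a
# sidecar container writes into a shared emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar
spec:
  volumes:
  - name: shared-data
    emptyDir: {}          # scratch volume shared by both containers
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers are scheduled together on the same node and reach each other over localhost, which is what makes the pod, not the container, the basic scheduling unit.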

See also ReplicaSet concept.
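As a hedged sketch of that concept (names and labels are illustrative), a ReplicaSet keeps a fixed number of identical pods running, recreating any that fail:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3            # keep three identical nginx pods running
  selector:
    matchLabels:
      app: nginx         # pods managed by this ReplicaSet
  template:              # pod template used to create the replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

In practice you rarely create ReplicaSets directly; a Deployment (as in the networking section below's register) manages them for you.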

Clustering and Nodes
Kubernetes implements a clustered architecture. In a typical production environment, you will have multiple servers that are able to run your workloads (containers). These servers, which actually run the containers, are called nodes. A Kubernetes cluster has one or more control servers which manage and control the cluster and host the Kubernetes API. These control servers are usually separate from the worker nodes, which run applications within the cluster.


kubectl get nodes
 * Get a list of nodes:

kubectl describe node $node_name
 * Get more information about a specific node:

Networking in Kubernetes
The Kubernetes networking model involves creating a virtual network across the whole cluster. This means that every pod on the cluster has a unique IP address, and can communicate with any other pod in the cluster, even if that other pod is running on a different node.

Kubernetes supports a variety of networking plugins that implement this model in various ways. One of the most popular and easiest to use is Flannel, although as of April 2019 it does not support network policies.
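For illustration only, with a plugin that does support them (e.g. Calico), a NetworkPolicy restricting which pods may reach the nginx pods might look like this (a sketch; the names and labels are assumptions, not from the original):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-busybox
spec:
  podSelector:
    matchLabels:
      app: nginx          # the policy applies to the nginx pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: busybox    # only pods carrying this label may connect
```

With Flannel such a policy is accepted by the API but not enforced, which is why the plugin choice matters.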

cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
EOF
 * Create a deployment with two nginx pods:

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: radial/busyboxplus:curl
    args:
    - sleep
    - "1000"
EOF
 * Create a busybox pod to use for testing:

kubectl get pods -o wide
 * Get the IP addresses of your pods:

kubectl exec busybox -- curl $nginx_pod_ip
 * Get the IP address of one of the nginx pods, then contact that nginx pod from the busybox pod using the nginx pod's IP address: