Let’s Learn Kubernetes – Part 1

Posted on June 12, 2022 (updated June 19, 2022) by Ercan

Hi, I’m starting a blog post series about Kubernetes basics, and this is Part 1. You can follow this guide to understand what Kubernetes is and what you can do with it.

What is Kubernetes?

Kubernetes is an open-source container orchestration tool that automates software deployment, scaling, and management quickly and smoothly. Google made the initial release on June 7, 2014, and the project is now maintained by the CNCF (Cloud Native Computing Foundation).

Kubernetes is also called K8s. It works with container runtimes such as Docker, containerd, and CRI-O. If you want to use Kubernetes, I suggest learning Docker before starting.

Basically, Kubernetes helps you manage containerized applications in different environments. You can run it on physical servers (on-premises), in the cloud, and/or on virtual machines.

What problems does Kubernetes solve?

When container technologies came into our lives, the industry hype was: “Monoliths are old-school; everybody should use container technologies, separate all the services in their application, and switch to microservices.”

What does that mean? For example, say you built an e-commerce web application; you would run each different part of the application separately. When a user searches for an item, the search mechanism runs on its own, and the checkout system does not depend on the search mechanism. So you separate every different process of your application and decouple them from each other.

Kubernetes lets you run every one of these different parts of the application. When something goes wrong with the search mechanism, users can still check out or go to their shopping cart, because the parts run independently; you only need to debug and fix the problem in the search mechanism.

You can read the microservices architecture documentation on Google Cloud to understand the differences between a monolith and microservices.

Which features does Kubernetes offer?

  • No downtime: You get high availability for your application; if something goes wrong in production, your users can still access the application.
  • High performance: When your application starts to receive a heavy traffic load, Kubernetes scales the containers automatically, so you don’t run into CPU, RAM, or network bandwidth bottlenecks (a minimal autoscaler sketch follows this list).
  • Backup and restore: Kubernetes always keeps the last desired state, so when a server is gone or something bad happens, the infrastructure can fall back on that stored state and restore it.
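As a minimal sketch of the automatic scaling idea, here is a HorizontalPodAutoscaler manifest. The Deployment name webapp, the replica counts, and the 70% CPU target are assumptions for illustration, not something from this post:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:             # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: webapp              # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU rises above 70%
```

With this in place, Kubernetes adds pods under heavy traffic and removes them again when the load drops.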

Basic Kubernetes Components

Here are the most basic components of Kubernetes. If you want to run a container with Kubernetes and play with it on your localhost, you should know them, even if you are a developer.

Nodes and Pods


A node is just a server. It can be a physical server, any kind of cloud server, or a virtual machine.

A Pod is the smallest unit in Kubernetes, and it is an abstraction over the container. Pods run on nodes, and they wrap containers such as Docker containers. Pods use images, and when you run a pod, its image is pulled from a registry such as Docker Hub (https://hub.docker.com).

You don’t have to pull your Kubernetes pod images from Docker Hub; you can use any kind of public or private container registry to keep your images.

Here is the important concept for Kubernetes nodes and pods: a pod is usually meant to run one application container. A node can run many pods, and a pod can also hold a helper container (or containers) next to the main one, but the usual case is one main container per pod. For example, you can run an nginx pod together with a Redis pod and a monitoring-agent pod on the same node.
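As a minimal sketch, a Pod manifest with one main nginx container and a small helper container could look like this. The pod name, image tags, and the helper container are assumptions for illustration only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                # hypothetical name
spec:
  containers:
    - name: nginx              # the main application container
      image: nginx:1.23        # pulled from Docker Hub by default
      ports:
        - containerPort: 80
    - name: log-agent          # assumed helper (sidecar) container
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]   # placeholder command
```

You could apply it with `kubectl apply -f pod.yaml` and check it with `kubectl get pods`.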

Kubernetes comes with its own network, and each pod gets its own internal IP address, like 10.10.10.2, 10.10.0.3, etc. Pods can communicate with each other using those internal IPs. Another important concept in Kubernetes is that pods are ephemeral, which means they can die very easily.

If you lose a pod for some reason and Kubernetes recreates it, the new pod gets a new IP address during the re-creation process. If you talk to the database and/or other services by their internal IPs, you will get into trouble! There is a really good way to handle this situation. Let’s see…

Service and Ingress

Service: A Service gives you a permanent (static) IP address that sits in front of a pod. Each pod (or set of pods for one application) gets its own Service, and the Service is essentially just that permanent IP address; yes, that’s it.

The pod and the Service have independent lifecycles: when a pod dies and comes back up, or you deploy a new version of the pod, the new pod is immediately reachable through the Service, because the stable IP address stays with the Service.
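A minimal sketch of such a Service, assuming the pods carry a hypothetical app: web label and listen on port 80:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service            # hypothetical name
spec:
  selector:
    app: web                   # assumed label on the pods this Service fronts
  ports:
    - port: 80                 # the Service's stable port
      targetPort: 80           # the container port inside the pods
```

Other pods can then reach the application through the stable name web-service (via cluster DNS) instead of an individual pod IP.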

Ingress: When a request comes into the cluster from outside, the Ingress handles it first and then forwards it to the appropriate Service.
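A minimal sketch of an Ingress that forwards external HTTP traffic to the web-service above; the hostname shop.example.com is an assumption for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # hypothetical name
spec:
  rules:
    - host: shop.example.com     # assumed external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service   # the Service sketched earlier
                port:
                  number: 80
```

Note that an Ingress resource only works when an ingress controller (for example, ingress-nginx) is running in the cluster.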

ConfigMap and Secrets

ConfigMap: A ConfigMap usually contains configuration information, such as the URL of the database or of other services that you use. When a pod starts, it reads the data the ConfigMap contains; in the industry we mostly call these environment variables. A ConfigMap keeps everything as plain text, so it is not secure. Don’t store any credentials like usernames or passwords in a ConfigMap.
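A minimal sketch of a ConfigMap holding a hypothetical database URL, plus a pod that imports it as environment variables. The names and the URL value are assumptions for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
data:
  DATABASE_URL: "mysql://db-service:3306/shop"   # assumed example value, stored as plain text
---
apiVersion: v1
kind: Pod
metadata:
  name: web-pod-with-config        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.23
      envFrom:
        - configMapRef:
            name: app-config       # every key becomes an environment variable in the container
```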

Secrets: A Secret is just like a ConfigMap, but it stores its data base64-encoded. Keep in mind that base64 is only encoding, and the built-in security mechanisms (such as encryption at rest) are not enabled by default in Kubernetes!
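A minimal sketch of a Secret with assumed placeholder credentials; with the stringData field you write plain values and Kubernetes stores them base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials             # hypothetical name
type: Opaque
stringData:                        # plain values here; stored base64-encoded on write
  DB_USER: admin                   # placeholder credentials for illustration only
  DB_PASSWORD: changeme
```

Running `kubectl get secret db-credentials -o yaml` afterwards shows the values in their base64-encoded form.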

Volumes

If you plan to run your database in Kubernetes, you should set up a persistent volume, because Kubernetes does not manage data persistence for you. For example, when the pod is restarted or dies, you lose all of its data, because the data lived inside the pod and you did not mount any storage as a volume.

Volumes can live (see the sketch after this list):

  • On the local machine (the node)
  • Remote (outside of the K8s cluster)
  • In cloud storage
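As a minimal sketch, here is a hypothetical PersistentVolumeClaim and a database pod that mounts it, so the data survives pod restarts. The names, image, size, and password are assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                 # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: db-pod                     # hypothetical name
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme          # placeholder only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # database files land on the persistent volume
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data         # binds the pod to the claim above
```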

Now you know the basics of Kubernetes, and I hope you enjoyed this article! You can follow me on Twitter (https://twitter.com/flightlesstux) to know when Part 2 is coming…


Share on Social Media
twitter facebook linkedin reddit

