Cheat sheet for Openshift and foundational information
Getting started with OpenShift can be a handful, so I have put together a quick summary of the basic architecture and user experience, including definitions of key concepts. I also suggest trying the OpenShift demo lab available through Red Hat's website.
Architecture overview:
Overview - The basics of what the managed container orchestration platform is all about, plus some prerequisites such as programming language support and compliance (FIPS). We also learn that it can be broken down into Compute, Network and Storage, just like any infrastructure design.
Product and Infrastructure Providers - You can deploy OpenShift with Full Stack Automation (installer-provisioned infrastructure, IPI) or on Pre-existing Infrastructure (user-provisioned infrastructure, UPI). Support for these methods varies by provider, and IPI is still relatively new on some of them; OpenShift 4.5 introduced the full-stack automated install experience on VMware vSphere infrastructure. (details)
OpenShift itself is supported anywhere RHEL runs.
For users who want a DIY or lower-end option, the lightweight OpenShift Kubernetes Engine provides core Kubernetes functionality without the whole platform.
Architecture and Concepts -
Nodes: RHEL or RHCOS instances with OpenShift installed (master nodes must use RHCOS). Containers run on the nodes from the application binaries (images).
Pod: the smallest compute unit, made up of one or more containers. Basically, Kubernetes needs a Linux-based OS to run on, and OpenShift runs on top of Kubernetes.
etcd: holds the desired and current state, which is essential for OpenShift's self-correcting behaviour.
Core Kubernetes provides the Kubernetes API server, scheduler and cluster management services, while OpenShift adds the OpenShift API server, the Operator Lifecycle Manager (OLM) and the web console.
Authentication: through role-based access control (RBAC) and external identity management systems. All users access OpenShift through the same standard interface.
Masters: monitor pod health and automatically scale / restart pods.
Scheduler: Uses real-world topology like regions and zones. Responsible for determining pod placement.
Container registry: An integrated registry is included with OpenShift Container Platform.
Application data: containers are ephemeral, so persistent storage can be used to allow stateful apps.
Routing layer: gives external clients access to apps running inside OpenShift.
ReplicaSet: used by Kubernetes, while OpenShift uses the Replication Controller. Both ensure the required number of pod replicas is running at all times.
Deployment: defines how to roll out new versions of pods, and creates ReplicaSets or replication controllers to do so (see the sketch below).
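To make these pieces concrete, here is a minimal sketch of a Deployment that keeps three replicas of a pod running (the application name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                # hypothetical application name
spec:
  replicas: 3                    # desired number of pod replicas
  selector:
    matchLabels:
      app: hello-app
  template:                      # pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```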
Operators -
Operators are pods running operator code that interact with the Kubernetes API server and make sure the specified state of objects is achieved. An operator is a way of building and driving an application on top of Kubernetes, behind the Kubernetes APIs. The Operator Framework toolkit includes an SDK, the Operator Lifecycle Manager (OLM) and Operator Metering.
Operators are shared in the OperatorHub.io community.
Networking Workflow -
Routes expose services, making them reachable externally via a host name (see the sketch at the end of this section).
Routers are different from routes. The router container can run on any node host and provides the routing layer required to reach applications from outside the OpenShift environment.
A route object consists of a host name, a route name, a service selector, and optional security configurations. An OpenShift router consumes routes and is a deployment of one or more pods that forwards network traffic to the proper pods.
For internal communication there is pod-to-pod connectivity, using Open vSwitch to route packets to the OpenShift node hosting the target container.
Services have DNS names internal to OpenShift and also provide load balancing across their pods.
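As a sketch, a Route that exposes a Service behind an external host name might look like this (the host and service names are hypothetical):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-app                # route name
spec:
  host: hello-app.apps.example.com   # hypothetical external host name
  to:
    kind: Service
    name: hello-app              # the service this route forwards traffic to
  tls:
    termination: edge            # optional security configuration
```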
Container Deployment Workflow -
A new application is requested via the CLI, web console or API; the request goes through the OpenShift API/authentication layer, the scheduler decides placement, and the worker nodes run the containers.
User Experience with Openshift:
Web Console -
Administrator and Developer perspectives are available. The Developer perspective is application-centric: you can view projects, set project access and view metrics.
The Administrator perspective is for project and cluster admins, and exposes advanced settings: updates, operators, CRDs, role bindings and resource quotas.
Users and Projects -
Projects restrict and track the use of resources with quotas and limits. An OpenShift project is an alternative representation of a Kubernetes namespace, with a few additional features on top of a basic namespace, such as managing resource access for regular users and the ability to limit resource consumption.
Basically, namespaces are not available to just anyone: an administrator needs to grant users access to a project (a sketch of such a role binding follows below).
Each project has objects, policies, constraints and service accounts.
You can have regular users and system users.
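For example, granting a regular user access to a project is typically done with a role binding; a minimal sketch, assuming a hypothetical user and project:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: project-admins
  namespace: my-project          # hypothetical project (namespace)
subjects:
- kind: User
  name: alice                    # hypothetical regular user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                    # built-in role for project administration
  apiGroup: rbac.authorization.k8s.io
```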
Quotas and Limits -
Quotas set limits on the number of objects in a project and on the amount of compute, memory and storage resources; labels can be used to set the boundaries (a sketch follows below).
LimitRanges set default, minimum and maximum resource requests and limits for pods and containers in a project; the resulting requests are what the scheduler uses when assigning pods to nodes.
You can view quota usage from the console. When you exceed a quota, the error message shows the current usage statistics.
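A minimal sketch of a ResourceQuota and a LimitRange for a project (the names and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: my-project          # hypothetical project
spec:
  hard:
    pods: "10"                   # maximum number of pods in the project
    requests.cpu: "4"            # total CPU that pods may request
    requests.memory: 8Gi         # total memory that pods may request
---
apiVersion: v1
kind: LimitRange
metadata:
  name: project-limits
  namespace: my-project
spec:
  limits:
  - type: Container
    default:                     # default limits applied to containers
      cpu: 500m
      memory: 512Mi
    defaultRequest:              # default requests used by the scheduler
      cpu: 100m
      memory: 256Mi
```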
Logging and Monitoring -
Container log aggregation with EFK stack -
Aggregates logs from hosts and applications, and is used by the cluster admin. It is made of three components:
- Elasticsearch: Object store where all logs are stored
- Fluentd: Gathers logs from nodes, feeds them to Elasticsearch
- Kibana: Web UI for Elasticsearch
Metrics collection and alerting with Prometheus -
Horizontal pod autoscaling can be configured based on metrics collected by Prometheus.
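For example, a horizontal pod autoscaler that scales a deployment on CPU utilisation could look like this (the target deployment name is hypothetical):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app              # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75   # add pods when average CPU goes above 75%
```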
Templates, Operators, and Helm 3 -
Templates are not associated with pods, and can only create resources.
Operators are implemented by operator pods and can create, manage and delete resources.
Helm 3 charts are packages of templates and define how applications are packaged.
With templates, you can create instantly deployable applications for developers or customers. A template uses labels and parameters and describes a set of objects for OpenShift to create (see the sketch at the end of this section).
With an Operator, you can deploy and manage an application in the cluster.
Helm charts are similar to templates: a chart is a collection of files that describes a related set of Kubernetes resources.
Operators are pods that take advantage of custom resource definitions (CRDs). CRDs extend the Kubernetes/OpenShift API and allow the creation of custom resources (CRs). The operator then creates all the OpenShift resources that make up the application.
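As a sketch, a small OpenShift Template with a single parameter and one object (the names are hypothetical):

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: hello-app-template       # hypothetical template name
parameters:
- name: APP_NAME                 # parameter substituted when the template is processed
  value: hello-app
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
  spec:
    selector:
      app: ${APP_NAME}
    ports:
    - port: 8080
      targetPort: 8080
```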
Application Development with openshift:
Main goal of DevOps: speed up and increase flexibility for delivering new features and services.
Red Hat is built on openness and transparency, and it has the tools, platforms and infrastructure to enable DevOps for customers!
Deployment:
The configuration is written in a YAML file (a user-defined template).
In my previous blog on Kubernetes I wrote: the pod abstracts away the container, another object called a ReplicaSet manages pods, and Deployments manage ReplicaSets. OpenShift's equivalents are the Replication Controller (for the ReplicaSet) and the Deployment Configuration (for the Deployment). Deployments describe the desired state for a particular component of the application.
Deployments are version controlled, and rolling back to a previous version is possible if the latest template fails.
A Deployment Configuration defines a deployment strategy, which supports a variety of deployment scenarios, including maintaining availability during rollouts (a sketch follows after the strategies below).
Rolling deployment strategy: follows a canary-style model. New instances must pass readiness checks before old instances are replaced. It is the default strategy in OpenShift, but it requires old and new code to be able to run side by side; if they cannot, use the Recreate strategy.
Recreate strategy: has basic rollout features and supports lifecycle hooks for injecting code into the deployment process.
Custom Deployment Strategy: Use your own deployment strategies.
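A minimal DeploymentConfig sketch showing where the strategy is set (the names and image are hypothetical):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    app: hello-app
  strategy:
    type: Rolling                # the default; Recreate and Custom are the alternatives
    rollingParams:
      maxUnavailable: 25%        # how many old pods may be down during the rollout
      maxSurge: 25%              # how many extra new pods may be created
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:1.1   # hypothetical image
```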
CI / CD
DevOps = Continuous Integration (CI) + Continuous Deployment (CD)
Openshift supports DevOps.
Jenkins invokes the OCP master to start a build, which then has the worker nodes build images via S2I.
Build
Building is (most of the time) a process of creating runnable images from source code.
OpenShift can then create containers from the image and push the image to the integrated registry.
For build strategies, there are the container (Dockerfile) build and the S2I build, which create runnable images, and the custom build, which creates whatever build image the author specifies.
A container build works like a podman build from a Dockerfile.
S2I produces ready-to-run images and supports incremental builds; it is generally faster and more secure than maintaining Dockerfiles by hand, among other benefits.
With custom build, you can use your own builder image to customise the build process.
Image streams are used for builds; an image stream is similar to a container registry in that it presents a single view of related images. Images are identified by tags and can be added to the associated image stream (see the BuildConfig sketch below, which outputs to an image stream).
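As a sketch, an S2I BuildConfig that builds from a Git repository and pushes the result to an image stream (the repository, builder image and names are hypothetical):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: hello-app
spec:
  source:
    git:
      uri: https://github.com/example/hello-app.git   # hypothetical source repository
  strategy:
    sourceStrategy:              # S2I build strategy
      from:
        kind: ImageStreamTag
        name: nodejs:14-ubi8     # hypothetical builder image
        namespace: openshift
      incremental: true          # reuse artifacts from previous builds
  output:
    to:
      kind: ImageStreamTag
      name: hello-app:latest     # push the result to an image stream tag
```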
Service mesh:
Istio: Layer for service to service communication.
Jaeger: Scalable distributed tracing, providing observability for microservices.
Kiali: Visualises the OpenShift Service Mesh topology.
Serverless technology: Uses Serving and Eventing with Knative.
Pipelines:
Pipelines help to define the entire application lifecycle (e.g. a Jenkins pipeline).
Pipelines can survive unplanned restarts, can be paused to take human input, are flexible to allow fork or join and extensible with plugins.
Define the pipeline workflow in a Jenkinsfile (a minimal sketch follows below).
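One way to express such a pipeline in OpenShift is a BuildConfig using the JenkinsPipeline strategy with an inline Jenkinsfile; a minimal sketch, assuming a hypothetical application (the Jenkinsfile can also live in the source repository):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: hello-app-pipeline
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-            # inline Jenkinsfile defining the workflow
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                sh 'echo building hello-app'   // hypothetical build step
              }
            }
          }
        }
```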