Using Kubernetes via AI
- Docker Desktop
- Install Kubectl-ai
```
brew tap sozercan/kubectl-ai https://github.com/sozercan/kubectl-ai
brew install kubectl-ai
```
- Get OpenAI Keys via https://platform.openai.com/account/api-keys
kubectl-ai requires an OpenAI API key or an Azure OpenAI Service API key and endpoint, and a valid Kubernetes configuration.
```
export OPENAI_API_KEY=<your OpenAI key>
```
Installing on CentOS
```
yum install wget
wget https://github.com/sozercan/kubectl-ai/releases/download/v0.0.10/kubectl-ai_linux_amd64.tar.gz
tar xvf kubectl-ai_linux_amd64.tar.gz
mv kubectl-ai /usr/local/bin/kubectl-ai
```
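To confirm the installation worked, you can invoke the binary directly or through kubectl's plugin mechanism (a quick sketch; assumes the steps above completed and `/usr/local/bin` is on your `PATH`):

```shell
# Verify the binary is reachable and prints its usage
kubectl-ai --help

# kubectl discovers any executable named kubectl-<name> on PATH as a plugin,
# so the same tool is also available as a kubectl subcommand:
kubectl ai --help
```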
Setting up Kubeview
Assuming that you have already installed Git and Helm on your laptop, follow the steps below:
```
git clone https://github.com/benc-uk/kubeview
cd kubeview/charts/
helm install kubeview kubeview
```
Testing it locally
```
kubectl port-forward svc/kubeview -n default 80:80
```
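With the port-forward running, KubeView should be reachable in a browser. Note that binding local port 80 usually requires elevated privileges; forwarding from a high local port avoids that (a sketch, assuming the chart's Service listens on port 80):

```shell
# Forward unprivileged local port 8080 to the kubeview Service's port 80
kubectl port-forward svc/kubeview -n default 8080:80
# then open http://localhost:8080 in a browser
```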
Deploying Pod using namespace
```
kubectl ai "Create a namespace called ns1 and deploy a Nginx Pod"
✨ Attempting to apply the following manifest:
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
  + Reprompt
  ▸ Apply
    Don't Apply
```
The generated manifest creates the ns1 namespace and a single-replica Nginx Deployment exposing port 80. Choosing Apply at the prompt applies it for you; alternatively, you can save the manifest to a file, for example nginx-deployment.yaml, and apply it yourself with kubectl apply -f nginx-deployment.yaml.
Difference between “Create” and “Deploy” [Be Careful]
```
kubectl ai "Create a namespace called ns1 and create a Nginx Pod"
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: ns1
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
✔ Apply
```
Accessing the Nginx Pod via Web Browser
```
kubectl port-forward nginx 8000:80 -n ns1
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
Handling connection for 8000
Handling connection for 8000
```
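From a second terminal, you can sanity-check the forwarded port before opening a browser (a sketch; the pod serves the default Nginx welcome page on port 80):

```shell
# Expect an HTTP 200 response with the "Welcome to nginx!" default page
curl -i http://localhost:8000/
```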
If I can access a Pod directly via a web browser, why do I need a Deployment?
While it is possible to access a pod directly via a web browser, using a deployment provides several benefits and is generally recommended in a production environment. Deployments help manage the lifecycle of your application and provide features such as scalability, rolling updates, and fault tolerance. Here are a few reasons why using a deployment is beneficial:
Replication and Scalability: Deployments allow you to specify the desired number of replicas for your application. This means that multiple identical pods will be created, providing scalability and load balancing. If one pod fails or becomes unavailable, the deployment will automatically create a new replica to ensure that the desired number of pods is maintained.
Rolling Updates: Deployments support rolling updates, which allow you to update your application without incurring downtime. You can update the pod template in the deployment specification, and the deployment controller will manage the update process by gradually replacing old pods with new ones. This ensures a smooth transition and minimizes any impact on your application’s availability.
Versioning and Rollbacks: Deployments enable you to manage different versions of your application. If an update introduces issues or unexpected behavior, you can easily roll back to a previous version by specifying the desired revision. This helps maintain application stability and allows for quick recovery in case of problems.
Health Checks and Self-Healing: Deployments include health checks that monitor the status of pods. If a pod fails the health check, it will be automatically restarted or replaced by a new pod. This self-healing mechanism ensures the availability and reliability of your application.
Declarative Configuration: Deployments use a declarative approach, where you define the desired state of your application, and the deployment controller ensures that the current state matches the desired state. This simplifies management and allows for easier collaboration and reproducibility.
Integration with Other Kubernetes Resources: Deployments integrate well with other Kubernetes resources, such as services, ingress, and horizontal pod autoscalers. This enables you to expose your application externally, manage traffic routing, and automatically scale your application based on resource utilization or custom metrics.
While it’s possible to directly access a pod via a web browser, using a deployment provides a more robust and scalable solution for managing your application in a production environment. It offers benefits such as replication, scalability, rolling updates, versioning, self-healing, and integration with other Kubernetes resources. By leveraging deployments, you can ensure the stability, availability, and efficient management of your application throughout its lifecycle.
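Several of these benefits map directly onto kubectl commands you can try against the Deployment created earlier (a sketch, assuming the nginx-deployment in namespace ns1 from the example above; the image tag used for the update is illustrative):

```shell
# Replication and scalability: change the replica count declaratively
kubectl scale deployment nginx-deployment -n ns1 --replicas=5

# Rolling updates: change the container image and watch the rollout proceed
kubectl set image deployment/nginx-deployment nginx=nginx:1.25 -n ns1
kubectl rollout status deployment/nginx-deployment -n ns1

# Versioning and rollbacks: return to the previous revision
kubectl rollout undo deployment/nginx-deployment -n ns1

# Self-healing: delete a pod and watch the Deployment replace it
kubectl delete pod -l app=nginx -n ns1 --wait=false
kubectl get pods -n ns1 -w
```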
Here is an example of deploying 3 replicas in a specific namespace:
```
kubectl ai "create an nginx deployment with 3 replicas under namespace ns1"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: webpage
        image: ajeetraina/webpage
        ports:
        - containerPort: 80
✔ Apply
```
```
kubectl ai "create an nginx deployment with 3 replicas under namespace ns1 and this time create service type as NodePort"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: webpage
        image: ajeetraina/webpage
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: ns1
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
  + Reprompt
  ▸ Apply
    Don't Apply
```
Listing the Kubernetes Resources
```
kubectl get po,deploy,svc -n ns1
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-58945458f5-5pk6b   1/1     Running   0          28s
pod/nginx-deployment-58945458f5-7htd7   1/1     Running   0          28s
pod/nginx-deployment-58945458f5-s6cxm   1/1     Running   0          28s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           28s

NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/nginx-service   NodePort   10.100.230.251   <none>        80:30080/TCP   28s
```
Quick Question: Why does it say `<none>` under EXTERNAL-IP for the nginx service?
The External-IP is not displayed for your service because you are using a NodePort type service.
When you create a NodePort service in Kubernetes, it exposes your service on a specific port on each node in the cluster. However, by default, the External-IP field remains `<none>` for NodePort services.
In the case of a local development environment like Docker Desktop with Kubernetes enabled, the External-IP field will typically show `<none>`.
To access your service in such cases, you can use the cluster’s IP address or the NodePort. In your example, the service nginx-service is accessible on port 30080 of any node in your cluster’s IP address. You can use the cluster’s IP along with the NodePort to access your Nginx service in a web browser.
If you need an external IP for your service, you have a few options:
Use a LoadBalancer service type: If your Kubernetes cluster is running in a cloud provider that supports LoadBalancer services (such as AWS, GCP, or Azure), you can use the LoadBalancer service type. This will provision an external load balancer and assign an external IP to your service.
Use an Ingress resource: If you want to expose your service using a domain name or path-based routing, you can use an Ingress resource. Ingress allows you to define rules for routing external traffic to your service. However, note that you need an Ingress controller set up in your cluster to handle the Ingress resource.
Use a port-forwarding technique: In a local development environment, you can use port-forwarding to access your service directly from your local machine without an external IP. This allows you to forward traffic from a specific port on your local machine to the service running in the cluster. You can use the kubectl port-forward command to achieve this.
Remember that the availability of these options may depend on your Kubernetes environment and the infrastructure you are using.
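For instance, switching to a LoadBalancer is a one-line change to the Service manifest (a sketch based on the nginx-service above; on cloud providers an external IP is allocated automatically, while Docker Desktop typically maps it to localhost):

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: ns1
spec:
  type: LoadBalancer   # instead of NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```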
Running Multiple Containers in a Pod
```
kubectl ai "Create a namespace ns3, create a Pod with two containers, the first named todo-list using a Docker image ajeetraina/todo and a second container based on prom/prometheus:v2.30.3 docker image and container exposed to port 9090"
```
It generates the following manifest:
```
apiVersion: v1
kind: Namespace
metadata:
  name: ns3
---
apiVersion: v1
kind: Pod
metadata:
  name: todo-list
  namespace: ns3
spec:
  containers:
  - name: todo-list
    image: ajeetraina/todo
  - name: prometheus
    image: prom/prometheus:v2.30.3
    ports:
    - containerPort: 9090
```
```
kubectl port-forward todo-list 3000:3000 9090:9090 -n ns3
```
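From another terminal you can check both forwarded ports (a sketch; it assumes the todo app listens on port 3000 inside its container, and uses Prometheus's standard readiness endpoint):

```shell
# The todo-list container (assumption: the app serves on port 3000)
curl -i http://localhost:3000/

# Prometheus exposes a readiness endpoint at /-/ready
curl -i http://localhost:9090/-/ready
```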
```
kubectl ai "Create a namespace ns3, create a Pod with a container named todo-list using ajeetraina/todo as the Docker image, and create a ReplicaSet named web that manages 4 replicas of the container image; the ReplicaSet should manage Pods with the label role:web"
```
```
kubectl port-forward todo-list 3000:3000 -n ns3
```
```
kubectl get po -n ns3 --selector=role=web
NAME        READY   STATUS    RESTARTS   AGE
web-9t6vs   1/1     Running   0          6m58s
web-jbc6w   1/1     Running   0          6m58s
web-k8klv   1/1     Running   0          6m58s
web-rnm5x   1/1     Running   0          6m58s
```