Just a note to remember what I did a long time ago.
First, make sure you have kubectl installed (here via MacPorts):
alejandrogarcia@MacBook-Air-de-Alejandro Downloads % sudo port install kubectl-1.31
Password:
---> Computing dependencies for kubectl-1.31
The following dependencies will be installed: kubectl_select
Continue? [Y/n]: Y
---> Fetching archive for kubectl_select
---> Attempting to fetch kubectl_select-0.0.0_1.any_any.noarch.tbz2 from https://packages.macports.org/kubectl_select
---> Attempting to fetch kubectl_select-0.0.0_1.any_any.noarch.tbz2.rmd160 from https://packages.macports.org/kubectl_select
---> Installing kubectl_select @0.0.0_1
---> Activating kubectl_select @0.0.0_1
---> Cleaning kubectl_select
---> Fetching archive for kubectl-1.31
---> Attempting to fetch kubectl-1.31-1.31.1_0.darwin_20.x86_64.tbz2 from https://packages.macports.org/kubectl-1.31
---> Attempting to fetch kubectl-1.31-1.31.1_0.darwin_20.x86_64.tbz2.rmd160 from https://packages.macports.org/kubectl-1.31
---> Installing kubectl-1.31 @1.31.1_0
---> Activating kubectl-1.31 @1.31.1_0
---> Cleaning kubectl-1.31
---> Scanning binaries for linking errors
---> No broken files found.
---> No broken ports found.
---> Some of the ports you installed have notes:
kubectl-1.31 has the following notes:
To make this the default kubectl run:
sudo port select --set kubectl kubectl1.31
Make it the default kubectl, as the notes suggest:
alejandrogarcia@MacBook-Air-de-Alejandro Downloads % sudo port select --set kubectl kubectl1.31
Selecting 'kubectl1.31' for 'kubectl' succeeded. 'kubectl1.31' is now active.
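To double-check the selection took effect before moving on, something like this works:
which kubectl
kubectl version --client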
Then create the cluster with eksctl (this assumes eksctl is installed and AWS credentials are already configured, e.g. with aws configure):
alejandrogarcia@MacBook-Air-de-Alejandro ~ % eksctl create cluster -n cluster1 --nodegroup-name ng1 --region us-east-1 --node-type t2.micro --nodes 2
2024-09-26 22:05:29 [ℹ] eksctl version 0.190.0-dev
2024-09-26 22:05:29 [ℹ] using region us-east-1
2024-09-26 22:05:30 [ℹ] setting availability zones to [us-east-1a us-east-1d]
2024-09-26 22:05:30 [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2024-09-26 22:05:30 [ℹ] subnets for us-east-1d - public:192.168.32.0/19 private:192.168.96.0/19
2024-09-26 22:05:30 [ℹ] nodegroup "ng1" will use "" [AmazonLinux2/1.30]
2024-09-26 22:05:30 [ℹ] using Kubernetes version 1.30
2024-09-26 22:05:30 [ℹ] creating EKS cluster "cluster1" in "us-east-1" region with managed nodes
2024-09-26 22:05:30 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2024-09-26 22:05:30 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=cluster1'
2024-09-26 22:05:30 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cluster1" in "us-east-1"
2024-09-26 22:05:30 [ℹ] CloudWatch logging will not be enabled for cluster "cluster1" in "us-east-1"
2024-09-26 22:05:30 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=cluster1'
2024-09-26 22:05:30 [ℹ] default addons coredns, vpc-cni, kube-proxy were not specified, will install them as EKS addons
2024-09-26 22:05:30 [ℹ]
2 sequential tasks: { create cluster control plane "cluster1",
    2 sequential sub-tasks: {
        2 sequential sub-tasks: {
            1 task: { create addons },
            wait for control plane to become ready,
        },
        create managed nodegroup "ng1",
    }
}
2024-09-26 22:05:30 [ℹ] building cluster stack "eksctl-cluster1-cluster"
2024-09-26 22:05:33 [ℹ] deploying stack "eksctl-cluster1-cluster"
2024-09-26 22:06:03 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:06:33 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:07:34 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:08:35 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:09:36 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:10:37 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:11:38 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:12:39 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:13:40 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-cluster"
2024-09-26 22:13:45 [ℹ] creating addon
2024-09-26 22:13:45 [ℹ] successfully created addon
2024-09-26 22:13:46 [!] recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon`
2024-09-26 22:13:46 [ℹ] creating addon
2024-09-26 22:13:46 [ℹ] successfully created addon
2024-09-26 22:13:47 [ℹ] creating addon
2024-09-26 22:13:47 [ℹ] successfully created addon
2024-09-26 22:15:51 [ℹ] building managed nodegroup stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:15:52 [ℹ] deploying stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:15:52 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:16:23 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:17:09 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:18:29 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:18:30 [ℹ] waiting for the control plane to become ready
2024-09-26 22:18:33 [✔] saved kubeconfig as "/Users/alejandrogarcia/.kube/config"
2024-09-26 22:18:33 [ℹ] no tasks
2024-09-26 22:18:33 [✔] all EKS cluster resources for "cluster1" have been created
2024-09-26 22:18:33 [✔] created 0 nodegroup(s) in cluster "cluster1"
2024-09-26 22:18:34 [ℹ] nodegroup "ng1" has 2 node(s)
2024-09-26 22:18:34 [ℹ] node "ip-192-168-12-168.ec2.internal" is ready
2024-09-26 22:18:34 [ℹ] node "ip-192-168-54-140.ec2.internal" is ready
2024-09-26 22:18:34 [ℹ] waiting for at least 2 node(s) to become ready in "ng1"
2024-09-26 22:18:34 [ℹ] nodegroup "ng1" has 2 node(s)
2024-09-26 22:18:34 [ℹ] node "ip-192-168-12-168.ec2.internal" is ready
2024-09-26 22:18:34 [ℹ] node "ip-192-168-54-140.ec2.internal" is ready
2024-09-26 22:18:34 [✔] created 1 managed nodegroup(s) in cluster "cluster1"
2024-09-26 22:18:34 [✖] kubectl not found, v1.10.0 or newer is required
2024-09-26 22:18:34 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2024-09-26 22:18:34 [✔] EKS cluster "cluster1" in "us-east-1" region is ready
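The "kubectl not found" warning at the end is harmless here: it only means eksctl could not find a kubectl binary on its own PATH (presumably the MacPorts /opt/local/bin location), while the cluster and the saved kubeconfig are fine. If the kubeconfig ever needs to be regenerated, the AWS CLI can do it (assuming it is installed and configured):
aws eks update-kubeconfig --name cluster1 --region us-east-1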
You can see the nodes running:
alejandrogarcia@MacBook-Air-de-Alejandro ~ % kubectl get nodes
NAME                             STATUS   ROLES    AGE     VERSION
ip-192-168-12-168.ec2.internal   Ready    <none>   5m45s   v1.30.4-eks-a737599
ip-192-168-54-140.ec2.internal   Ready    <none>   5m47s   v1.30.4-eks-a737599
Just to test, run a pod in the cluster:
alejandrogarcia@MacBook-Air-de-Alejandro nginx-with-volume % kubectl apply -f 01-pod.yaml
pod/nginx-01 created
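The contents of 01-pod.yaml are not recorded here; judging by the directory name (nginx-with-volume) and the pod name, it was roughly something like this (a minimal sketch; the volume name, volume type and mount path are guesses):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-01
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: html                        # assumed volume name
          mountPath: /usr/share/nginx/html
  volumes:
    - name: html
      emptyDir: {}                          # assumed volume type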
You can see the cluster creating the pod:
alejandrogarcia@MacBook-Air-de-Alejandro nginx-with-volume % kubectl get all
NAME           READY   STATUS              RESTARTS   AGE
pod/nginx-01   0/1     ContainerCreating   0          7s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m
alejandrogarcia@MacBook-Air-de-Alejandro nginx-with-volume % kubectl get all
NAME           READY   STATUS              RESTARTS   AGE
pod/nginx-01   0/1     ContainerCreating   0          14s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m
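Instead of re-running kubectl get all until the pod comes up, you can watch it or wait for readiness:
kubectl get pod nginx-01 -w
kubectl wait --for=condition=Ready pod/nginx-01 --timeout=2m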
To delete the cluster:
alejandrogarcia@MacBook-Air-de-Alejandro nginx-with-volume % eksctl delete cluster -n cluster1
2024-09-26 22:53:16 [ℹ] deleting EKS cluster "cluster1"
2024-09-26 22:53:18 [ℹ] will drain 0 unmanaged nodegroup(s) in cluster "cluster1"
2024-09-26 22:53:18 [ℹ] starting parallel draining, max in-flight of 1
2024-09-26 22:53:18 [✖] failed to acquire semaphore while waiting for all routines to finish: context canceled
2024-09-26 22:53:20 [ℹ] deleted 0 Fargate profile(s)
2024-09-26 22:53:21 [✔] kubeconfig has been updated
2024-09-26 22:53:21 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2024-09-26 22:53:26 [ℹ]
2 sequential tasks: { delete nodegroup "ng1", delete cluster control plane "cluster1" [async]
}
2024-09-26 22:53:26 [ℹ] will delete stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:53:26 [ℹ] waiting for stack "eksctl-cluster1-nodegroup-ng1" to get deleted
2024-09-26 22:53:26 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:53:57 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:54:53 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:56:14 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:58:15 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 22:59:27 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 23:00:25 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 23:01:20 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 23:02:00 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 23:02:42 [ℹ] waiting for CloudFormation stack "eksctl-cluster1-nodegroup-ng1"
2024-09-26 23:02:42 [ℹ] will delete stack "eksctl-cluster1-cluster"
2024-09-26 23:02:43 [✔] all cluster resources were deleted
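To confirm nothing was left behind, list the remaining clusters (and, if in doubt, check the CloudFormation console for leftover eksctl-cluster1-* stacks):
eksctl get cluster --region us-east-1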