# Amazon EKS Construct Library
This construct library allows you to define [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/) clusters.
The library also supports defining Kubernetes resource manifests within EKS clusters.
## Table Of Contents
- [Amazon EKS Construct Library](#amazon-eks-construct-library)
- [Table Of Contents](#table-of-contents)
- [Quick Start](#quick-start)
- [Architectural Overview](#architectural-overview)
- [Provisioning clusters](#provisioning-clusters)
- [Managed node groups](#managed-node-groups)
- [Node Groups with IPv6 Support](#node-groups-with-ipv6-support)
- [Spot Instances Support](#spot-instances-support)
- [Launch Template Support](#launch-template-support)
- [Update clusters](#update-clusters)
- [Fargate profiles](#fargate-profiles)
- [Self-managed nodes](#self-managed-nodes)
- [Spot Instances](#spot-instances)
- [Bottlerocket](#bottlerocket)
- [Endpoint Access](#endpoint-access)
- [Alb Controller](#alb-controller)
- [VPC Support](#vpc-support)
- [Kubectl Handler](#kubectl-handler)
- [Cluster Handler](#cluster-handler)
- [IPv6 Support](#ipv6-support)
- [Kubectl Support](#kubectl-support)
- [Environment](#environment)
- [Runtime](#runtime)
- [Memory](#memory)
- [ARM64 Support](#arm64-support)
- [Masters Role](#masters-role)
- [Encryption](#encryption)
- [Hybrid Nodes](#hybrid-nodes)
- [Self-Managed Add-ons](#self-managed-add-ons)
- [Permissions and Security](#permissions-and-security)
- [AWS IAM Mapping](#aws-iam-mapping)
- [Access Config](#access-config)
- [Access Entry](#access-entry)
- [Cluster Security Group](#cluster-security-group)
- [Node SSH Access](#node-ssh-access)
- [Service Accounts](#service-accounts)
- [Pod Identities](#pod-identities)
- [Applying Kubernetes Resources](#applying-kubernetes-resources)
- [Kubernetes Manifests](#kubernetes-manifests)
- [ALB Controller Integration](#alb-controller-integration)
- [Adding resources from a URL](#adding-resources-from-a-url)
- [Dependencies](#dependencies)
- [Resource Pruning](#resource-pruning)
- [Manifests Validation](#manifests-validation)
- [Helm Charts](#helm-charts)
- [OCI Charts](#oci-charts)
- [CDK8s Charts](#cdk8s-charts)
- [Custom CDK8s Constructs](#custom-cdk8s-constructs)
- [Manually importing k8s specs and CRD's](#manually-importing-k8s-specs-and-crds)
- [Patching Kubernetes Resources](#patching-kubernetes-resources)
- [Querying Kubernetes Resources](#querying-kubernetes-resources)
- [Add-ons](#add-ons)
- [Using existing clusters](#using-existing-clusters)
- [Logging](#logging)
- [Known Issues and Limitations](#known-issues-and-limitations)
## Quick Start
This example defines an Amazon EKS cluster with the following configuration:
* Dedicated VPC with default configuration (Implicitly created using [ec2.Vpc](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ec2-readme.html#vpc))
* A Kubernetes pod with a container based on the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes) image.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
// provisioning a cluster
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
// apply a kubernetes manifest to the cluster
cluster.addManifest('mypod', {
apiVersion: 'v1',
kind: 'Pod',
metadata: { name: 'mypod' },
spec: {
containers: [
{
name: 'hello',
image: 'paulbouwer/hello-kubernetes:1.5',
ports: [ { containerPort: 8080 } ],
},
],
},
});
```
## Architectural Overview
The following is a qualitative diagram of the various possible components involved in the cluster deployment.
```text
+-----------------------------------------------+ +-----------------+
| EKS Cluster | kubectl | |
| ----------- |<-------------+| Kubectl Handler |
| | | |
| | +-----------------+
| +--------------------+ +-----------------+ |
| | | | | |
| | Managed Node Group | | Fargate Profile | | +-----------------+
| | | | | | | |
| +--------------------+ +-----------------+ | | Cluster Handler |
| | | |
+-----------------------------------------------+ +-----------------+
^ ^ +
| | |
| connect self managed capacity | | aws-sdk
| | create/update/delete |
+ | v
+--------------------+ + +-------------------+
| | --------------+| eks.amazonaws.com |
| Auto Scaling Group | +-------------------+
| |
+--------------------+
```
In a nutshell:
* `EKS Cluster` - The cluster endpoint created by EKS.
* `Managed Node Group` - EC2 worker nodes managed by EKS.
* `Fargate Profile` - Fargate worker nodes managed by EKS.
* `Auto Scaling Group` - EC2 worker nodes managed by the user.
* `KubectlHandler` - Lambda function for invoking `kubectl` commands on the cluster - created by CDK.
* `ClusterHandler` - Lambda function for interacting with EKS API to manage the cluster lifecycle - created by CDK.
A more detailed breakdown of each is provided further down this README.
## Provisioning clusters
Creating a new cluster is done using the `Cluster` or `FargateCluster` constructs. The only required properties are the Kubernetes `version` and `kubectlLayer`.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
You can also use `FargateCluster` to provision a cluster that uses only Fargate workers.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.FargateCluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
> **NOTE: Only 1 cluster per stack is supported.** If you have a use-case for multiple clusters per stack, or would like to understand more about this limitation, see <https://github.com/aws/aws-cdk/issues/10073>.
Below you'll find a few important cluster configuration options, the first of which is capacity.
Capacity is the amount and type of worker nodes that are available to the cluster for deploying resources. Amazon EKS offers three ways of configuring capacity, which you can combine as you like:
### Managed node groups
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
With Amazon EKS managed node groups, you don't need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
> For more details visit [Amazon EKS Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).
**Managed Node Groups are the recommended way to allocate cluster capacity.**
By default, this library will allocate a managed node group with 2 *m5.large* instances (this instance type suits most common use-cases, and is good value for money).
At cluster instantiation time, you can customize the number of instances and their type:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
defaultCapacity: 5,
defaultCapacityInstance: ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL),
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
To access the node group that was created on your behalf, you can use `cluster.defaultNodegroup`.
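For instance, here is a minimal sketch that references the generated node group, e.g. to export its name as a stack output (the output id is illustrative):
```ts
declare const cluster: eks.Cluster;
// the default node group only exists when `defaultCapacity` is not set to 0
if (cluster.defaultNodegroup) {
  new CfnOutput(this, 'DefaultNodegroupName', {
    value: cluster.defaultNodegroup.nodegroupName,
  });
}
```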
Additional customizations are available post instantiation. To apply them, set the default capacity to 0, and use the `cluster.addNodegroupCapacity` method:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
defaultCapacity: 0,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
cluster.addNodegroupCapacity('custom-node-group', {
instanceTypes: [new ec2.InstanceType('m5.large')],
minSize: 4,
diskSize: 100,
});
```
To set node taints, use the `taints` option.
```ts
declare const cluster: eks.Cluster;
cluster.addNodegroupCapacity('custom-node-group', {
instanceTypes: [new ec2.InstanceType('m5.large')],
taints: [
{
effect: eks.TaintEffect.NO_SCHEDULE,
key: 'foo',
value: 'bar',
},
],
});
```
To define the AMI type for the node group, you may explicitly set `amiType` according to your requirements. The supported AMI types can be found [here](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks.NodegroupAmiType.html).
```ts
declare const cluster: eks.Cluster;
// X86_64 based AMI managed node group
cluster.addNodegroupCapacity('custom-node-group', {
instanceTypes: [new ec2.InstanceType('m5.large')], // NOTE: if amiType is x86_64-based image, the instance types here must be x86_64-based.
amiType: eks.NodegroupAmiType.AL2023_X86_64_STANDARD,
});
// ARM_64 based AMI managed node group
cluster.addNodegroupCapacity('custom-node-group', {
instanceTypes: [new ec2.InstanceType('m6g.medium')], // NOTE: if amiType is ARM-based image, the instance types here must be ARM-based.
amiType: eks.NodegroupAmiType.AL2023_ARM_64_STANDARD,
});
```
To define the maximum number of instances which can be simultaneously replaced in a node group during a version update you can set `maxUnavailable` or `maxUnavailablePercentage` options.
> For more details visit [Updating a managed node group](https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html)
```ts
declare const cluster: eks.Cluster;
cluster.addNodegroupCapacity('custom-node-group', {
instanceTypes: [new ec2.InstanceType('m5.large')],
maxSize: 5,
maxUnavailable: 2,
});
```
```ts
declare const cluster: eks.Cluster;
cluster.addNodegroupCapacity('custom-node-group', {
instanceTypes: [new ec2.InstanceType('m5.large')],
maxUnavailablePercentage: 33,
});
```
> **NOTE:** If you add instances with the inferentia class (`inf1` or `inf2`) or trainium class (`trn1` or `trn1n`)
> the [neuron plugin](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/containers/dlc-then-eks-devflow.html)
> will be automatically installed in the kubernetes cluster.
#### Node Groups with IPv6 Support
Node groups are available with IPv6 configured networks. For custom roles assigned to node groups, additional permissions are necessary in order for pods to obtain an IPv6 address. The default node role will include these permissions.
> For more details visit [Configuring the Amazon VPC CNI plugin for Kubernetes to use IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role)
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const ipv6Management = new iam.PolicyDocument({
statements: [new iam.PolicyStatement({
resources: ['arn:aws:ec2:*:*:network-interface/*'],
actions: [
'ec2:AssignIpv6Addresses',
'ec2:UnassignIpv6Addresses',
],
})],
});
const eksClusterNodeGroupRole = new iam.Role(this, 'eksClusterNodeGroupRole', {
roleName: 'eksClusterNodeGroupRole',
assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
managedPolicies: [
iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSWorkerNodePolicy'),
iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEC2ContainerRegistryReadOnly'),
iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKS_CNI_Policy'),
],
inlinePolicies: {
ipv6Management,
},
});
const cluster = new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
defaultCapacity: 0,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
cluster.addNodegroupCapacity('custom-node-group', {
instanceTypes: [new ec2.InstanceType('m5.large')],
minSize: 2,
diskSize: 100,
nodeRole: eksClusterNodeGroupRole,
});
```
#### Spot Instances Support
Use `capacityType` to create managed node groups comprised of spot instances. To maximize the availability of your applications while using
Spot Instances, we recommend that you configure a Spot managed node group to use multiple instance types with the `instanceTypes` property.
> For more details visit [Managed node group capacity types](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#managed-node-group-capacity-types).
```ts
declare const cluster: eks.Cluster;
cluster.addNodegroupCapacity('extra-ng-spot', {
instanceTypes: [
new ec2.InstanceType('c5.large'),
new ec2.InstanceType('c5a.large'),
new ec2.InstanceType('c5d.large'),
],
minSize: 3,
capacityType: eks.CapacityType.SPOT,
});
```
#### Launch Template Support
You can specify a launch template that the node group will use. For example, this can be useful if you want to use
a custom AMI or add custom user data.
When supplying a custom user data script, it must be encoded in the MIME multi-part archive format, since Amazon EKS merges it with its own user data. Visit the [Launch Template Docs](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data)
for more details.
```ts
declare const cluster: eks.Cluster;
const userData = `MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
echo "Running custom user data script"
--==MYBOUNDARY==--\\
`;
const lt = new ec2.CfnLaunchTemplate(this, 'LaunchTemplate', {
launchTemplateData: {
instanceType: 't3.small',
userData: Fn.base64(userData),
},
});
cluster.addNodegroupCapacity('extra-ng', {
launchTemplateSpec: {
id: lt.ref,
version: lt.attrLatestVersionNumber,
},
});
```
Note that when using a custom AMI, Amazon EKS doesn't merge any user data, which means you do not need the multi-part encoding and are responsible for supplying the required bootstrap commands for nodes to join the cluster.
In the following example, `/etc/eks/bootstrap.sh` from the AMI will be used to bootstrap the node.
```ts
declare const cluster: eks.Cluster;
const userData = ec2.UserData.forLinux();
userData.addCommands(
'set -o xtrace',
`/etc/eks/bootstrap.sh ${cluster.clusterName}`,
);
const lt = new ec2.CfnLaunchTemplate(this, 'LaunchTemplate', {
launchTemplateData: {
imageId: 'some-ami-id', // custom AMI
instanceType: 't3.small',
userData: Fn.base64(userData.render()),
},
});
cluster.addNodegroupCapacity('extra-ng', {
launchTemplateSpec: {
id: lt.ref,
version: lt.attrLatestVersionNumber,
},
});
```
You may specify one `instanceType` in the launch template or multiple `instanceTypes` in the node group, **but not both**.
> For more details visit [Launch Template Support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html).
Graviton 2 instance types (`c6g`, `m6g`, `r6g`, and `t4g`) and Graviton 3 instance types (`c7g`) are supported.
### Update clusters
When you rename the cluster and redeploy the stack, a cluster replacement is triggered: the existing cluster is deleted only after the new one is provisioned. Because the cluster resource ARN changes, the cluster resource handler is no longer able to delete the old cluster, since the ARN in its IAM policy no longer matches. As a workaround, you need to add a temporary policy to the cluster admin role for the replacement to succeed. Consider this example if you are renaming the cluster from `foo` to `bar`:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.Cluster(this, 'cluster-to-rename', {
clusterName: 'foo', // rename this to 'bar'
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
version: eks.KubernetesVersion.V1_33,
});
// allow the cluster admin role to delete the cluster 'foo'
cluster.adminRole.addToPolicy(new iam.PolicyStatement({
actions: [
'eks:DeleteCluster',
'eks:DescribeCluster',
],
resources: [
Stack.of(this).formatArn({ service: 'eks', resource: 'cluster', resourceName: 'foo' }),
]
}))
```
### Fargate profiles
AWS Fargate is a technology that provides on-demand, right-sized compute
capacity for containers. With AWS Fargate, you no longer have to provision,
configure, or scale groups of virtual machines to run containers. This removes
the need to choose server types, decide when to scale your node groups, or
optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate
Profiles, which are defined as part of your Amazon EKS cluster.
See [Fargate Considerations](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html#fargate-considerations) in the AWS EKS User Guide.
You can add Fargate Profiles to any EKS cluster defined in your CDK app
through the `addFargateProfile()` method. The following example adds a profile
that will match all pods from the "default" namespace:
```ts
declare const cluster: eks.Cluster;
cluster.addFargateProfile('MyProfile', {
selectors: [ { namespace: 'default' } ],
});
```
You can also directly use the `FargateProfile` construct to create profiles under different scopes:
```ts
declare const cluster: eks.Cluster;
new eks.FargateProfile(this, 'MyProfile', {
cluster,
selectors: [ { namespace: 'default' } ],
});
```
To create an EKS cluster that **only** uses Fargate capacity, you can use `FargateCluster`.
The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to [run CoreDNS on Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-coredns).
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.FargateCluster(this, 'MyCluster', {
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
`FargateCluster` will create a default `FargateProfile` which can be accessed via the cluster's `defaultProfile` property. The created profile can also be customized by passing options as with `addFargateProfile`.
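For example, here is a sketch of customizing the default profile at construction time; the `defaultProfile` option and its fields are assumed from `FargateClusterProps`/`FargateProfileOptions`, and the values shown are illustrative:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.FargateCluster(this, 'MyCustomizedCluster', {
  version: eks.KubernetesVersion.V1_33,
  kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
  // customize the default profile instead of accepting the built-in selectors
  defaultProfile: {
    fargateProfileName: 'my-default-profile',
    selectors: [{ namespace: 'default' }, { namespace: 'kube-system' }],
  },
});
// the created profile can be referenced afterwards
const defaultProfile = cluster.defaultProfile;
```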
**NOTE**: Classic Load Balancers and Network Load Balancers are not supported on
pods running on Fargate. For ingress, we recommend that you use the [ALB Ingress
Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
on Amazon EKS (minimum version v1.1.4).
### Self-managed nodes
Another way of allocating capacity to an EKS cluster is by using self-managed nodes.
EC2 instances that are part of the auto-scaling group will serve as worker nodes for the cluster.
This type of capacity is also commonly referred to as *EC2 Capacity* or *EC2 Nodes*.
For a detailed overview please visit [Self Managed Nodes](https://docs.aws.amazon.com/eks/latest/userguide/worker.html).
Creating an auto-scaling group and connecting it to the cluster is done using the `cluster.addAutoScalingGroupCapacity` method:
```ts
declare const cluster: eks.Cluster;
cluster.addAutoScalingGroupCapacity('frontend-nodes', {
instanceType: new ec2.InstanceType('t2.medium'),
minCapacity: 3,
vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
});
```
To connect an already initialized auto-scaling group, use the `cluster.connectAutoScalingGroupCapacity()` method:
```ts
declare const cluster: eks.Cluster;
declare const asg: autoscaling.AutoScalingGroup;
cluster.connectAutoScalingGroupCapacity(asg, {});
```
To connect a self-managed node group to an imported cluster, use the `cluster.connectAutoScalingGroupCapacity()` method:
```ts
declare const cluster: eks.Cluster;
declare const asg: autoscaling.AutoScalingGroup;
const importedCluster = eks.Cluster.fromClusterAttributes(this, 'ImportedCluster', {
clusterName: cluster.clusterName,
clusterSecurityGroupId: cluster.clusterSecurityGroupId,
});
importedCluster.connectAutoScalingGroupCapacity(asg, {});
```
In both cases, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html#cluster-sg) will be automatically attached to
the auto-scaling group, allowing for traffic to flow freely between managed and self-managed nodes.
> **Note:** The default `updateType` for auto-scaling groups does not replace existing nodes. Since security groups are determined at launch time, self-managed nodes that were provisioned with version `1.78.0` or lower, will not be updated.
> To apply the new configuration on all your self-managed nodes, you'll need to replace the nodes using the `UpdateType.REPLACING_UPDATE` policy for the [`updateType`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-autoscaling.AutoScalingGroup.html#updatetypespan-classapi-icon-api-icon-deprecated-titlethis-api-element-is-deprecated-its-use-is-not-recommended%EF%B8%8Fspan) property.
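As a sketch (assuming aws-cdk-lib v2, where the deprecated `updateType` setting is expressed through the auto-scaling group's `updatePolicy`), a self-managed group configured for replacing updates could look like this:
```ts
declare const cluster: eks.Cluster;
declare const vpc: ec2.Vpc;
const asg = new autoscaling.AutoScalingGroup(this, 'SelfManagedNodes', {
  vpc,
  instanceType: new ec2.InstanceType('t3.large'),
  machineImage: new eks.EksOptimizedImage(),
  minCapacity: 2,
  // replace nodes (rather than updating them in place) so new launch settings take effect
  updatePolicy: autoscaling.UpdatePolicy.replacingUpdate(),
});
cluster.connectAutoScalingGroupCapacity(asg, {});
```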
You can customize the [/etc/eks/bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) script, which is responsible
for bootstrapping the node to the EKS cluster. For example, you can use `kubeletExtraArgs` to add custom node labels or taints.
```ts
declare const cluster: eks.Cluster;
cluster.addAutoScalingGroupCapacity('spot', {
instanceType: new ec2.InstanceType('t3.large'),
minCapacity: 2,
bootstrapOptions: {
kubeletExtraArgs: '--node-labels foo=bar,goo=far',
awsApiRetryAttempts: 5,
},
});
```
To disable bootstrapping altogether (i.e. to fully customize user-data), set `bootstrapEnabled` to `false`.
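For example, a minimal sketch that disables bootstrapping and supplies bootstrap commands through the returned auto-scaling group's user data (the command shown is illustrative):
```ts
declare const cluster: eks.Cluster;
const asg = cluster.addAutoScalingGroupCapacity('custom-bootstrap-nodes', {
  instanceType: new ec2.InstanceType('t3.large'),
  minCapacity: 2,
  bootstrapEnabled: false,
});
// with bootstrapping disabled, you are responsible for joining the nodes to the cluster
asg.addUserData(`/etc/eks/bootstrap.sh ${cluster.clusterName}`);
```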
You can also configure the cluster to use an auto-scaling group as the default capacity:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
defaultCapacityType: eks.DefaultCapacityType.EC2,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
This will allocate an auto-scaling group with 2 *m5.large* instances (this instance type suits most common use-cases, and is good value for money).
To access the `AutoScalingGroup` that was created on your behalf, you can use `cluster.defaultCapacity`.
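For example, a sketch that adds a scheduled scaling action to the default capacity (the schedule and capacity values are illustrative):
```ts
declare const cluster: eks.Cluster;
// `defaultCapacity` is undefined when the cluster was created with `defaultCapacity: 0`
cluster.defaultCapacity?.scaleOnSchedule('ScaleDownAtNight', {
  schedule: autoscaling.Schedule.cron({ hour: '20', minute: '0' }),
  desiredCapacity: 1,
});
```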
You can also independently create an `AutoScalingGroup` and connect it to the cluster using the `cluster.connectAutoScalingGroupCapacity` method:
```ts
declare const cluster: eks.Cluster;
declare const asg: autoscaling.AutoScalingGroup;
cluster.connectAutoScalingGroupCapacity(asg, {});
```
This will add the necessary user-data to access the apiserver and configure all connections, roles, and tags needed for the instances in the auto-scaling group to properly join the cluster.
#### Spot Instances
When using self-managed nodes, you can configure the capacity to use spot instances, greatly reducing capacity cost.
To enable spot capacity, use the `spotPrice` property:
```ts
declare const cluster: eks.Cluster;
cluster.addAutoScalingGroupCapacity('spot', {
spotPrice: '0.1094',
instanceType: new ec2.InstanceType('t3.large'),
maxCapacity: 10,
});
```
> Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
The [AWS Node Termination Handler](https://github.com/aws/aws-node-termination-handler) `DaemonSet` will be
installed from [Amazon EKS Helm chart repository](https://github.com/aws/eks-charts/tree/master/stable/aws-node-termination-handler) on these nodes.
The termination handler ensures that the Kubernetes control plane responds appropriately to events that
can cause your EC2 instance to become unavailable, such as [EC2 maintenance events](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html)
and [EC2 Spot interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html) and helps gracefully stop all pods running on spot nodes that are about to be
terminated.
> Handler Version: [1.7.0](https://github.com/aws/aws-node-termination-handler/releases/tag/v1.7.0)
>
> Chart Version: [0.9.5](https://github.com/aws/eks-charts/blob/v0.0.28/stable/aws-node-termination-handler/Chart.yaml)
To disable the installation of the termination handler, set the `spotInterruptHandler` property to `false`. This applies both to `addAutoScalingGroupCapacity` and `connectAutoScalingGroupCapacity`.
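For example, a minimal sketch of spot capacity without the termination handler:
```ts
declare const cluster: eks.Cluster;
cluster.addAutoScalingGroupCapacity('spot-without-handler', {
  spotPrice: '0.1094',
  instanceType: new ec2.InstanceType('t3.large'),
  maxCapacity: 10,
  // skip installing the AWS Node Termination Handler DaemonSet
  spotInterruptHandler: false,
});
```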
#### Bottlerocket
[Bottlerocket](https://aws.amazon.com/bottlerocket/) is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts.
`Bottlerocket` is supported when using managed nodegroups or self-managed auto-scaling groups.
To create a Bottlerocket managed nodegroup:
```ts
declare const cluster: eks.Cluster;
cluster.addNodegroupCapacity('BottlerocketNG', {
amiType: eks.NodegroupAmiType.BOTTLEROCKET_X86_64,
});
```
The following example will create an auto-scaling group of 2 `t3.small` Linux instances running with the `Bottlerocket` AMI.
```ts
declare const cluster: eks.Cluster;
cluster.addAutoScalingGroupCapacity('BottlerocketNodes', {
instanceType: new ec2.InstanceType('t3.small'),
minCapacity: 2,
machineImageType: eks.MachineImageType.BOTTLEROCKET,
});
```
The specific Bottlerocket AMI variant will be auto-selected according to the k8s version for the `x86_64` architecture.
For example, if the Amazon EKS cluster version is `1.17`, the Bottlerocket AMI variant will be auto-selected as
`aws-k8s-1.17` behind the scenes.
> See [Variants](https://github.com/bottlerocket-os/bottlerocket/blob/develop/README.md#variants) for more details.
Please note that Bottlerocket does not allow customizing bootstrap options, and the `bootstrapOptions` property is not supported when you create `Bottlerocket` capacity.
To create a Bottlerocket managed nodegroup with Nvidia-based EC2 instance types use the `BOTTLEROCKET_X86_64_NVIDIA` or
`BOTTLEROCKET_ARM_64_NVIDIA` AMIs:
```ts
declare const cluster: eks.Cluster;
cluster.addNodegroupCapacity('BottlerocketNvidiaNG', {
amiType: eks.NodegroupAmiType.BOTTLEROCKET_X86_64_NVIDIA,
instanceTypes: [new ec2.InstanceType('g4dn.xlarge')],
});
```
For more details about Bottlerocket, see [Bottlerocket FAQs](https://aws.amazon.com/bottlerocket/faqs/) and [Bottlerocket Open Source Blog](https://aws.amazon.com/blogs/opensource/announcing-the-general-availability-of-bottlerocket-an-open-source-linux-distribution-purpose-built-to-run-containers/).
### Endpoint Access
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as `kubectl`).
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of
AWS Identity and Access Management (IAM) and native Kubernetes [Role Based Access Control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (RBAC).
You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) by using the `endpointAccess` property:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_33,
endpointAccess: eks.EndpointAccess.PRIVATE, // No access outside of your VPC.
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
The default value is `eks.EndpointAccess.PUBLIC_AND_PRIVATE`, which means the cluster endpoint is accessible from outside of your VPC, but worker node traffic and `kubectl` commands issued by this library stay within your VPC.
### Alb Controller
Some Kubernetes resources are commonly implemented on AWS with the help of the [ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/).
From the docs:
> AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
>
> * It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
> * It satisfies Kubernetes Service resources by provisioning Network Load Balancers.
To deploy the controller on your EKS cluster, configure the `albController` property:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
albController: {
version: eks.AlbControllerVersion.V2_8_2,
},
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
To provide additional Helm chart values supported by `albController` in CDK, use the `additionalHelmChartValues` property. For example, the following code snippet shows how to set the `enableWafV2` flag:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
albController: {
version: eks.AlbControllerVersion.V2_8_2,
additionalHelmChartValues: {
enableWafv2: false
}
},
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
The `albController` requires `defaultCapacity` or at least one nodegroup. If there is no `defaultCapacity` or available
nodegroup for the cluster, the `albController` deployment will fail.
Querying the controller pods should look something like this:
```console
❯ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-76bd6c7586-d929p 1/1 Running 0 109m
aws-load-balancer-controller-76bd6c7586-fqxph 1/1 Running 0 109m
...
...
```
Every Kubernetes manifest that utilizes the ALB Controller is effectively dependent on the controller.
If the controller is deleted before the manifest, it might result in dangling ELB/ALB resources.
Currently, the EKS construct library does not detect such dependencies, so they should be declared explicitly.
For example:
```ts
declare const cluster: eks.Cluster;
const manifest = cluster.addManifest('manifest', {/* ... */});
if (cluster.albController) {
manifest.node.addDependency(cluster.albController);
}
```
### VPC Support
You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properties:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
declare const vpc: ec2.Vpc;
new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
vpc,
vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }],
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
> Note: Isolated VPCs (i.e. with no internet access) are not fully supported. See https://github.com/aws/aws-cdk/issues/12171. Check out [this aws-cdk-example](https://github.com/aws-samples/aws-cdk-examples/tree/master/java/eks/private-cluster) for reference.
If you do not specify a VPC, one will be created on your behalf, which you can then access via `cluster.vpc`. The cluster VPC will be associated with any EKS managed capacity (i.e. Managed Node Groups and Fargate Profiles).
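For example, a sketch that reuses the implicitly created VPC for another resource (the security group here is purely illustrative):
```ts
declare const cluster: eks.Cluster;
// place additional resources in the same VPC as the cluster
new ec2.SecurityGroup(this, 'ToolingSecurityGroup', {
  vpc: cluster.vpc,
});
```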
Please note that the `vpcSubnets` property defines the subnets where EKS will place the _control plane_ ENIs. To choose
the subnets where EKS will place the worker nodes, please refer to the **Provisioning clusters** section above.
If you allocate self-managed capacity, you can specify which subnets the auto-scaling group should use:
```ts
declare const vpc: ec2.Vpc;
declare const cluster: eks.Cluster;
cluster.addAutoScalingGroupCapacity('nodes', {
vpcSubnets: { subnets: vpc.privateSubnets },
instanceType: new ec2.InstanceType('t2.medium'),
});
```
There are two additional components you might want to provision within the VPC.
#### Kubectl Handler
The `KubectlHandler` is a Lambda function responsible for issuing `kubectl` and `helm` commands against the cluster when you add resource manifests to the cluster.
The handler association to the VPC is derived from the `endpointAccess` configuration. The rule of thumb is: *If the cluster VPC can be associated, it will be*.
Breaking this down, it means that if the endpoint exposes private access (via `EndpointAccess.PRIVATE` or `EndpointAccess.PUBLIC_AND_PRIVATE`), and the VPC contains **private** subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.
If the endpoint does not expose private access (via `EndpointAccess.PUBLIC`) **or** the VPC does not contain private subnets, the function will not be provisioned within the VPC.
If your use-case requires control over the IAM role that the Kubectl Handler assumes, a custom role can be passed through the `ClusterProps` (as `kubectlLambdaRole`) of the EKS Cluster construct.
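For example, a minimal sketch that passes a pre-existing role (the role ARN is a placeholder):
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const kubectlHandlerRole = iam.Role.fromRoleArn(this, 'KubectlHandlerRole', 'arn:aws:iam::123456789012:role/kubectl-handler-role');
new eks.Cluster(this, 'HelloEKS', {
  version: eks.KubernetesVersion.V1_33,
  kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
  // the Kubectl Handler Lambda will assume this role
  kubectlLambdaRole: kubectlHandlerRole,
});
```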
#### Cluster Handler
The `ClusterHandler` is a set of Lambda functions (`onEventHandler`, `isCompleteHandler`) responsible for interacting with the EKS API in order to control the cluster lifecycle. To provision these functions inside the VPC, set the `placeClusterHandlerInVpc` property to `true`. This will place the functions inside the private subnets of the VPC based on the selection strategy specified in the [`vpcSubnets`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-eks.Cluster.html#vpcsubnetsspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan) property.
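For example, a minimal sketch of placing the Cluster Handler functions inside the VPC:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
declare const vpc: ec2.Vpc;
new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_33,
  kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
  vpc,
  // provision the onEvent/isComplete handlers inside the VPC's private subnets
  placeClusterHandlerInVpc: true,
});
```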
You can configure the environment of the Cluster Handler functions by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
declare const proxyInstanceSecurityGroup: ec2.SecurityGroup;
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_33,
clusterHandlerEnvironment: {
https_proxy: 'http://proxy.myproxy.com',
},
/**
* If the proxy is not open publicly, you can pass a security group to the
* Cluster Handler Lambdas so that it can reach the proxy.
*/
clusterHandlerSecurityGroup: proxyInstanceSecurityGroup,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
### IPv6 Support
You can optionally choose to configure your cluster to use IPv6 using the [`ipFamily`](https://docs.aws.amazon.com/eks/latest/APIReference/API_KubernetesNetworkConfigRequest.html#AmazonEKS-Type-KubernetesNetworkConfigRequest-ipFamily) definition for your cluster. Note that this will require the underlying subnets to have an associated IPv6 CIDR.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
declare const vpc: ec2.Vpc;
function associateSubnetWithV6Cidr(vpc: ec2.Vpc, count: number, subnet: ec2.ISubnet) {
const cfnSubnet = subnet.node.defaultChild as ec2.CfnSubnet;
cfnSubnet.ipv6CidrBlock = Fn.select(count, Fn.cidr(Fn.select(0, vpc.vpcIpv6CidrBlocks), 256, (128 - 64).toString()));
cfnSubnet.assignIpv6AddressOnCreation = true;
}
// make an ipv6 cidr
const ipv6cidr = new ec2.CfnVPCCidrBlock(this, 'CIDR6', {
vpcId: vpc.vpcId,
amazonProvidedIpv6CidrBlock: true,
});
// connect the ipv6 cidr to all vpc subnets
let subnetcount = 0;
const subnets = vpc.publicSubnets.concat(vpc.privateSubnets);
for (let subnet of subnets) {
// Wait for the ipv6 cidr to complete
subnet.node.addDependency(ipv6cidr);
associateSubnetWithV6Cidr(vpc, subnetcount, subnet);
subnetcount = subnetcount + 1;
}
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_33,
vpc: vpc,
ipFamily: eks.IpFamily.IP_V6,
vpcSubnets: [{ subnets: vpc.publicSubnets }],
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
### Kubectl Support
The resources are created in the cluster by running `kubectl apply` from a Python Lambda function.
By default, the CDK will create a new Python Lambda function to apply your k8s manifests. If you want to use an existing kubectl provider function, for example one with tightly scoped trusted entities on its IAM roles, you can import the existing provider and then use it when importing the cluster:
```ts
const handlerRole = iam.Role.fromRoleArn(this, 'HandlerRole', 'arn:aws:iam::123456789012:role/lambda-role');
// get the serviceToken from the custom resource provider
const functionArn = lambda.Function.fromFunctionName(this, 'ProviderOnEventFunc', 'ProviderframeworkonEvent-XXX').functionArn;
const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', {
functionArn,
kubectlRoleArn: 'arn:aws:iam::123456789012:role/kubectl-role',
handlerRole,
});
const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
clusterName: 'cluster',
kubectlProvider,
});
```
#### Environment
You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_33,
kubectlEnvironment: {
'http_proxy': 'http://proxy.myproxy.com',
},
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
#### Runtime
The kubectl handler uses `kubectl`, `helm` and the `aws` CLI in order to
interact with the cluster. These are bundled into AWS Lambda layers included in
the `@aws-cdk/lambda-layer-awscli` and `@aws-cdk/lambda-layer-kubectl` modules.
The version of kubectl used must be compatible with the Kubernetes version of the
cluster. kubectl is supported within one minor version (older or newer) of Kubernetes
(see [Kubernetes version skew policy](https://kubernetes.io/releases/version-skew-policy/#kubectl)).
Depending on which version of kubernetes you're targeting, you will need to use one of
the `@aws-cdk/lambda-layer-kubectl-vXY` packages.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
You can also specify a custom `lambda.LayerVersion` if you wish to use a
different version of these tools, or a version not available in any of the
`@aws-cdk/lambda-layer-kubectl-vXY` packages. The handler expects the layer to
include the following two executables:
```text
helm/helm
kubectl/kubectl
```
See more information in the
[Dockerfile](https://github.com/aws/aws-cdk/tree/main/packages/%40aws-cdk/lambda-layer-awscli/layer) for @aws-cdk/lambda-layer-awscli
and the
[Dockerfile](https://github.com/aws/aws-cdk/tree/main/packages/%40aws-cdk/lambda-layer-kubectl/layer) for @aws-cdk/lambda-layer-kubectl.
```ts
const layer = new lambda.LayerVersion(this, 'KubectlLayer', {
code: lambda.Code.fromAsset('layer.zip'),
});
```
Now specify when the cluster is defined:
```ts
declare const layer: lambda.LayerVersion;
declare const vpc: ec2.Vpc;
const cluster1 = new eks.Cluster(this, 'MyCluster', {
kubectlLayer: layer,
vpc,
clusterName: 'cluster-name',
version: eks.KubernetesVersion.V1_33,
});
// or
const cluster2 = eks.Cluster.fromClusterAttributes(this, 'MyCluster', {
kubectlLayer: layer,
vpc,
clusterName: 'cluster-name',
});
```
#### Memory
By default, the kubectl provider is configured with 1024MiB of memory. You can use the `kubectlMemory` option to specify the memory size for the AWS Lambda function:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.Cluster(this, 'MyCluster', {
kubectlMemory: Size.gibibytes(4),
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
// or
declare const vpc: ec2.Vpc;
eks.Cluster.fromClusterAttributes(this, 'MyCluster', {
kubectlMemory: Size.gibibytes(4),
vpc,
clusterName: 'cluster-name',
});
```
### ARM64 Support
Instance types with `ARM64` architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 `instanceType` (such as `m6g.medium`), and the latest
Amazon Linux 2 AMI for ARM64 will be automatically selected.
```ts
declare const cluster: eks.Cluster;
// add a managed ARM64 nodegroup
cluster.addNodegroupCapacity('extra-ng-arm', {
instanceTypes: [new ec2.InstanceType('m6g.medium')],
minSize: 2,
});
// add a self-managed ARM64 nodegroup
cluster.addAutoScalingGroupCapacity('self-ng-arm', {
instanceType: new ec2.InstanceType('m6g.medium'),
minCapacity: 2,
})
```
### Masters Role
When you create a cluster, you can specify a `mastersRole`. The `Cluster` construct will associate this role with the `system:masters` [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) group, giving it super-user access to the cluster.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
declare const role: iam.Role;
new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_33,
mastersRole: role,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
In order to interact with your cluster through `kubectl`, you can use the `aws eks update-kubeconfig` [AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html)
to configure your local kubeconfig. The EKS module will define a CloudFormation output in your stack which contains the command to run. For example:
```plaintext
Outputs:
ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
```
Execute the `aws eks update-kubeconfig ...` command in your terminal to create or update a local kubeconfig context:
```console
$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config
```
And now you can simply use `kubectl`:
```console
$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/aws-node-fpmwv 1/1 Running 0 21m
pod/aws-node-m9htf 1/1 Running 0 21m
pod/coredns-5cb4fb54c7-q222j 1/1 Running 0 23m
pod/coredns-5cb4fb54c7-v9nxx 1/1 Running 0 23m
...
```
If you do not specify a `mastersRole`, you won't have access to the cluster from outside of the CDK application.
> Note that `cluster.addManifest` and `new KubernetesManifest` will still work.
### Encryption
When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled.
The documentation on [creating a cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
can provide more details about the customer master key (CMK) that can be used for the encryption.
You can use the `secretsEncryptionKey` to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.
> This setting can only be specified when the cluster is created and cannot be updated.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const secretsKey = new kms.Key(this, 'SecretsKey');
const cluster = new eks.Cluster(this, 'MyCluster', {
secretsEncryptionKey: secretsKey,
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
You can also use a similar configuration for running a cluster built using the FargateCluster construct.
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
const secretsKey = new kms.Key(this, 'SecretsKey');
const cluster = new eks.FargateCluster(this, 'MyFargateCluster', {
secretsEncryptionKey: secretsKey,
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
});
```
The Amazon Resource Name (ARN) for that CMK can be retrieved.
```ts
declare const cluster: eks.Cluster;
const clusterEncryptionConfigKeyArn = cluster.clusterEncryptionConfigKeyArn;
```
### Hybrid Nodes
When you create an Amazon EKS cluster, you can configure it to leverage the [EKS Hybrid Nodes](https://aws.amazon.com/eks/hybrid-nodes/) feature, allowing you to use your on-premises and edge infrastructure as nodes in your EKS cluster. Refer to the Hybrid Nodes [networking documentation](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-networking.html) to configure your on-premises network, node and pod CIDRs, access control, etc., before creating your EKS cluster.
Once you have identified the on-premises node and pod (optional) CIDRs you will use for your hybrid nodes and the workloads running on them, you can specify them during cluster creation using the `remoteNodeNetworks` and `remotePodNetworks` (optional) properties:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.Cluster(this, 'Cluster', {
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'KubectlLayer'),
remoteNodeNetworks: [
{
cidrs: ['10.0.0.0/16'],
},
],
remotePodNetworks: [
{
cidrs: ['192.168.0.0/16'],
},
],
});
```
### Self-Managed Add-ons
Amazon EKS automatically installs self-managed add-ons such as the Amazon VPC CNI plugin for Kubernetes, kube-proxy, and CoreDNS for every cluster. You can change the default configuration of these add-ons and update them when desired. If you wish to create a cluster without the default add-ons, set `bootstrapSelfManagedAddons` to `false`. When this is set to `false`, make sure to install the necessary alternatives that provide the pod and service functionality your EKS cluster needs.
> Changing the value of `bootstrapSelfManagedAddons` after the EKS cluster creation will result in a replacement of the cluster.
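For example, a minimal sketch of creating a cluster without the default add-ons:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
new eks.Cluster(this, 'NoDefaultAddons', {
  version: eks.KubernetesVersion.V1_33,
  kubectlLayer: new KubectlV33Layer(this, 'kubectl'),
  // skip installing VPC CNI, kube-proxy and CoreDNS; you must provide alternatives
  bootstrapSelfManagedAddons: false,
});
```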
## Permissions and Security
Amazon EKS provides several mechanisms for securing the cluster and granting permissions to specific IAM users and roles.
### AWS IAM Mapping
As described in the [Amazon EKS User Guide](https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html), you can map AWS IAM users and roles to [Kubernetes Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac).
The Amazon EKS construct manages the *aws-auth* `ConfigMap` Kubernetes resource on your behalf and exposes an API through `cluster.awsAuth` for mapping
users, roles, and accounts.
Furthermore, when auto-scaling group capacity is added to the cluster, the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required.
For example, let's say you want to grant an IAM user administrative privileges on your cluster:
```ts
declare const cluster: eks.Cluster;
const adminUser = new iam.User(this, 'Admin');
cluster.awsAuth.addUserMapping(adminUser, { groups: [ 'system:masters' ]});
```
A convenience method for mapping a role to the `system:masters` group is also available:
```ts
declare const cluster: eks.Cluster;
declare const role: iam.Role;
cluster.awsAuth.addMastersRole(role);
```
To access the Kubernetes resources from the console, make sure your viewing principal is defined
in the `aws-auth` ConfigMap. Some options to consider:
```ts
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
declare const cluster: eks.Cluster;
declare const your_current_role: iam.Role;
declare const vpc: ec2.Vpc;
// Option 1: Add your current assumed IAM role to system:masters. Make sure to add relevant policies.
cluster.awsAuth.addMastersRole(your_current_role);
your_current_role.addToPolicy(new iam.PolicyStatement({
actions: [
'eks:AccessKubernetesApi',
'eks:Describe*',
'eks:List*',
],
resources: [ cluster.clusterArn ],
}));
```
```ts
// Option 2: create a custom mastersRole with a scoped assumedBy ARN and pass it as the Cluster's mastersRole prop. Switch to this role from the AWS console.
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
declare const vpc: ec2.Vpc;
const mastersRole = new iam.Role(this, 'MastersRole', {
assumedBy: new iam.ArnPrincipal('arn_for_trusted_principal'),
});
const cluster = new eks.Cluster(this, 'EksCluster', {
vpc,
version: eks.KubernetesVersion.V1_33,
kubectlLayer: new KubectlV33Layer(this, 'KubectlLayer'),
mastersRole,
});
mastersRole.addToPolicy(new iam.PolicyStatement({
actions: [
'eks:AccessKubernetesApi',
'eks:Describe*',
'eks:List*',
],
resources: [ cluster.clusterArn ],
}));
```
```ts
// Option 3: Create a new role that the account root principal can assume. Add this role to the `system:masters` group and switch to this role from the AWS console.
declare const cluster: eks.Cluster;
const consoleReadOnlyRole = new iam.Role(this, 'ConsoleReadOnlyRole', {
assumedBy