I've been trying to move my VPC to IPv6 to cut the costs associated with public IPv4 usage. My setup includes EKS and RDS Aurora, and everything is provisioned with Terraform.
However, when I try to create an IPv6-only VPC with public and private subnets for EKS, I get the following error:
"At least one subnet in each AZ should have 2 free IPs. Invalid AZs: { [eu-central-1a, eu-central-1b] }, provided subnets: { subnet-06a43f*, subnet-05350*}"
On the other hand, if I set up dual-stack IPv6 subnets for EKS, the NAT gateway requires IPv4. And when I try to deploy EKS without an IPv4 NAT gateway, I get this error:
"Error: waiting for EKS Node Group (-eks-cluster:-eks-workers) to be created: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. Last error: i-0bb3*: NodeCreationFailure: Instances failed to join the Kubernetes cluster."
The only way I've found to make it work is to enable the NAT gateway, which uses IPv4, and that unfortunately defeats my goal of cutting costs by switching to IPv6.
Has anyone else run into this? Any suggestions on how to transition to IPv6 effectively without hitting these problems?
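If I'm stuck with the NAT gateway, the one cost lever I know of is sharing a single gateway across all AZs instead of one per AZ; the VPC module exposes this directly (sketch of the relevant inputs):

  # Inside the vpc module inputs: one shared NAT gateway for all AZs.
  # Cheaper than per-AZ gateways, but still paying for IPv4 NAT.
  enable_nat_gateway = true
  single_nat_gateway = true

Here is my current (dual-stack) Terraform configuration. The VPC and subnets first: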
module "vpc_and_subnets" {
source = "terraform-aws-modules/vpc/aws"
version = "5.13.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 3, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 3, k + length(local.azs))]
database_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 3, k + 2*length(local.azs))]
enable_ipv6 = true
create_egress_only_igw = true
public_subnet_ipv6_prefixes = [for k, v in local.azs : k]
private_subnet_ipv6_prefixes = [for k, v in local.azs : k + length(local.azs)]
database_subnet_ipv6_prefixes = [for k, v in local.azs : k + 2*length(local.azs)]
private_subnet_assign_ipv6_address_on_creation = true
public_subnet_assign_ipv6_address_on_creation = true
enable_nat_gateway = var.enable_nat_gateway
enable_dns_hostnames = var.enable_dns_hostnames
enable_dns_support = var.enable_dns_support
tags = var.tags
public_subnet_tags = var.additional_public_subnet_tags
private_subnet_tags = var.additional_private_subnet_tags
instance_tenancy = var.instance_tenancy
create_database_subnet_group = true
create_database_subnet_route_table = true
create_database_internet_gateway_route = true
database_subnet_group_name = "${var.name}-${var.database_subnet_group_name}"
}
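For reference, the RDS Aurora side follows the VPC: RDS supports dual-stack via network_type = "DUAL" on the cluster, so my understanding is the database can sit on the dual-stack database subnets above without issue. A trimmed sketch; the identifiers and the password variable are placeholders, not my real config:

resource "aws_rds_cluster" "this" {
  cluster_identifier   = "${local.name}-aurora" # placeholder name
  engine               = "aurora-postgresql"
  network_type         = "DUAL" # IPv4 + IPv6 endpoints
  db_subnet_group_name = module.vpc_and_subnets.database_subnet_group_name
  master_username      = "postgres"        # placeholder
  master_password      = var.db_password   # placeholder variable
  skip_final_snapshot  = true
}

Then the EKS cluster itself: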
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "20.8.3"
cluster_name = var.eks_cluster_name
cluster_version = var.k8s_version
vpc_id = var.vpc_id
cluster_ip_family = var.cluster_ip_family
create_cni_ipv6_iam_policy = true
control_plane_subnet_ids = var.control_plane_subnet_ids
enable_cluster_creator_admin_permissions = true
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
cluster_endpoint_public_access_cidrs = var.public_access_cidrs
enable_irsa = true
cluster_addons = {
coredns = {
preserve = true
most_recent = true
}
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
}
aws-ebs-csi-driver = {
most_recent = true
}
aws-efs-csi-driver = {
most_recent = true
}
}
cluster_security_group_additional_rules = {
egress_nodes_ephemeral_ports_tcp = {
description = "To node 1025-65535"
protocol = "tcp"
from_port = 1025
to_port = 65535
type = "egress"
source_node_security_group = true
}
}
node_security_group_additional_rules = {
ingress_self_all = {
description = "Node to node all ports/protocols"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
self = true
}
}
subnet_ids = var.eks_node_groups_subnet_ids
eks_managed_node_groups = var.eks_managed_node_groups
eks_managed_node_group_defaults = var.eks_managed_node_group_defaults
}
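And the IPv6-relevant values I feed into these variables (tfvars excerpt; the EKS module's cluster_ip_family accepts "ipv4" or "ipv6"):

# terraform.tfvars (excerpt)
cluster_ip_family  = "ipv6"
enable_nat_gateway = true # currently forced on, per the problem above

Finally, a rule so the worker nodes can reach the cluster API: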
resource "aws_security_group_rule" "allow_worker_nodes" {
security_group_id = module.eks.cluster_primary_security_group_id
type = "ingress"
from_port = 443
to_port = 443
protocol = "tcp"
source_security_group_id = module.eks.node_security_group_id
}