
Deploying Kubernetes on EKS with Terraform

  •  asked by alexis · 3 years ago

    I am trying to deploy the aws-load-balancer-controller on Kubernetes.

    I have the following Terraform code:

    resource "kubernetes_deployment" "ingress" {
      metadata {
        name      = "alb-ingress-controller"
        namespace = "kube-system"
        labels = {
          "app.kubernetes.io/name"       = "alb-ingress-controller"
          "app.kubernetes.io/version"    = "v2.2.3"
          "app.kubernetes.io/managed-by" = "terraform"
        }
      }
    
      spec {
        replicas = 1
    
        selector {
          match_labels = {
            "app.kubernetes.io/name" = "alb-ingress-controller"
          }
        }
    
        strategy {
          type = "Recreate"
        }
    
        template {
          metadata {
            labels = {
              "app.kubernetes.io/name"    = "alb-ingress-controller"
              "app.kubernetes.io/version" = "v2.2.3"
            }
          }
    
          spec {
            dns_policy                       = "ClusterFirst"
            restart_policy                   = "Always"
            service_account_name             = kubernetes_service_account.ingress.metadata[0].name
            termination_grace_period_seconds = 60
    
            container {
              name              = "alb-ingress-controller"
              image             = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
              image_pull_policy = "Always"
    
              args = [
                "--ingress-class=alb",
                "--cluster-name=${local.k8s[var.env].esk_cluster_name}",
                "--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
                "--aws-region=${local.k8s[var.env].region}"
              ]
              volume_mount {
                mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
                name       = kubernetes_service_account.ingress.default_secret_name
                read_only  = true
              }
            }
            volume {
              name = kubernetes_service_account.ingress.default_secret_name
    
              secret {
                secret_name = kubernetes_service_account.ingress.default_secret_name
              }
            }
          }
        }
      }
    
      depends_on = [kubernetes_cluster_role_binding.ingress]
    }
    
    resource "kubernetes_ingress" "app" {
      metadata {
        name      = "owncloud-lb"
        namespace = "fargate-node"
        annotations = {
          "kubernetes.io/ingress.class"           = "alb"
          "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
          "alb.ingress.kubernetes.io/target-type" = "ip"
        }
        labels = {
          "app" = "owncloud"
        }
      }
    
      spec {
        backend {
          service_name = "owncloud-service"
          service_port = 80
        }
        rule {
          http {
            path {
              path = "/"
              backend {
                service_name = "owncloud-service"
                service_port = 80
              }
            }
          }
        }
      }
      depends_on = [kubernetes_service.app]
    }
    

    This works as expected on version 1.9. As soon as I upgrade to 2.2.3, the pod fails to update and logs the following error:

    {"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}

    I have read the upgrade documentation and amended the IAM policy, but it also mentions:

    Update the TargetGroupBinding CRDs

    I don't know how to do this with Terraform.

    If I try to deploy on a fresh cluster (i.e. not an upgrade from 1.9), I get the same error.

  •  answered by Jonas · 3 years ago

    With your Terraform code you apply the Deployment and Ingress resources, but you also have to add the CustomResourceDefinitions for the TargetGroupBinding custom resource.

    This is described under "Add Controller to Cluster" in the Load Balancer Controller installation documentation - examples are provided for both Helm and Kubernetes YAML.
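
    For example, if you go the Helm route from Terraform, a minimal sketch using the helm_release resource and the eks-charts repository could look like this (the chart values shown and the reuse of your existing service account are assumptions - adjust them to your setup, and check whether the chart version you pin bundles the TargetGroupBinding CRDs or expects them to be applied separately):

    resource "helm_release" "aws_load_balancer_controller" {
      name       = "aws-load-balancer-controller"
      repository = "https://aws.github.io/eks-charts"
      chart      = "aws-load-balancer-controller"
      namespace  = "kube-system"

      # Cluster name, as referenced elsewhere in your configuration
      set {
        name  = "clusterName"
        value = local.k8s[var.env].esk_cluster_name
      }

      # Reuse the service account you already create in Terraform
      set {
        name  = "serviceAccount.create"
        value = "false"
      }

      set {
        name  = "serviceAccount.name"
        value = kubernetes_service_account.ingress.metadata[0].name
      }
    }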

    Terraform's beta support for applying CRDs includes an example of deploying a CustomResourceDefinition.
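
    As a sketch of that approach (an assumption, not part of the original answer): with the kubernetes_manifest resource available in newer versions of the kubernetes provider, you could apply the CRD from a manifest file downloaded from the controller's v2.2.3 release and saved as a single YAML document:

    # The file path crds/targetgroupbinding.yaml is hypothetical - point it at the
    # TargetGroupBinding CRD manifest that matches controller v2.2.3.
    resource "kubernetes_manifest" "targetgroupbinding_crd" {
      manifest = yamldecode(file("${path.module}/crds/targetgroupbinding.yaml"))
    }

    You can then add this resource to the depends_on list of your kubernetes_deployment so the controller only starts once the CRD exists.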