
Attaching to the Kubernetes Provider Using the Infra Provider's Specific Auth Plugin

This is a quick verification of the cloud-infra-provider-specific K8s support, using DigitalOcean as the example. It eradicates the need to even have a local kubectl config; you just need the cloud provider token.

That is, it illustrates the implementation of this statement in the Terraform docs about configuring the kubernetes provider:

Quote

There are many ways to configure the Kubernetes provider. We recommend them in the following order (most recommended first, least recommended last):

  1. Use cloud-specific auth plugins (for example, eks get-token, az get-token, gcloud config)
  2. Use oauth2 token
  3. Use TLS certificate credentials
  4. Use kubeconfig file by setting both config_path and config_context
  5. Use username and password (HTTP Basic Authorization)
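
For contrast with option 4 in that list, the everyday kubeconfig-based configuration is a sketch like this (both values are illustrative, not taken from this setup):

```terraform
provider "kubernetes" {
  # Option 4: point the provider at an existing kubeconfig file.
  # Path and context name are illustrative; use your own.
  config_path    = "~/.kube/config"
  config_context = "my-context"
}
```

The rest of this page shows how option 1 removes even that file dependency.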

In general: when there is an externally managed K8s cluster that you did not set up, for which you have no local config, and about which you want to gather information, you can use the specific cloud infra Terraform provider's Kubernetes support (here: the digitalocean_kubernetes_cluster data source):

$ mkdir -p tmp/datasource_test && cd $_ 
$ cat << EOF > query.tf
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}

# Configure the DigitalOcean Provider
provider "digitalocean" {
  token = var.do_token
}


data "digitalocean_kubernetes_cluster" "example" {
    name = "tf-do-cluster-d1"
}
// this way we can manage it, using "only" the token - no kubectl config required: 
provider "kubernetes" {
  host             = data.digitalocean_kubernetes_cluster.example.endpoint
  token            = data.digitalocean_kubernetes_cluster.example.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.example.kube_config[0].cluster_ca_certificate
  )
}
// output some values: 
output "endpoint" {
     value = data.digitalocean_kubernetes_cluster.example.endpoint
}
output "token" {
     value =  data.digitalocean_kubernetes_cluster.example.kube_config[0].token
     sensitive = true
}
EOF
$ export TF_VAR_do_token=$(pass show DO/pat)
$ terraform init

Initializing the backend...

Initializing provider plugins...        
- Finding latest version of hashicorp/kubernetes...                             
- Finding digitalocean/digitalocean versions matching "~> 2.0"...               
- Installing hashicorp/kubernetes v2.4.1...                                     
- Installed hashicorp/kubernetes v2.4.1 (signed by HashiCorp)                   
- Installing digitalocean/digitalocean v2.11.0...                               
- Installed digitalocean/digitalocean v2.11.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

Partner and community providers are signed by their developers.                 
If you'd like to know more about provider signing, you can read about it here:  
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider    
selections it made above. Include this file in your version control repository  
so that Terraform can guarantee to make the same selections by default when     
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see   
any changes that are required for your infrastructure. All Terraform commands   
should now work.

If you ever set or change modules or backend configuration for Terraform,       
rerun this command to reinitialize your working directory. If you forget, other 
commands will detect it and remind you to do so if necessary.
$ terraform apply -auto-approve

Changes to Outputs: 
  + endpoint = "https://f87c35a9-2545-4d28-a982-c42a70deb997.k8s.ondigitalocean.com"                
  + token    = (sensitive value)

You can apply this plan to save these new output values to the Terraform state, 
without changing any real infrastructure.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

endpoint = "https://f87c35a9-2545-4d28-a982-c42a70deb997.k8s.ondigitalocean.com"
token = <sensitive>
$ rm query.tf
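
Since the provider is now authenticated purely via the token pulled from the data source, the same configuration could also manage objects inside the cluster; a minimal sketch (the namespace name is hypothetical):

```terraform
# Creates a namespace through the token-authenticated kubernetes provider -
# still no local kubectl config involved.
resource "kubernetes_namespace" "demo" {
  metadata {
    name = "tf-demo" # hypothetical name
  }
}
```
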

AWS

Here is a comparable way to do the same on AWS:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  exec {
    # Newer AWS CLI versions emit client.authentication.k8s.io/v1beta1 instead.
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks",
      "get-token",
      "--cluster-name",
      data.aws_eks_cluster.cluster.name
    ]
  }
}
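
The block above references a data source that is not shown; a minimal sketch of the assumed definition (the cluster name is hypothetical):

```terraform
# Assumed lookup of the existing EKS cluster by name.
data "aws_eks_cluster" "cluster" {
  name = "my-eks-cluster" # hypothetical cluster name
}
```
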