Here's a practical guide on how to manage Terraform provider configurations for different Yandex Cloud regions using Terragrunt.

What you'll need

  - Terraform >= 1.9.7
  - Terragrunt
  - Yandex Cloud provider >= 0.129.0 (yandex-cloud/yandex)
  - A service account key file (key.json) for authentication

Setup Steps

Let's look at how to use Terragrunt to dynamically create provider configs for Yandex Cloud. I'll break this down into digestible pieces:

  1. Basic provider setup

    First, we'll set up the base Yandex Cloud config in the root terragrunt.hcl. This will automatically generate versions.tf for each module:

    locals {
      tf_providers = {
        yandex = ">= 0.129.0"
      }
    }
    
    generate "providers_versions" {
      path      = "versions.tf"
      if_exists = "overwrite"
      contents  = <<EOF
    terraform {
      required_version = ">= 1.9.7"
    
      required_providers {
        yandex = {
          source  = "yandex-cloud/yandex"
          version = "${local.tf_providers.yandex}"
        }
      }
    }
    EOF
    }
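
    For the generated versions.tf to reach a module, that module's terragrunt.hcl must include the root config. A minimal sketch (the repo layout is an assumption; adjust to yours):

    include "root" {
      path = find_in_parent_folders()
    }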
    

  2. Region settings

    Regions such as the newer KZ region require additional endpoints, because the provider defaults to the RU region's API endpoints. We can set them at the project level, for example in env.hcl, and generate providers.tf dynamically for each module:

    locals {
      cloud_id         = "SOME_ID"
      folder_id        = "SOME_ID"
      sa_key_file      = "${get_repo_root()}/key.json"
      endpoint         = "api.yandexcloud.kz:443"       # Region-Specific
      storage_endpoint = "storage.yandexcloud.kz"       # Region-Specific
    }
    
    generate "providers_configs" {
      path      = "providers.tf"
      if_exists = "overwrite_terragrunt"
      contents  = <<EOF
    provider "yandex" {
      service_account_key_file = "${local.sa_key_file}"
      cloud_id                 = "${local.cloud_id}"
      folder_id                = "${local.folder_id}"
      endpoint                 = "${local.endpoint}"
      storage_endpoint         = "${local.storage_endpoint}"
    }
    EOF
    }
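
    If the generate block above lives in the root terragrunt.hcl rather than in env.hcl itself, the region-specific values can be pulled in with read_terragrunt_config. A sketch, assuming env.hcl sits in a parent folder:

    locals {
      env              = read_terragrunt_config(find_in_parent_folders("env.hcl"))
      endpoint         = local.env.locals.endpoint
      storage_endpoint = local.env.locals.storage_endpoint
    }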
    

  3. Additional providers

    If you're working with the Kubernetes, kubectl, or Helm providers in Terraform, you'll need additional provider configs to manage your cluster. The simplest approach is to pass cluster_id from a Terragrunt dependency into the called module:

    dependencies {
      paths = ["path/to/your/mks"]
    }
    
    dependency "mks" {
      config_path = "path/to/your/mks"
    
      mock_outputs_allowed_terraform_commands = ["init", "validate", "plan", "destroy"]
      mock_outputs_merge_strategy_with_state  = "shallow"
      mock_outputs = {
        cluster_id = "cluster_id"
      }
    }
    
    terraform {
      source = "path/to/your/module"
    }
    
    inputs = {
      cluster_id = dependency.mks.outputs.cluster_id
      . . .
      <OTHER_INPUTS>
      . . .
    }
    

Then use data resources in the module to configure providers:

variable "cluster_id" {
  type        = string
  default     = null
  description = "Managed Kubernetes Service cluster ID"
}

data "yandex_kubernetes_cluster" "this" {
  cluster_id = var.cluster_id
}

data "yandex_client_config" "this" {}

provider "kubernetes" {
  host                   = data.yandex_kubernetes_cluster.this.master[0].external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master[0].cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}

provider "helm" {
  kubernetes {
    host                   = data.yandex_kubernetes_cluster.this.master[0].external_v4_endpoint
    cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master[0].cluster_ca_certificate
    token                  = data.yandex_client_config.this.iam_token
  }
}

provider "kubectl" {
  host                   = data.yandex_kubernetes_cluster.this.master[0].external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master[0].cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}
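
Note that the generated versions.tf pins only the yandex provider, so a module that configures kubernetes, helm, and kubectl must declare those providers itself, in a file the generate block won't overwrite. A sketch with illustrative version constraints (Terraform merges required_providers entries across files, as long as no provider is declared twice):

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}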

Conclusion

This setup gives you a clean way to manage Terraform configs across different Yandex Cloud regions. It handles authentication properly and works well whether you're just using basic cloud resources or diving into Kubernetes and Helm deployments.