Adding an NKE node pool corrupts the state #672

Open
olivierboudet opened this issue Feb 7, 2024 · 0 comments
Nutanix Cluster Information

  • Nutanix Cluster 6.5.2
  • Nutanix Prism Central 2022.6.0.2
  • NKE 2.8

Terraform Version

Terraform v1.5.0
on linux_amd64
+ provider registry.terraform.io/nutanix/nutanix v1.9.1

Affected Resource(s)

  • nutanix_karbon_cluster
  • nutanix_karbon_worker_nodepool

Terraform Configuration Files

resource "nutanix_karbon_cluster" "mycluster" {
  name       = "mycluster"
  version    = "1.25.6-0"
  storage_class_config {
    reclaim_policy = "Retain"
    volumes_config {
      file_system                = "ext4"
      flash_mode                 = true
      password                   = var.nutanix_password
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
      storage_container          = "NutanixKubernetesEngine"
      username                   = var.nutanix_user
    }
  }
  cni_config {
    node_cidr_mask_size = 24
    pod_ipv4_cidr       = "10.98.0.0/16"
    service_ipv4_cidr   = "10.99.0.0/16"
  }
  worker_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 10
      memory_mib                 = 16384
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
    }
  }

  etcd_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 4
      memory_mib                 = 8192
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
    }
  }
  master_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 4
      memory_mib                 = 4096
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
    }
  }
  private_registry {
    registry_name = nutanix_karbon_private_registry.registry.name
  }

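  # Node pools are managed by separate nutanix_karbon_worker_nodepool
  # resources below, so drift in worker_node_pool is ignored on this resource.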
  lifecycle {
    ignore_changes = [
      worker_node_pool,
      storage_class_config,
    ]
  }
}

resource "nutanix_karbon_worker_nodepool" "mynodepool" {
  cluster_name = nutanix_karbon_cluster.mycluster.name
  name = "mynodepool"
  num_instances = 1
  node_os_version = "ntnx-1.5"

  ahv_config {
    cpu = 2
    memory_mib = 8192
    network_uuid               = nutanix_subnet.kubernetes.id
    prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
  }

  labels={
    partenaire="mypartenaire"
  }

}

Debug Output

Expected Behavior

After adding a nutanix_karbon_worker_nodepool, it should be possible to add a second one, i.e. this sequence should work (see the sketch after the list):

  • add a nutanix_karbon_worker_nodepool resource
  • terraform apply
  • add another nutanix_karbon_worker_nodepool resource
  • terraform apply
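
For concreteness, a minimal sketch of what step 3 looks like, assuming the second pool reuses the same subnet and Prism Element as the first; the name mynodepool2 and its sizing are placeholders:

# Hypothetical second pool; mirrors "mynodepool" above.
resource "nutanix_karbon_worker_nodepool" "mynodepool2" {
  cluster_name    = nutanix_karbon_cluster.mycluster.name
  name            = "mynodepool2"
  num_instances   = 1
  node_os_version = "ntnx-1.5"

  ahv_config {
    cpu                        = 2
    memory_mib                 = 8192
    network_uuid               = nutanix_subnet.kubernetes.id
    prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
  }
}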

Actual Behavior

The last terraform apply instead fails with this output:

$ terraform apply
nutanix_karbon_cluster.mycluster: Refreshing state... [id=14e1857b-46b2-4f49-410a-2e8ed0ce22e9]

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Warning: Disabled Providers: foundation, ndb. Please provide required fields in provider configuration to enable them. Refer docs.
│
│   with provider["registry.terraform.io/nutanix/nutanix"],
│   on main.tf line 19, in provider "nutanix":
│   19: provider "nutanix" {
│
╵
╷
│ Error: unable to expand node pool during flattening: nodepool name must be passed
│
│   with nutanix_karbon_cluster.mycluster,
│   on nke.tf line 1, in resource "nutanix_karbon_cluster" "mycluster":
│    1: resource "nutanix_karbon_cluster" "mycluster" {
│
╵

Steps to Reproduce

  • add a nutanix_karbon_worker_nodepool resource
  • terraform apply
  • add another nutanix_karbon_worker_nodepool resource
  • terraform apply

    In the state, you should find a corrupted block in the worker_node_pool section of the nutanix_karbon_cluster:
{
  "ahv_config": [],
  "name": null,
  "node_os_version": null,
  "nodes": null,
  "num_instances": null
}
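
One way to confirm this is to pull the raw state and look at the worker_node_pool blocks recorded for the cluster (a sketch, assuming jq is installed; the corrupted entry shows up with "name": null next to the real pool):

$ terraform state pull \
    | jq '.resources[]
          | select(.type == "nutanix_karbon_cluster")
          | .instances[].attributes.worker_node_pool'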