Nebuly Platform (Azure)

Terraform module for provisioning Nebuly Platform resources on Microsoft Azure.

Available on Terraform Registry.

Quickstart

⚠️ Prerequisite: before using this Terraform module, ensure that you have your Nebuly credentials ready. These credentials are necessary to activate your installation and must be provided via the nebuly_credentials input variable.

To get started with Nebuly installation on Microsoft Azure, you can follow the steps below.

These instructions will guide you through the installation using Nebuly's default standard configuration with the Nebuly Helm Chart.

For specific configurations or assistance, reach out to the Nebuly Slack channel or email [email protected].

1. Terraform setup

Add the Nebuly module to your Terraform root module, provide the required variables, and apply the changes.

For configuration examples, you can refer to the Examples.

Once the Terraform changes are applied, proceed with the next steps to deploy Nebuly on the provisioned Azure Kubernetes Service (AKS) cluster.
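As a starting point, a root module calling this module from the Terraform Registry might look like the sketch below. All values are placeholders and only the required inputs are shown; the registry source is assumed to follow the standard naming for this repository.

```hcl
module "nebuly_platform" {
  source = "nebuly-ai/nebuly-platform/azurerm"

  # Required inputs (placeholder values).
  location            = "EastUS"
  resource_group_name = "my-resource-group"
  resource_prefix     = "nebuly"
  platform_domain     = "nebuly.example.com"

  # Credentials provided by Nebuly.
  nebuly_credentials = {
    client_id     = "<your-client-id>"
    client_secret = "<your-client-secret>"
  }

  # Microsoft Entra identities granted the Cluster Admin role on the AKS cluster.
  aks_cluster_admin_group_object_ids = []
  aks_cluster_admin_users            = ["admin@example.com"]
}
```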

2. Connect to the Azure Kubernetes Service cluster

Prerequisites: install the Azure CLI.

  • Fetch the command for retrieving the credentials from the module outputs:
terraform output aks_get_credentials
  • Run the command returned by the previous step.

3. Create image pull secret

Create a Kubernetes Image Pull Secret for authenticating with your Docker registry and pulling the Nebuly Docker images. The auto-generated Helm values use the name defined in the k8s_image_pull_secret_name input variable for the Image Pull Secret; if you prefer a custom name, update either the Terraform variable or your Helm values accordingly.
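The secret can be created with kubectl create secret docker-registry, or managed in Terraform via the kubernetes provider. A sketch of the latter, assuming a hypothetical registry host (ghcr.io) and hypothetical credential variables; the target namespace must already exist:

```hcl
resource "kubernetes_secret" "nebuly_docker_pull" {
  metadata {
    name      = "nebuly-docker-pull" # must match k8s_image_pull_secret_name
    namespace = "nebuly"             # created later, in step 5
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "ghcr.io" = {
          username = var.registry_username
          password = var.registry_password
          auth     = base64encode("${var.registry_username}:${var.registry_password}")
        }
      }
    })
  }
}
```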

4. Bootstrap AKS cluster

Retrieve the auto-generated values from the Terraform outputs and save them to a file named values-bootstrap.yaml:

terraform output helm_values_bootstrap

Install the bootstrap Helm chart to set up all the dependencies required for installing the Nebuly Platform Helm chart on AKS.

Refer to the chart documentation for all the configuration details.

helm install oci://ghcr.io/nebuly-ai/helm-charts/bootstrap-azure \
  --namespace nebuly-bootstrap \
  --generate-name \
  --create-namespace \
  -f values-bootstrap.yaml
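Alternatively, the bootstrap chart can be managed with Terraform's helm provider. A sketch, assuming the module block is named nebuly_platform:

```hcl
resource "helm_release" "bootstrap" {
  name             = "nebuly-bootstrap"
  namespace        = "nebuly-bootstrap"
  create_namespace = true

  repository = "oci://ghcr.io/nebuly-ai/helm-charts"
  chart      = "bootstrap-azure"

  # Reuse the auto-generated values from the module output.
  values = [module.nebuly_platform.helm_values_bootstrap]
}
```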

5. Create Secret Provider Class

Create a Secret Provider Class to allow AKS to fetch credentials from the provisioned Key Vault.

  • Get the Secret Provider Class YAML definition from the Terraform module outputs:

    terraform output secret_provider_class
  • Copy the output of the command into a file named secret-provider-class.yaml.

  • Run the following commands to install Nebuly in the Kubernetes namespace nebuly:

    kubectl create ns nebuly
    kubectl apply --server-side -f secret-provider-class.yaml

6. Install nebuly-platform chart

Retrieve the auto-generated values from the Terraform outputs and save them to a file named values.yaml:

terraform output helm_values

Install the Nebuly Platform Helm chart. Refer to the chart documentation for detailed configuration options.

helm install <your-release-name> oci://ghcr.io/nebuly-ai/helm-charts/nebuly-platform \
  --namespace nebuly \
  -f values.yaml \
  --timeout 30m 

ℹ️ During the initial installation of the chart, all required Nebuly LLMs are uploaded to your model registry. This process can take approximately 5 minutes. If the helm install command appears to be stuck, don't worry: it's simply waiting for the upload to finish.

7. Access Nebuly

Retrieve the IP of the Load Balancer to access the Nebuly Platform:

kubectl get svc -n nebuly-bootstrap -o jsonpath='{range .items[?(@.status.loadBalancer.ingress)]}{.status.loadBalancer.ingress[0].ip}{"\n"}{end}'

You can then register a DNS A record pointing to the Load Balancer IP address to access Nebuly via the custom domain you provided in the input variable platform_domain.
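If the domain is hosted in Azure DNS, the A record can also be managed with Terraform. A sketch with placeholder zone and resource group names:

```hcl
resource "azurerm_dns_a_record" "nebuly" {
  name                = "nebuly" # resolves to nebuly.example.com
  zone_name           = "example.com"
  resource_group_name = "my-dns-resource-group"
  ttl                 = 300
  records             = ["<load-balancer-ip>"]
}
```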

Examples

You can find examples of code that uses this Terraform module in the examples directory.

Providers

Name Version
azuread ~>2.53
azurerm ~>3.114
random ~>3.6
time ~>0.12
tls ~>4.0

Outputs

Name Description
aks_get_credentials Command for getting the credentials for connecting to the provisioned AKS cluster.
helm_values The values.yaml file for installing Nebuly with Helm.

The default standard configuration uses Nginx as the ingress controller and exposes the application to the Internet. This configuration can be customized according to specific needs.
helm_values_bootstrap The values-bootstrap.yaml file for installing the Nebuly Azure Bootstrap chart with Helm.
secret_provider_class The secret-provider-class.yaml file to make Kubernetes reference the secrets stored in the Key Vault.

Inputs

Name Description Type Default Required
aks_cluster_admin_group_object_ids Object IDs that are granted the Cluster Admin role over the AKS cluster. set(string) n/a yes
aks_cluster_admin_users User Principal Names (UPNs) of the users that are granted the Cluster Admin role over the AKS cluster. set(string) n/a yes
aks_kubernetes_version The Kubernetes version to use.
object({
workers = string
control_plane = string
})
{
"control_plane": "1.30.3",
"workers": "1.30.3"
}
no
aks_log_analytics_workspace Existing azurerm_log_analytics_workspace to attach azurerm_log_analytics_solution. Providing the config disables creation of azurerm_log_analytics_workspace.
object({
id = string
name = string
location = optional(string)
resource_group_name = optional(string)
})
null no
aks_net_profile_dns_service_ip IP address within the Kubernetes service address range that is used by cluster service discovery (kube-dns). Must be included in net_profile_cidr. Example: 10.32.0.10 string "10.32.0.10" no
aks_net_profile_service_cidr The Network Range used by the Kubernetes service. Must not overlap with the AKS Nodes address space. Example: 10.32.0.0/24 string "10.32.0.0/24" no
aks_sku_tier The AKS tier. Possible values are: Free, Standard, Premium. It is recommended to use Standard or Premium for production workloads. string "Standard" no
aks_sys_pool The configuration of the AKS System Nodes Pool.
object({
vm_size : string
nodes_max_pods : number
name : string
availability_zones : list(string)
disk_size_gb : number
disk_type : string
nodes_labels : optional(map(string), {})
nodes_tags : optional(map(string), {})
only_critical_addons_enabled : optional(bool, false)
# Auto-scaling settings
nodes_count : optional(number, null)
enable_auto_scaling : optional(bool, false)
agents_min_count : optional(number, null)
agents_max_count : optional(number, null)
})
{
"agents_max_count": 1,
"agents_min_count": 1,
"availability_zones": [
"1",
"2",
"3"
],
"disk_size_gb": 128,
"disk_type": "Ephemeral",
"enable_auto_scaling": true,
"name": "system",
"nodes_max_pods": 60,
"only_critical_addons_enabled": false,
"vm_size": "Standard_E4ads_v5"
}
no
aks_worker_pools The worker pools of the AKS cluster, each with the respective configuration.
The default configuration uses a single worker node, with no HA.
map(object({
enabled : optional(bool, true)
vm_size : string
priority : optional(string, "Regular")
tags : map(string)
max_pods : number
disk_size_gb : optional(number, 128)
disk_type : string
availability_zones : list(string)
node_taints : optional(list(string), [])
node_labels : optional(map(string), {})
# Auto-scaling settings
nodes_count : optional(number, null)
enable_auto_scaling : optional(bool, false)
nodes_min_count : optional(number, null)
nodes_max_count : optional(number, null)
}))
{
"a100wr": {
"availability_zones": [
"1",
"2",
"3"
],
"disk_size_gb": 128,
"disk_type": "Ephemeral",
"enable_auto_scaling": true,
"max_pods": 30,
"node_labels": {
"nebuly.com/accelerator": "nvidia-ampere-a100"
},
"node_taints": [
"nvidia.com/gpu=:NoSchedule"
],
"nodes_count": null,
"nodes_max_count": 1,
"nodes_min_count": 0,
"priority": "Regular",
"tags": {},
"vm_size": "Standard_NC24ads_A100_v4"
},
"t4workers": {
"availability_zones": [
"1",
"2",
"3"
],
"disk_size_gb": 128,
"disk_type": "Ephemeral",
"enable_auto_scaling": true,
"max_pods": 30,
"node_labels": {
"nebuly.com/accelerator": "nvidia-tesla-t4"
},
"node_taints": [
"nvidia.com/gpu=:NoSchedule"
],
"nodes_count": null,
"nodes_max_count": 1,
"nodes_min_count": 0,
"priority": "Regular",
"tags": {},
"vm_size": "Standard_NC4as_T4_v3"
}
}
no
azure_openai_deployment_gpt4o The configuration of the Azure OpenAI GPT-4o deployment.
object({
name : optional(string, "gpt-4o")
version : optional(string, "2024-08-06")
rate_limit : optional(number, 80)
enabled : optional(bool, true)
})
{} no
azure_openai_deployment_gpt4o_mini The configuration of the Azure OpenAI GPT-4o-mini deployment.
object({
name : optional(string, "gpt-4o-mini")
version : optional(string, "2024-07-18")
rate_limit : optional(number, 80)
enabled : optional(bool, true)
})
{} no
azure_openai_location The Azure region where to deploy the Azure OpenAI models.
Note that the models required by Nebuly are supported only in a few specific regions. For more information, refer to the Azure documentation:
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#standard-deployment-model-availability
string "EastUS" no
k8s_image_pull_secret_name The name of the Kubernetes Image Pull Secret to use.
This value will be used to auto-generate the values.yaml file for installing the Nebuly Platform Helm chart.
string "nebuly-docker-pull" no
key_vault_public_network_access_enabled Can the Key Vault be accessed from the Internet, according to the firewall rules?
Defaults to true to allow the Terraform module to be applied even from outside the private virtual network.
When set to true, firewall rules are applied, and all connections are denied by default.
bool true no
key_vault_purge_protection_enabled Is purge protection enabled for the Key Vault? bool false no
key_vault_sku_name The SKU of the Key Vault. string "Standard" no
key_vault_soft_delete_retention_days The number of days that items should be retained for once soft-deleted. This value can be between 7 and 90 days. number 7 no
location The region where to provision the resources. string n/a yes
nebuly_credentials The credentials provided by Nebuly are required for activating your platform installation.
If you haven't received your credentials or have lost them, please contact [email protected].
object({
client_id : string
client_secret : string
})
n/a yes
okta_sso Settings for configuring the Okta SSO integration.
object({
issuer : string
client_id : string
client_secret : string
})
null no
platform_domain The domain on which the deployed Nebuly platform is made accessible. string n/a yes
postgres_override_name Override the name of the PostgreSQL Server. If not provided, the name is generated based on the resource_prefix. string null no
postgres_server_admin_username The username of the admin user of the PostgreSQL Server. string "nebulyadmin" no
postgres_server_alert_rules The Azure Monitor alert rules to set on the provisioned PostgreSQL server.
map(object({
description = string
frequency = string
window_size = string
action_group_id = string
severity = number

criteria = optional(
object({
aggregation = string
metric_name = string
operator = string
threshold = number
})
, null)
dynamic_criteria = optional(
object({
aggregation = string
metric_name = string
operator = string
alert_sensitivity = string
})
, null)
}))
{} no
postgres_server_high_availability High-availability configuration of the DB server. Possible values for mode are: SameZone or ZoneRedundant.
object({
enabled : bool
mode : optional(string, "SameZone")
standby_availability_zone : optional(string, null)
})
{
"enabled": true,
"mode": "SameZone"
}
no
postgres_server_lock Optionally lock the PostgreSQL server to prevent deletion.
object({
enabled = optional(bool, false)
notes = optional(string, "Cannot be deleted.")
name = optional(string, "terraform-lock")
})
{
"enabled": true
}
no
postgres_server_maintenance_window The window for performing automatic maintenance of the PostgreSQL Server. Defaults to Sunday at 00:00 in the timezone of the server location.
object({
day_of_week : number
start_hour : number
start_minute : number
})
{
"day_of_week": 0,
"start_hour": 0,
"start_minute": 0
}
no
postgres_server_max_storage_mb The max storage allowed for the PostgreSQL Flexible Server. Possible values are 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4193280, 4194304, 8388608, 16777216 and 33553408. number 262144 no
postgres_server_optional_configurations Optional Flexible PostgreSQL configurations. Defaults to recommended configurations. map(string)
{
"intelligent_tuning": "on",
"intelligent_tuning.metric_targets": "ALL",
"metrics.autovacuum_diagnostics": "on",
"metrics.collector_database_activity": "on",
"pg_qs.query_capture_mode": "ALL",
"pg_qs.retention_period_in_days": "7",
"pg_qs.store_query_plans": "on",
"pgaudit.log": "WRITE",
"pgms_wait_sampling.query_capture_mode": "ALL",
"track_io_timing": "on"
}
no
postgres_server_point_in_time_backup The backup settings of the PostgreSQL Server.
object({
geo_redundant : optional(bool, true)
retention_days : optional(number, 30)
})
{
"geo_redundant": true,
"retention_days": 30
}
no
postgres_server_sku The SKU of the PostgreSQL Server, including the Tier and the Name. Examples: B_Standard_B1ms, GP_Standard_D2s_v3, MO_Standard_E4s_v3
object({
tier : string
name : string
})
{
"name": "Standard_D4ds_v5",
"tier": "GP"
}
no
postgres_version The PostgreSQL version to use. string "16" no
private_dns_zones Private DNS zones to use for Private Endpoint connections. If not provided, a new DNS Zone
is created and linked to the respective subnet.
object({
flexible_postgres = optional(object({
name : string
id : string
}), null)
})
{} no
resource_group_name The name of the resource group where to provision the resources. string n/a yes
resource_prefix The prefix that is used for generating resource names. string n/a yes
storage_account_override_name Override the name of the Storage Account. If not provided, the name is generated based on the resource_prefix. string null no
subnet_address_space_aks_nodes Address space of the new subnet in which to create the nodes of the AKS cluster.
If subnet_name_aks_nodes is provided, the existing subnet is used and this variable is ignored.
list(string)
[
"10.0.0.0/22"
]
no
subnet_address_space_flexible_postgres Address space of the new subnet delegated to the Flexible PostgreSQL Server service.
If subnet_name_flexible_postgres is provided, the existing subnet is used and this variable is ignored.
list(string)
[
"10.0.12.0/26"
]
no
subnet_address_space_private_endpoints Address space of the new subnet in which to create private endpoints.
If subnet_name_private_endpoints is provided, the existing subnet is used and this variable is ignored.
list(string)
[
"10.0.8.0/26"
]
no
subnet_name_aks_nodes Optional name of the subnet to be used for provisioning AKS nodes.
If not provided, a new subnet is created.
string null no
subnet_name_flexible_postgres Optional name of the subnet delegated to Flexible PostgreSQL Server service.
If not provided, a new subnet is created.
string null no
subnet_name_private_endpoints Optional name of the subnet to which to attach the Private Endpoints.
If not provided, a new subnet is created.
string null no
tags Common tags that are applied to all resources. map(string) {} no
virtual_network_address_space Address space of the new virtual network in which to create resources.
If virtual_network_name is provided, the existing virtual network is used and this variable is ignored.
list(string)
[
"10.0.0.0/16"
]
no
virtual_network_name Optional name of the virtual network in which to create the resources.
If not provided, a new virtual network is created.
string null no
whitelisted_ips Optional list of IPs or IP Ranges that will be able to access the following resources from the internet: Azure Kubernetes Service (AKS) API Server,
Azure Key Vault, Azure Storage Account. If 0.0.0.0/0 (default value), no whitelisting is enforced and the resources are accessible from all IPs.

The whitelisting excludes the Database Server, which remains unexposed to the Internet and is accessible only from the virtual network.
list(string)
[
"0.0.0.0/0"
]
no

Resources

  • resource.azuread_application.main (/terraform-docs/main.tf#231)
  • resource.azuread_group.aks_admins (/terraform-docs/main.tf#555)
  • resource.azuread_group_member.aks_admin_users (/terraform-docs/main.tf#559)
  • resource.azuread_service_principal.main (/terraform-docs/main.tf#237)
  • resource.azuread_service_principal_password.main (/terraform-docs/main.tf#242)
  • resource.azurerm_cognitive_account.main (/terraform-docs/main.tf#449)
  • resource.azurerm_cognitive_deployment.gpt_4o (/terraform-docs/main.tf#469)
  • resource.azurerm_cognitive_deployment.gpt_4o_mini (/terraform-docs/main.tf#486)
  • resource.azurerm_key_vault.main (/terraform-docs/main.tf#190)
  • resource.azurerm_key_vault_secret.azure_openai_api_key (/terraform-docs/main.tf#503)
  • resource.azurerm_key_vault_secret.azuread_application_client_id (/terraform-docs/main.tf#246)
  • resource.azurerm_key_vault_secret.azuread_application_client_secret (/terraform-docs/main.tf#255)
  • resource.azurerm_key_vault_secret.jwt_signing_key (/terraform-docs/main.tf#693)
  • resource.azurerm_key_vault_secret.nebuly_azure_client_id (/terraform-docs/main.tf#268)
  • resource.azurerm_key_vault_secret.nebuly_azure_client_secret (/terraform-docs/main.tf#277)
  • resource.azurerm_key_vault_secret.okta_sso_client_id (/terraform-docs/main.tf#705)
  • resource.azurerm_key_vault_secret.okta_sso_client_secret (/terraform-docs/main.tf#716)
  • resource.azurerm_key_vault_secret.postgres_password (/terraform-docs/main.tf#432)
  • resource.azurerm_key_vault_secret.postgres_user (/terraform-docs/main.tf#423)
  • resource.azurerm_kubernetes_cluster_node_pool.linux_pools (/terraform-docs/main.tf#650)
  • resource.azurerm_management_lock.postgres_server (/terraform-docs/main.tf#366)
  • resource.azurerm_monitor_metric_alert.postgres_server_alerts (/terraform-docs/main.tf#374)
  • resource.azurerm_postgresql_flexible_server.main (/terraform-docs/main.tf#296)
  • resource.azurerm_postgresql_flexible_server_configuration.mandatory_configurations (/terraform-docs/main.tf#347)
  • resource.azurerm_postgresql_flexible_server_configuration.optional_configurations (/terraform-docs/main.tf#340)
  • resource.azurerm_postgresql_flexible_server_database.analytics (/terraform-docs/main.tf#360)
  • resource.azurerm_postgresql_flexible_server_database.auth (/terraform-docs/main.tf#354)
  • resource.azurerm_private_dns_zone.flexible_postgres (/terraform-docs/main.tf#169)
  • resource.azurerm_private_dns_zone_virtual_network_link.flexible_postgres (/terraform-docs/main.tf#175)
  • resource.azurerm_role_assignment.aks_network_contributor (/terraform-docs/main.tf#645)
  • resource.azurerm_role_assignment.key_vault_secret_officer__current (/terraform-docs/main.tf#221)
  • resource.azurerm_role_assignment.key_vault_secret_user__aks (/terraform-docs/main.tf#213)
  • resource.azurerm_role_assignment.storage_container_models__data_contributor (/terraform-docs/main.tf#541)
  • resource.azurerm_storage_account.main (/terraform-docs/main.tf#517)
  • resource.azurerm_storage_container.models (/terraform-docs/main.tf#537)
  • resource.azurerm_subnet.aks_nodes (/terraform-docs/main.tf#125)
  • resource.azurerm_subnet.flexible_postgres (/terraform-docs/main.tf#147)
  • resource.azurerm_subnet.private_endpints (/terraform-docs/main.tf#139)
  • resource.azurerm_virtual_network.main (/terraform-docs/main.tf#117)
  • resource.random_password.postgres_server_admin_password (/terraform-docs/main.tf#291)
  • resource.time_sleep.wait_aks_creation (/terraform-docs/main.tf#632)
  • resource.tls_private_key.aks (/terraform-docs/main.tf#551)
  • resource.tls_private_key.jwt_signing_key (/terraform-docs/main.tf#689)
  • data source.azuread_user.aks_admins (/terraform-docs/main.tf#81)
  • data source.azurerm_client_config.current (/terraform-docs/main.tf#73)
  • data source.azurerm_resource_group.main (/terraform-docs/main.tf#70)
  • data source.azurerm_subnet.aks_nodes (/terraform-docs/main.tf#86)
  • data source.azurerm_subnet.flexible_postgres (/terraform-docs/main.tf#100)
  • data source.azurerm_virtual_network.main (/terraform-docs/main.tf#75)
