Terraform resources for the s2i2s-proj project.

This commit is contained in:
Andrea Dell'Amico 2026-02-01 22:51:32 +01:00
parent 5ba6e8c8a1
commit 957eada881
Signed by: adellam
GPG Key ID: 147ABE6CEB9E20FF
15 changed files with 1439 additions and 0 deletions


@@ -0,0 +1,9 @@
{
"permissions": {
"allow": [
"Bash(terraform init:*)",
"Bash(terraform validate:*)",
"Bash(terraform plan:*)"
]
}
}


@@ -133,6 +133,9 @@ variable "ssh_sources" {
   default = {
     s2i2s_vpn_1_cidr = "146.48.28.10/32"
     s2i2s_vpn_2_cidr = "146.48.28.11/32"
+    d4s_vpn_1_cidr   = "146.48.122.27/32"
+    d4s_vpn_2_cidr   = "146.48.122.49/32"
+    shell_d4s_cidr   = "146.48.122.95/32"
     isti_vpn_gw1     = "146.48.80.101/32"
     isti_vpn_gw2     = "146.48.80.102/32"
     isti_vpn_gw3     = "146.48.80.103/32"


@@ -0,0 +1,253 @@
# S2I2S Project Setup
This Terraform configuration sets up the core infrastructure components for the S2I2S OpenStack project.
## Overview
The project-setup module creates the following resources:
### Virtual Machines
| VM | Purpose | Flavor | Floating IP | DNS Record |
| --- | --- | --- | --- | --- |
| SSH Jump Proxy | Secure SSH gateway for accessing internal VMs | m1.small | Yes | `ssh-jump.s2i2s.cloud.isti.cnr.it` |
| Internal CA | Certificate Authority for internal services | m1.small | No | - |
| HAProxy L7 (x2) | Layer 7 load balancers behind Octavia | m1.medium | No | - |
| Prometheus | Monitoring and metrics collection | m1.medium | Yes | `prometheus.s2i2s.cloud.isti.cnr.it` |
All VMs run Ubuntu 24.04 and are provisioned with the standard cloud-init user data script.
### Load Balancer
An OVN-based Octavia load balancer (`s2i2s-cloud-l4-load-balancer`) provides L4 load balancing:
- **Floating IP**: Yes
- **DNS Record**: `octavia-main-lb.s2i2s.cloud.isti.cnr.it`
- **Backend**: HAProxy L7 instances (anti-affinity for HA)
| Listener | Port | Protocol | Health Check |
| --- | --- | --- | --- |
| HTTP | 80 | TCP | HTTP GET `/_haproxy_health_check` |
| HTTPS | 443 | TCP | HTTPS GET `/_haproxy_health_check` |
| Stats | 8880 | TCP | TCP connect |
### Security Groups
| Security Group | Purpose | Use On |
| --- | --- | --- |
| `s2i2s-default-sg` | Default rules: SSH via jump proxy, ICMP, Prometheus node exporter | All VMs |
| `ssh_access_to_the_jump_node` | SSH access from VPN endpoints | SSH Jump Proxy only |
| `debugging_from_jump_node` | Web debugging via SSH tunnels (ports 80, 443, 8100) | VMs needing debug access |
| `traffic_from_the_main_load_balancers` | HTTP/HTTPS from HAProxy L7 (ports 80, 443, 8080, 8888) | Backend web services |
| `traffic_from_main_lb_to_haproxy_l7` | Traffic from Octavia LB to HAProxy | HAProxy L7 VMs |
| `public_web_service` | HTTP/HTTPS from anywhere | Public-facing services with floating IP |
| `restricted_web_service` | HTTP from anywhere, HTTPS from VPNs only | Restricted services with floating IP |
| `prometheus_access_from_grafana` | HTTPS access from public Grafana server | Prometheus VM |
### Storage
- **Prometheus Data Volume**: 100 GB SSD (CephSSD) with online resize enabled
## Architecture
```text
                         Internet
                            |
        +-------------------+-------------------+
        |                   |                   |
[SSH Jump Proxy]      [Octavia LB]        [Prometheus]
        |            (Floating IP)       (Floating IP)
        |                   |
        |           +-------+-------+
        |           |               |
        |   [HAProxy L7-01]  [HAProxy L7-02]
        |           |               |
        |           +-------+-------+
        |                   |
        +-------------------+
                  |
          [Internal Network]
                  |
           +------+------+
           |             |
     [Internal CA]  [Backend VMs]
```
## Prerequisites
1. The `main_net_dns_router` configuration must be applied first (creates network, subnet, DNS zone)
2. SSH key must be configured in the OpenStack project
3. OpenStack credentials must be configured (via `clouds.yaml` or environment variables)
## Usage
```bash
# Initialize Terraform
terraform init
# Review the plan
terraform plan
# Apply the configuration
terraform apply
```
## SSH Jump Proxy Configuration
To access VMs in the S2I2S cloud, you must use the SSH jump proxy. Add the following configuration to your `~/.ssh/config` file:
```ssh-config
# S2I2S SSH Jump Proxy
# Replace <your_username> with your actual username
Host s2i2s-jump
HostName ssh-jump.s2i2s.cloud.isti.cnr.it
User <your_username>
IdentityFile ~/.ssh/your_private_key
ForwardAgent yes
# Keep connection alive
ServerAliveInterval 60
ServerAliveCountMax 3
# Pattern match for all S2I2S internal hosts by IP
# Matches any IP in the 10.10.0.x range
# Usage: ssh 10.10.0.10
Host 10.10.0.*
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
# Alternative: named aliases for specific internal hosts
Host s2i2s-prometheus
HostName 10.10.0.10
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
Host s2i2s-ca
HostName 10.10.0.4
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
Host s2i2s-haproxy-01
HostName 10.10.0.11
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
Host s2i2s-haproxy-02
HostName 10.10.0.12
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
```
### SSH Usage Examples
```bash
# Connect to the jump proxy directly
ssh s2i2s-jump
# Connect to an internal VM by IP (using pattern match from ssh config)
ssh 10.10.0.10
# Connect to a named internal host (if configured in ssh config)
ssh s2i2s-prometheus
# Connect without ssh config (replace <your_username>)
ssh -J <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it <your_username>@10.10.0.10
# Copy a file to an internal VM
scp -J <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it localfile.txt <your_username>@10.10.0.10:/tmp/
# Forward a local port to an internal service
ssh -L 8080:10.10.0.30:80 s2i2s-jump
# Create a SOCKS proxy through the jump host
ssh -D 1080 s2i2s-jump
# Then configure your browser to use SOCKS5 proxy at localhost:1080
```
### SSH Debugging via Tunnel
For debugging web applications on internal VMs, you can create SSH tunnels:
```bash
# Forward local port 8100 to a Tomcat debug port on internal VM
# (requires s2i2s-jump defined in ssh config)
ssh -L 8100:10.10.0.50:8100 s2i2s-jump
# Forward local port 8080 to HTTP on internal VM
ssh -L 8080:10.10.0.50:80 s2i2s-jump
# Forward local port 8443 to HTTPS on internal VM
ssh -L 8443:10.10.0.50:443 s2i2s-jump
# Without ssh config (replace <your_username>)
ssh -L 8080:10.10.0.50:80 <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
```
## Outputs
The module exports the following outputs for use by other Terraform configurations:
### VM IDs and IPs
- `ssh_jump_proxy_id`, `ssh_jump_proxy_public_ip`, `ssh_jump_proxy_hostname`
- `internal_ca_id`
- `main_haproxy_l7_ids`
- `prometheus_server_id`, `prometheus_public_ip`, `prometheus_hostname`
### Load Balancer Outputs
- `main_loadbalancer_id`, `main_loadbalancer_ip`, `main_loadbalancer_public_ip`, `main_loadbalancer_hostname`
### Security Group Outputs
- `default_security_group`, `default_security_group_id`, `default_security_group_name`
- `access_to_the_jump_proxy`
- `debugging`
- `traffic_from_main_haproxy`
- `public_web`
- `restricted_web`
- `main_lb_to_haproxy_l7_security_group`
- `prometheus_access_from_grafana`
### Network Outputs (re-exported from main_net_dns_router)
- `dns_zone`, `dns_zone_id`
- `main_private_network`, `main_private_subnet`, `main_subnet_network_id`
- `basic_services_ip`, `main_haproxy_l7_ip`
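These outputs can be consumed from a sibling configuration through a `terraform_remote_state` data source, mirroring the pattern this module itself uses to read the `main_net_dns_router` state. A minimal sketch (the relative state path and local names are assumptions):

```hcl
# Read this module's outputs from its local state file (hypothetical relative path)
data "terraform_remote_state" "project_setup" {
  backend = "local"
  config = {
    path = "../project-setup/terraform.tfstate"
  }
}

locals {
  # Reuse the exported security group and DNS zone in the dependent module
  default_secgroup_id = data.terraform_remote_state.project_setup.outputs.default_security_group_id
  dns_zone_id         = data.terraform_remote_state.project_setup.outputs.dns_zone_id
}
```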
## File Structure
```text
project-setup/
├── provider.tf # OpenStack provider configuration
├── main.tf # Module references and local variables
├── security-groups.tf # All security group definitions
├── ssh-jump-proxy.tf # SSH jump proxy VM and floating IP
├── internal-ca.tf # Internal CA VM
├── haproxy.tf # HAProxy L7 VMs (pair with anti-affinity)
├── prometheus.tf # Prometheus VM with data volume
├── octavia.tf # OVN-based Octavia load balancer
├── outputs.tf # Output definitions
└── README.md # This file
```
## Dependencies
This module depends on:
- `../main_net_dns_router` - Network, subnet, router, and DNS zone
- `../variables` - Project-specific variables
- `../../modules/labs_common_variables` - Common variables (images, flavors, etc.)
- `../../modules/ssh-key-ref` - SSH key reference
## Notes
- The HAProxy L7 VMs are deployed with anti-affinity to ensure they run on different hypervisors
- All VMs use volume-backed boot disks with `delete_on_termination = false` for data persistence
- The Prometheus data volume uses CephSSD storage for better I/O performance
- Volumes have `enable_online_resize = true` for live resizing capability
- Security groups are designed to minimize attack surface while allowing necessary traffic flows
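As a usage sketch, a backend web service VM would typically combine the default security group with the HAProxy traffic group on its network port, so it accepts SSH from the jump proxy and HTTP/HTTPS from the HAProxy L7 pair. The port name, resource names, and state path below are illustrative assumptions:

```hcl
# State of this project-setup module (hypothetical relative path)
data "terraform_remote_state" "s2i2s_setup" {
  backend = "local"
  config = {
    path = "../project-setup/terraform.tfstate"
  }
}

# Hypothetical backend VM port combining the exported security groups
resource "openstack_networking_port_v2" "backend_port" {
  name           = "backend-web-01-port"
  admin_state_up = true
  network_id     = data.terraform_remote_state.s2i2s_setup.outputs.main_private_network.id
  security_group_ids = [
    data.terraform_remote_state.s2i2s_setup.outputs.default_security_group_id,
    data.terraform_remote_state.s2i2s_setup.outputs.traffic_from_main_haproxy.id,
  ]
}
```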


@@ -0,0 +1,120 @@
#
# HAPROXY L7 behind the main Octavia load balancer
#
# Server group for anti-affinity (VMs on different hosts)
resource "openstack_compute_servergroup_v2" "main_haproxy_l7" {
name = "main_haproxy_l7"
policies = ["anti-affinity"]
}
# Security group for traffic from Octavia LB to HAProxy
resource "openstack_networking_secgroup_v2" "main_lb_to_haproxy_l7" {
name = "traffic_from_main_lb_to_haproxy_l7"
delete_default_rules = true
description = "Traffic coming from the main L4 lb directed to the haproxy l7 servers"
}
resource "openstack_networking_secgroup_rule_v2" "haproxy_l7_1_peer" {
security_group_id = openstack_networking_secgroup_v2.main_lb_to_haproxy_l7.id
description = "Peer traffic from haproxy l7 1 to l7 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10000
port_range_max = 10000
remote_ip_prefix = local.basic_services_ip.haproxy_l7_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy_l7_2_peer" {
security_group_id = openstack_networking_secgroup_v2.main_lb_to_haproxy_l7.id
description = "Peer traffic from haproxy l7 2 to l7 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10000
port_range_max = 10000
remote_ip_prefix = local.basic_services_ip.haproxy_l7_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "octavia_to_haproxy_l7_80" {
security_group_id = openstack_networking_secgroup_v2.main_lb_to_haproxy_l7.id
description = "Traffic from the octavia lb instance to HAPROXY l7 port 80"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = local.main_private_subnet.cidr
}
resource "openstack_networking_secgroup_rule_v2" "octavia_to_haproxy_l7_443" {
security_group_id = openstack_networking_secgroup_v2.main_lb_to_haproxy_l7.id
description = "Traffic from the octavia lb instance to HAPROXY l7 port 443"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.main_private_subnet.cidr
}
resource "openstack_networking_secgroup_rule_v2" "octavia_to_haproxy_l7_8880" {
security_group_id = openstack_networking_secgroup_v2.main_lb_to_haproxy_l7.id
description = "Traffic from the octavia lb instance to HAPROXY l7 port 8880"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 8880
port_range_max = 8880
remote_ip_prefix = local.main_private_subnet.cidr
}
# Ports in the main private network for HAProxy instances
resource "openstack_networking_port_v2" "main_haproxy_l7_port" {
count = local.haproxy_l7_data.vm_count
name = format("%s-%02d-port", local.haproxy_l7_data.name, count.index + 1)
admin_state_up = true
network_id = local.main_private_network_id
security_group_ids = [
openstack_networking_secgroup_v2.default.id,
openstack_networking_secgroup_v2.main_lb_to_haproxy_l7.id
]
fixed_ip {
subnet_id = local.main_private_subnet_id
ip_address = local.main_haproxy_l7_ip[count.index]
}
}
# HAProxy L7 instances
resource "openstack_compute_instance_v2" "main_haproxy_l7" {
count = local.haproxy_l7_data.vm_count
name = format("%s-%02d", local.haproxy_l7_data.name, count.index + 1)
availability_zone_hints = local.availability_zones_names.availability_zone_no_gpu
flavor_name = local.haproxy_l7_data.flavor
key_pair = module.ssh_settings.ssh_key_name
scheduler_hints {
group = openstack_compute_servergroup_v2.main_haproxy_l7.id
}
block_device {
uuid = local.ubuntu_2404.uuid
source_type = "image"
volume_size = 10
boot_index = 0
destination_type = "volume"
delete_on_termination = false
}
network {
port = openstack_networking_port_v2.main_haproxy_l7_port[count.index].id
}
user_data = file(local.ubuntu2404_data_file)
# Do not replace the instance when the ssh key changes
lifecycle {
ignore_changes = [
key_pair, user_data, network
]
}
}


@@ -0,0 +1,41 @@
# Internal Certificate Authority VM
# Port in the main private network
resource "openstack_networking_port_v2" "internal_ca_port" {
name = "${local.internal_ca_data.name}-port"
admin_state_up = true
network_id = local.main_private_network_id
security_group_ids = [openstack_networking_secgroup_v2.default.id]
fixed_ip {
subnet_id = local.main_private_subnet_id
ip_address = local.basic_services_ip.ca
}
}
resource "openstack_compute_instance_v2" "internal_ca" {
name = local.internal_ca_data.name
availability_zone_hints = local.availability_zones_names.availability_zone_no_gpu
flavor_name = local.internal_ca_data.flavor
key_pair = module.ssh_settings.ssh_key_name
block_device {
uuid = local.ubuntu_2404.uuid
source_type = "image"
volume_size = 10
boot_index = 0
destination_type = "volume"
delete_on_termination = false
}
network {
port = openstack_networking_port_v2.internal_ca_port.id
}
user_data = file(local.ubuntu2404_data_file)
# Do not replace the instance when the ssh key changes
lifecycle {
ignore_changes = [
key_pair, user_data, network
]
}
}


@@ -0,0 +1,71 @@
# S2I2S Project Setup
# This module sets up the core infrastructure components:
# - Security groups
# - SSH jump proxy VM
# - Internal CA VM
# - HAProxy L7 VMs (targets for Octavia)
# - Prometheus VM
# - OVN-based Octavia load balancer
# Load common variables
module "labs_common_variables" {
source = "../../modules/labs_common_variables"
}
# Load project-specific variables
module "project_variables" {
source = "../variables"
}
# Reference the network/DNS state from main_net_dns_router
data "terraform_remote_state" "privnet_dns_router" {
backend = "local"
config = {
path = "../main_net_dns_router/terraform.tfstate"
}
}
# SSH key reference module
module "ssh_settings" {
source = "../../modules/ssh-key-ref"
}
# Local variables for easier access
locals {
# From network/DNS state
dns_zone_id = data.terraform_remote_state.privnet_dns_router.outputs.dns_zone_id
dns_zone = data.terraform_remote_state.privnet_dns_router.outputs.dns_zone
main_private_network = data.terraform_remote_state.privnet_dns_router.outputs.main_private_network
main_private_network_id = data.terraform_remote_state.privnet_dns_router.outputs.main_private_network_id
main_private_subnet = data.terraform_remote_state.privnet_dns_router.outputs.main_subnet_network
main_private_subnet_id = data.terraform_remote_state.privnet_dns_router.outputs.main_subnet_network_id
os_project_data = data.terraform_remote_state.privnet_dns_router.outputs.os_project_data
# From project variables
basic_services_ip = module.project_variables.basic_services_ip
main_haproxy_l7_ip = module.project_variables.main_haproxy_l7_ip
default_security_group_name = module.project_variables.default_security_group_name
# From common variables
ssh_sources = module.labs_common_variables.ssh_sources
floating_ip_pools = module.labs_common_variables.floating_ip_pools
ssh_jump_proxy = module.labs_common_variables.ssh_jump_proxy
internal_ca_data = module.labs_common_variables.internal_ca_data
ubuntu_2204 = module.labs_common_variables.ubuntu_2204
ubuntu_2404 = module.labs_common_variables.ubuntu_2404
availability_zones_names = module.labs_common_variables.availability_zones_names
ubuntu2204_data_file = module.labs_common_variables.ubuntu2204_data_file
ubuntu2404_data_file = module.labs_common_variables.ubuntu2404_data_file
mtu_size = module.labs_common_variables.mtu_size
main_region = module.labs_common_variables.main_region
resolvers_ip = module.labs_common_variables.resolvers_ip
# From project variables - HAProxy and Prometheus
haproxy_l7_data = module.project_variables.haproxy_l7_data
prometheus_server_data = module.project_variables.prometheus_server_data
# Octavia LB settings for OVN driver
octavia_lb_name = module.project_variables.main_octavia_lb_name
octavia_lb_description = module.project_variables.main_octavia_lb_description
octavia_lb_hostname = "octavia-main-lb"
}


@@ -0,0 +1,183 @@
# Main load balancer. L4, backed by Octavia with OVN driver
# OVN driver is simpler and more lightweight than amphora:
# - No amphora VMs needed
# - Uses the main subnet directly
# - Lower overhead and faster provisioning
resource "openstack_lb_loadbalancer_v2" "main_lb" {
vip_subnet_id = local.main_private_subnet_id
name = local.octavia_lb_name
description = local.octavia_lb_description
vip_address = local.basic_services_ip.octavia_main
loadbalancer_provider = "ovn"
}
# Allocate a floating IP
resource "openstack_networking_floatingip_v2" "main_lb_ip" {
pool = local.floating_ip_pools.main_public_ip_pool
description = local.octavia_lb_description
}
resource "openstack_networking_floatingip_associate_v2" "main_lb" {
floating_ip = openstack_networking_floatingip_v2.main_lb_ip.address
port_id = openstack_lb_loadbalancer_v2.main_lb.vip_port_id
}
locals {
lb_recordset_name = "${local.octavia_lb_hostname}.${local.dns_zone.name}"
}
resource "openstack_dns_recordset_v2" "main_lb_dns_recordset" {
zone_id = local.dns_zone_id
name = local.lb_recordset_name
description = "Public IP address of the main Octavia load balancer"
ttl = 8600
type = "A"
records = [openstack_networking_floatingip_v2.main_lb_ip.address]
}
# Main HAPROXY stats listener
resource "openstack_lb_listener_v2" "main_haproxy_stats_listener" {
loadbalancer_id = openstack_lb_loadbalancer_v2.main_lb.id
protocol = "TCP"
protocol_port = 8880
description = "Listener for the stats of the main HAPROXY instances"
name = "main_haproxy_stats_listener"
allowed_cidrs = [local.ssh_sources.d4s_vpn_1_cidr, local.ssh_sources.d4s_vpn_2_cidr, local.ssh_sources.s2i2s_vpn_1_cidr, local.ssh_sources.s2i2s_vpn_2_cidr]
}
resource "openstack_lb_pool_v2" "main_haproxy_stats_pool" {
listener_id = openstack_lb_listener_v2.main_haproxy_stats_listener.id
protocol = "TCP"
lb_method = "LEAST_CONNECTIONS"
name = "main-haproxy-lb-stats"
description = "Pool for the stats of the main HAPROXY instances"
persistence {
type = "SOURCE_IP"
}
}
resource "openstack_lb_members_v2" "main_haproxy_stats_pool_members" {
pool_id = openstack_lb_pool_v2.main_haproxy_stats_pool.id
member {
name = "haproxy l7 1"
address = local.basic_services_ip.haproxy_l7_1
protocol_port = 8880
}
member {
name = "haproxy l7 2"
address = local.basic_services_ip.haproxy_l7_2
protocol_port = 8880
}
}
resource "openstack_lb_monitor_v2" "main_haproxy_stats_monitor" {
pool_id = openstack_lb_pool_v2.main_haproxy_stats_pool.id
name = "main_haproxy_stats_monitor"
type = "TCP"
delay = 20
timeout = 5
max_retries = 3
admin_state_up = true
}
# Main HAPROXY HTTP
resource "openstack_lb_listener_v2" "main_haproxy_http_listener" {
loadbalancer_id = openstack_lb_loadbalancer_v2.main_lb.id
protocol = "TCP"
protocol_port = 80
description = "HTTP listener of the main HAPROXY instances"
name = "main_haproxy_http_listener"
admin_state_up = true
}
resource "openstack_lb_pool_v2" "main_haproxy_http_pool" {
listener_id = openstack_lb_listener_v2.main_haproxy_http_listener.id
protocol = "TCP"
lb_method = "LEAST_CONNECTIONS"
name = "main-haproxy-lb-http"
description = "Pool for the HTTP listener of the main HAPROXY instances"
persistence {
type = "SOURCE_IP"
}
admin_state_up = true
}
resource "openstack_lb_members_v2" "main_haproxy_http_pool_members" {
pool_id = openstack_lb_pool_v2.main_haproxy_http_pool.id
member {
name = "haproxy l7 1"
address = local.basic_services_ip.haproxy_l7_1
protocol_port = 80
}
member {
name = "haproxy l7 2"
address = local.basic_services_ip.haproxy_l7_2
protocol_port = 80
}
}
resource "openstack_lb_monitor_v2" "main_haproxy_http_monitor" {
pool_id = openstack_lb_pool_v2.main_haproxy_http_pool.id
name = "main_haproxy_http_monitor"
type = "HTTP"
http_method = "GET"
url_path = "/_haproxy_health_check"
expected_codes = "200"
delay = 20
timeout = 5
max_retries = 3
admin_state_up = true
}
# Main HAPROXY HTTPS
resource "openstack_lb_listener_v2" "main_haproxy_https_listener" {
loadbalancer_id = openstack_lb_loadbalancer_v2.main_lb.id
protocol = "TCP"
protocol_port = 443
description = "HTTPS listener of the main HAPROXY instances"
name = "main_haproxy_https_listener"
timeout_client_data = 3600000
timeout_member_connect = 10000
timeout_member_data = 7200000
admin_state_up = true
}
resource "openstack_lb_pool_v2" "main_haproxy_https_pool" {
listener_id = openstack_lb_listener_v2.main_haproxy_https_listener.id
protocol = "TCP"
lb_method = "LEAST_CONNECTIONS"
name = "main-haproxy-lb-https"
description = "Pool for the HTTPS listener of the main HAPROXY instances"
persistence {
type = "SOURCE_IP"
}
admin_state_up = true
}
resource "openstack_lb_members_v2" "main_haproxy_https_pool_members" {
pool_id = openstack_lb_pool_v2.main_haproxy_https_pool.id
member {
name = "haproxy l7 1"
address = local.basic_services_ip.haproxy_l7_1
protocol_port = 443
}
member {
name = "haproxy l7 2"
address = local.basic_services_ip.haproxy_l7_2
protocol_port = 443
}
}
resource "openstack_lb_monitor_v2" "main_haproxy_https_monitor" {
pool_id = openstack_lb_pool_v2.main_haproxy_https_pool.id
name = "main_haproxy_https_monitor"
type = "HTTPS"
http_method = "GET"
url_path = "/_haproxy_health_check"
expected_codes = "200"
delay = 20
timeout = 5
max_retries = 3
admin_state_up = true
}


@@ -0,0 +1,179 @@
# Security groups outputs
output "default_security_group" {
value = openstack_networking_secgroup_v2.default
}
output "default_security_group_id" {
value = openstack_networking_secgroup_v2.default.id
}
output "default_security_group_name" {
value = openstack_networking_secgroup_v2.default.name
}
output "access_to_the_jump_proxy" {
value = openstack_networking_secgroup_v2.access_to_the_jump_proxy
}
output "debugging" {
value = openstack_networking_secgroup_v2.debugging
}
output "traffic_from_main_haproxy" {
value = openstack_networking_secgroup_v2.traffic_from_main_haproxy
}
output "public_web" {
value = openstack_networking_secgroup_v2.public_web
}
output "restricted_web" {
value = openstack_networking_secgroup_v2.restricted_web
}
# SSH Jump Proxy outputs
output "ssh_jump_proxy_id" {
value = openstack_compute_instance_v2.ssh_jump_proxy.id
}
output "ssh_jump_proxy_public_ip" {
value = openstack_networking_floatingip_v2.ssh_jump_proxy_ip.address
}
output "ssh_jump_proxy_hostname" {
value = openstack_dns_recordset_v2.ssh_jump_proxy_recordset.name
}
# Internal CA outputs
output "internal_ca_id" {
value = openstack_compute_instance_v2.internal_ca.id
}
# HAProxy L7 outputs
output "main_haproxy_l7_ids" {
description = "IDs of the HAProxy L7 instances"
value = openstack_compute_instance_v2.main_haproxy_l7[*].id
}
output "main_lb_to_haproxy_l7_security_group" {
value = openstack_networking_secgroup_v2.main_lb_to_haproxy_l7
}
# Prometheus outputs
output "prometheus_server_id" {
value = openstack_compute_instance_v2.prometheus_server.id
}
output "prometheus_public_ip" {
value = openstack_networking_floatingip_v2.prometheus_server_ip.address
}
output "prometheus_hostname" {
value = openstack_dns_recordset_v2.prometheus_server_recordset.name
}
output "prometheus_access_from_grafana" {
value = openstack_networking_secgroup_v2.prometheus_access_from_grafana
}
output "haproxy_l7_data" {
value = local.haproxy_l7_data
}
output "prometheus_server_data" {
value = local.prometheus_server_data
}
# Octavia / Load balancer outputs
output "main_loadbalancer_id" {
description = "Main Load balancer ID"
value = openstack_lb_loadbalancer_v2.main_lb.id
}
output "main_loadbalancer_ip" {
description = "Main Load balancer VIP address"
value = openstack_lb_loadbalancer_v2.main_lb.vip_address
}
output "main_loadbalancer_public_ip" {
description = "Main Load balancer floating IP address"
value = openstack_networking_floatingip_v2.main_lb_ip.address
}
output "main_loadbalancer_hostname" {
description = "Main Load balancer DNS hostname"
value = openstack_dns_recordset_v2.main_lb_dns_recordset.name
}
# Re-export common variables for dependent modules
output "dns_zone" {
value = local.dns_zone
}
output "dns_zone_id" {
value = local.dns_zone_id
}
output "main_private_network" {
value = local.main_private_network
}
output "main_private_subnet" {
value = local.main_private_subnet
}
output "main_subnet_network_id" {
value = local.main_private_subnet_id
}
output "basic_services_ip" {
value = local.basic_services_ip
}
output "main_haproxy_l7_ip" {
value = local.main_haproxy_l7_ip
}
output "ssh_sources" {
value = local.ssh_sources
}
output "floating_ip_pools" {
value = local.floating_ip_pools
}
output "ssh_jump_proxy" {
value = local.ssh_jump_proxy
}
output "internal_ca_data" {
value = local.internal_ca_data
}
output "ubuntu_2204" {
value = local.ubuntu_2204
}
output "availability_zones_names" {
value = local.availability_zones_names
}
output "ubuntu2204_data_file" {
value = local.ubuntu2204_data_file
}
output "mtu_size" {
value = local.mtu_size
}
output "main_region" {
value = local.main_region
}
output "resolvers_ip" {
value = local.resolvers_ip
}
output "os_project_data" {
value = local.os_project_data
}


@@ -0,0 +1,95 @@
# Prometheus server with data volume and floating IP
# Data volume for Prometheus (SSD)
resource "openstack_blockstorage_volume_v3" "prometheus_data_vol" {
name = local.prometheus_server_data.vol_data_name
size = local.prometheus_server_data.vol_data_size
volume_type = "CephSSD"
enable_online_resize = true
}
# Port in the main private network
resource "openstack_networking_port_v2" "prometheus_server_port" {
name = "${local.prometheus_server_data.name}-port"
admin_state_up = true
network_id = local.main_private_network_id
security_group_ids = [
openstack_networking_secgroup_v2.default.id,
openstack_networking_secgroup_v2.restricted_web.id,
openstack_networking_secgroup_v2.prometheus_access_from_grafana.id
]
fixed_ip {
subnet_id = local.main_private_subnet_id
ip_address = local.basic_services_ip.prometheus
}
}
# Prometheus server instance
resource "openstack_compute_instance_v2" "prometheus_server" {
name = local.prometheus_server_data.name
availability_zone_hints = local.availability_zones_names.availability_zone_no_gpu
flavor_name = local.prometheus_server_data.flavor
key_pair = module.ssh_settings.ssh_key_name
block_device {
uuid = local.ubuntu_2404.uuid
source_type = "image"
volume_size = 10
boot_index = 0
destination_type = "volume"
delete_on_termination = false
}
network {
port = openstack_networking_port_v2.prometheus_server_port.id
}
user_data = file(local.ubuntu2404_data_file)
# Do not replace the instance when the ssh key changes
lifecycle {
ignore_changes = [
key_pair, user_data, network
]
}
}
# Attach data volume to Prometheus server
resource "openstack_compute_volume_attach_v2" "prometheus_data_attach_vol" {
instance_id = openstack_compute_instance_v2.prometheus_server.id
volume_id = openstack_blockstorage_volume_v3.prometheus_data_vol.id
device = local.prometheus_server_data.vol_data_device
}
# Floating IP and DNS record
resource "openstack_networking_floatingip_v2" "prometheus_server_ip" {
pool = local.floating_ip_pools.main_public_ip_pool
description = "Prometheus server"
}
resource "openstack_networking_floatingip_associate_v2" "prometheus_server" {
floating_ip = openstack_networking_floatingip_v2.prometheus_server_ip.address
port_id = openstack_networking_port_v2.prometheus_server_port.id
}
locals {
prometheus_recordset_name = "${local.prometheus_server_data.name}.${local.dns_zone.name}"
alertmanager_recordset_name = "alertmanager.${local.dns_zone.name}"
}
resource "openstack_dns_recordset_v2" "prometheus_server_recordset" {
zone_id = local.dns_zone_id
name = local.prometheus_recordset_name
description = "Public IP address of the Prometheus server"
ttl = 8600
type = "A"
records = [openstack_networking_floatingip_v2.prometheus_server_ip.address]
}
resource "openstack_dns_recordset_v2" "alertmanager_server_recordset" {
zone_id = local.dns_zone_id
name = local.alertmanager_recordset_name
description = "Prometheus alertmanager"
ttl = 8600
type = "CNAME"
records = [local.prometheus_recordset_name]
}


@@ -0,0 +1,14 @@
# Define required providers
terraform {
required_version = ">= 0.14.0"
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "~> 1.53.0"
}
}
}
provider "openstack" {
cloud = "ISTI-Cloud"
}


@@ -0,0 +1,374 @@
#
# Default security group - should be added to every instance
resource "openstack_networking_secgroup_v2" "default" {
name = local.default_security_group_name
delete_default_rules = true
description = "Default security group with rules for SSH access via the jump proxy and Prometheus scraping"
}
resource "openstack_networking_secgroup_rule_v2" "egress-ipv4" {
security_group_id = openstack_networking_secgroup_v2.default.id
direction = "egress"
ethertype = "IPv4"
}
resource "openstack_networking_secgroup_rule_v2" "ingress-icmp" {
security_group_id = openstack_networking_secgroup_v2.default.id
description = "Allow ICMP from remote"
direction = "ingress"
ethertype = "IPv4"
remote_ip_prefix = "0.0.0.0/0"
protocol = "icmp"
}
resource "openstack_networking_secgroup_rule_v2" "ssh-jump-proxy" {
security_group_id = openstack_networking_secgroup_v2.default.id
description = "SSH traffic from the jump proxy"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = local.basic_services_ip.ssh_jump_cidr
}
resource "openstack_networking_secgroup_rule_v2" "prometheus-node" {
security_group_id = openstack_networking_secgroup_v2.default.id
description = "Prometheus access to the node exporter"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 9100
port_range_max = 9100
remote_ip_prefix = local.basic_services_ip.prometheus_cidr
}
#
# SSH access to the jump proxy. Used by the jump proxy VM only
resource "openstack_networking_secgroup_v2" "access_to_the_jump_proxy" {
name = "ssh_access_to_the_jump_node"
delete_default_rules = true
description = "Security group that allows SSH access to the jump node from a limited set of sources"
}
resource "openstack_networking_secgroup_rule_v2" "ssh-s2i2s-vpn-1" {
security_group_id = openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
description = "SSH traffic from S2I2S VPN 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = local.ssh_sources.s2i2s_vpn_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "ssh-s2i2s-vpn-2" {
security_group_id = openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
description = "SSH traffic from S2I2S VPN 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = local.ssh_sources.s2i2s_vpn_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "ssh-d4s-vpn-1" {
security_group_id = openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
description = "SSH traffic from D4Science VPN 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = local.ssh_sources.d4s_vpn_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "ssh-d4s-vpn-2" {
security_group_id = openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
description = "SSH traffic from D4Science VPN 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = local.ssh_sources.d4s_vpn_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "ssh-shell-d4s" {
security_group_id = openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
description = "SSH traffic from shell.d4science.org"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = local.ssh_sources.shell_d4s_cidr
}
resource "openstack_networking_secgroup_rule_v2" "ssh-infrascience-net" {
security_group_id = openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
description = "SSH traffic from the InfraScience network"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = local.ssh_sources.infrascience_net_cidr
}
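The six rules above differ only in their source CIDR. A possible refactor (not part of the original configuration) collapses them into a single resource that iterates over the `local.ssh_sources` map already defined in the variables file:

```hcl
# Hypothetical refactor: one rule per entry of local.ssh_sources.
# Assumes the map keys (s2i2s_vpn_1_cidr, d4s_vpn_1_cidr, ...) and values
# are exactly as declared in the ssh_sources variable.
resource "openstack_networking_secgroup_rule_v2" "ssh_from_trusted_sources" {
  for_each          = local.ssh_sources
  security_group_id = openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
  description       = "SSH traffic from ${each.key}"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = each.value
}
```

Note that adopting this on an existing deployment changes the resource addresses, so each rule would need a `terraform state mv` (or be destroyed and recreated) to avoid churn.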
# Debug via tunnel from the jump proxy node
resource "openstack_networking_secgroup_v2" "debugging" {
name = "debugging_from_jump_node"
delete_default_rules = "true"
description = "Security group that allows web app debugging via tunnel from the ssh jump node"
}
resource "openstack_networking_secgroup_rule_v2" "shell_8100" {
security_group_id = openstack_networking_secgroup_v2.debugging.id
description = "Tomcat debug on port 8100 from the shell jump proxy"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 8100
port_range_max = 8100
remote_ip_prefix = local.basic_services_ip.ssh_jump_cidr
}
resource "openstack_networking_secgroup_rule_v2" "shell_80" {
security_group_id = openstack_networking_secgroup_v2.debugging.id
description = "http debug port 80 from the shell jump proxy"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = local.basic_services_ip.ssh_jump_cidr
}
resource "openstack_networking_secgroup_rule_v2" "shell_443" {
security_group_id = openstack_networking_secgroup_v2.debugging.id
description = "https debug port 443 from the shell jump proxy"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.basic_services_ip.ssh_jump_cidr
}
# Traffic from the main HAPROXY load balancers
# Use on the web services that are exposed through the main HAPROXY
resource "openstack_networking_secgroup_v2" "traffic_from_main_haproxy" {
name = "traffic_from_the_main_load_balancers"
delete_default_rules = "true"
description = "Allow traffic from the main L7 HAPROXY load balancers"
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-1-80" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTP traffic from HAPROXY L7 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = local.basic_services_ip.haproxy_l7_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-2-80" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTP traffic from HAPROXY L7 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = local.basic_services_ip.haproxy_l7_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-1-443" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTPS traffic from HAPROXY L7 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.basic_services_ip.haproxy_l7_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-2-443" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTPS traffic from HAPROXY L7 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.basic_services_ip.haproxy_l7_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-1-8080" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTP traffic on port 8080 from HAPROXY L7 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 8080
port_range_max = 8080
remote_ip_prefix = local.basic_services_ip.haproxy_l7_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-2-8080" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTP traffic on port 8080 from HAPROXY L7 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 8080
port_range_max = 8080
remote_ip_prefix = local.basic_services_ip.haproxy_l7_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-1-8888" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTP traffic on port 8888 from HAPROXY L7 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 8888
port_range_max = 8888
remote_ip_prefix = local.basic_services_ip.haproxy_l7_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "haproxy-l7-2-8888" {
security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
description = "HTTP traffic on port 8888 from HAPROXY L7 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 8888
port_range_max = 8888
remote_ip_prefix = local.basic_services_ip.haproxy_l7_2_cidr
}
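The eight HAProxy rules above are the cross product of four ports and two load-balancer addresses. An alternative sketch (illustrative only, using the same locals this file already references) generates them with `setproduct`:

```hcl
# Hypothetical refactor: derive the port x instance rule matrix instead of
# writing eight near-identical resources.
locals {
  haproxy_l7_cidrs = {
    haproxy_l7_1 = local.basic_services_ip.haproxy_l7_1_cidr
    haproxy_l7_2 = local.basic_services_ip.haproxy_l7_2_cidr
  }
  haproxy_ports = [80, 443, 8080, 8888]
  haproxy_rules = {
    for pair in setproduct(keys(local.haproxy_l7_cidrs), local.haproxy_ports) :
    "${pair[0]}-${pair[1]}" => {
      cidr = local.haproxy_l7_cidrs[pair[0]]
      port = pair[1]
    }
  }
}

resource "openstack_networking_secgroup_rule_v2" "from_main_haproxy" {
  for_each          = local.haproxy_rules
  security_group_id = openstack_networking_secgroup_v2.traffic_from_main_haproxy.id
  description       = "Port ${each.value.port} traffic from ${each.key}"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = each.value.port
  port_range_max    = each.value.port
  remote_ip_prefix  = each.value.cidr
}
```

As with any `count`-to-`for_each` style change, existing deployments would need state moves to keep the rules in place.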
# Security group that exposes web services directly. A floating IP is required.
resource "openstack_networking_secgroup_v2" "public_web" {
name = "public_web_service"
delete_default_rules = "true"
description = "Security group that allows HTTPS and HTTP from everywhere, for the services that are not behind any load balancer"
}
resource "openstack_networking_secgroup_rule_v2" "public_http" {
security_group_id = openstack_networking_secgroup_v2.public_web.id
description = "Allow HTTP from everywhere"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "public_https" {
security_group_id = openstack_networking_secgroup_v2.public_web.id
description = "Allow HTTPS from everywhere"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = "0.0.0.0/0"
}
# HTTP and HTTPS access through the VPN nodes. Floating IP is required
resource "openstack_networking_secgroup_v2" "restricted_web" {
name = "restricted_web_service"
delete_default_rules = "true"
description = "Security group that restricts HTTPS sources to the VPN nodes and shell.d4science.org. HTTP is open to everyone because the Let's Encrypt HTTP-01 challenge requires it"
}
resource "openstack_networking_secgroup_rule_v2" "http_from_everywhere" {
security_group_id = openstack_networking_secgroup_v2.restricted_web.id
description = "Allow HTTP from everywhere"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "https_from_d4s_vpn_1" {
security_group_id = openstack_networking_secgroup_v2.restricted_web.id
description = "Allow HTTPS from D4Science VPN 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.ssh_sources.d4s_vpn_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "https_from_d4s_vpn_2" {
security_group_id = openstack_networking_secgroup_v2.restricted_web.id
description = "Allow HTTPS from D4Science VPN 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.ssh_sources.d4s_vpn_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "https_from_s2i2s_vpn_1" {
security_group_id = openstack_networking_secgroup_v2.restricted_web.id
description = "Allow HTTPS from S2I2S VPN 1"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.ssh_sources.s2i2s_vpn_1_cidr
}
resource "openstack_networking_secgroup_rule_v2" "https_from_s2i2s_vpn_2" {
security_group_id = openstack_networking_secgroup_v2.restricted_web.id
description = "Allow HTTPS from S2I2S VPN 2"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.ssh_sources.s2i2s_vpn_2_cidr
}
resource "openstack_networking_secgroup_rule_v2" "https_from_shell_d4s" {
security_group_id = openstack_networking_secgroup_v2.restricted_web.id
description = "Allow HTTPS from shell.d4science.org"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.ssh_sources.shell_d4s_cidr
}
# Prometheus access from public Grafana server
resource "openstack_networking_secgroup_v2" "prometheus_access_from_grafana" {
name = "prometheus_access_from_grafana"
delete_default_rules = "true"
description = "The public grafana server must be able to get data from Prometheus"
}
resource "openstack_networking_secgroup_rule_v2" "grafana_d4s" {
security_group_id = openstack_networking_secgroup_v2.prometheus_access_from_grafana.id
description = "Allow HTTPS from grafana.d4science.org"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = local.prometheus_server_data.public_grafana_server_cidr
}


@ -0,0 +1,68 @@
# VM used as jump proxy. A floating IP is required
# Port in the main private network
resource "openstack_networking_port_v2" "ssh_jump_proxy_port" {
name = "${local.ssh_jump_proxy.name}-port"
admin_state_up = true
network_id = local.main_private_network_id
security_group_ids = [
openstack_networking_secgroup_v2.default.id,
openstack_networking_secgroup_v2.access_to_the_jump_proxy.id
]
fixed_ip {
subnet_id = local.main_private_subnet_id
ip_address = local.basic_services_ip.ssh_jump
}
}
resource "openstack_compute_instance_v2" "ssh_jump_proxy" {
name = local.ssh_jump_proxy.name
availability_zone_hints = local.availability_zones_names.availability_zone_no_gpu
flavor_name = local.ssh_jump_proxy.flavor
key_pair = module.ssh_settings.ssh_key_name
block_device {
uuid = local.ubuntu_2404.uuid
source_type = "image"
volume_size = 30
boot_index = 0
destination_type = "volume"
delete_on_termination = false
}
network {
port = openstack_networking_port_v2.ssh_jump_proxy_port.id
}
user_data = file(local.ubuntu2404_data_file)
# Do not replace the instance when the SSH key, user data or network change
lifecycle {
ignore_changes = [
key_pair, user_data, network
]
}
}
# Floating IP and DNS record
resource "openstack_networking_floatingip_v2" "ssh_jump_proxy_ip" {
pool = local.floating_ip_pools.main_public_ip_pool
description = "SSH Proxy Jump Server"
}
resource "openstack_networking_floatingip_associate_v2" "ssh_jump_proxy" {
floating_ip = openstack_networking_floatingip_v2.ssh_jump_proxy_ip.address
port_id = openstack_networking_port_v2.ssh_jump_proxy_port.id
}
locals {
ssh_recordset_name = "${local.ssh_jump_proxy.name}.${local.dns_zone.name}"
}
resource "openstack_dns_recordset_v2" "ssh_jump_proxy_recordset" {
zone_id = local.dns_zone_id
name = local.ssh_recordset_name
description = "Public IP address of the SSH Proxy Jump server"
ttl = 8600
type = "A"
records = [openstack_networking_floatingip_v2.ssh_jump_proxy_ip.address]
}


@ -29,3 +29,11 @@ output "main_octavia_lb_name" {
output "main_octavia_lb_description" {
value = var.main_octavia_lb_description
}
output "haproxy_l7_data" {
value = var.haproxy_l7_data
}
output "prometheus_server_data" {
value = var.prometheus_server_data
}


@ -75,3 +75,24 @@ variable "main_octavia_lb_name" {
variable "main_octavia_lb_description" {
default = "Main L4 load balancer for the S2I2S services"
}
variable "haproxy_l7_data" {
type = map(string)
default = {
name = "main-haproxy-l7"
flavor = "m1.medium"
vm_count = "2"
}
}
variable "prometheus_server_data" {
type = map(string)
default = {
name = "prometheus"
flavor = "m1.medium"
vol_data_name = "prometheus-data"
vol_data_size = "100"
vol_data_device = "/dev/vdb"
public_grafana_server_cidr = "146.48.28.103/32"
}
}
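Because both variables are typed `map(string)`, an override replaces the whole default map rather than merging into it, and numeric values must stay quoted. An illustrative `terraform.tfvars` entry (file name and new values are examples, not part of this commit) that resizes the Prometheus data volume would therefore restate every key:

```hcl
# terraform.tfvars (illustrative): map(string) defaults are replaced
# wholesale, so all keys must be repeated even when only one changes.
prometheus_server_data = {
  name                       = "prometheus"
  flavor                     = "m1.medium"
  vol_data_name              = "prometheus-data"
  vol_data_size              = "200"
  vol_data_device            = "/dev/vdb"
  public_grafana_server_cidr = "146.48.28.103/32"
}
```

Declaring the variable as an `object(...)` type instead would allow proper number types for `vm_count` and `vol_data_size` and catch missing keys at plan time.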