# S2I2S Project Setup
This Terraform configuration sets up the core infrastructure components for the S2I2S OpenStack project.
## Overview
The project-setup module creates the following resources:
### Virtual Machines
| VM | Purpose | Flavor | Floating IP | DNS Record |
| --- | --- | --- | --- | --- |
| SSH Jump Proxy | Secure SSH gateway for accessing internal VMs | m1.small | Yes | `ssh-jump.s2i2s.cloud.isti.cnr.it` |
| Internal CA | Certificate Authority for internal services | m1.small | No | - |
| HAProxy L7 (x2) | Layer 7 load balancers behind Octavia | m1.medium | No | - |
| Prometheus | Monitoring and metrics collection | m1.medium | Yes | `prometheus.s2i2s.cloud.isti.cnr.it` |
All VMs run Ubuntu 24.04 and are provisioned with the standard cloud-init user data script.
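As a rough sketch, a VM such as the SSH jump proxy is declared along these lines with the OpenStack Terraform provider (resource names, the image lookup, and the cloud-init path below are illustrative assumptions, not the module's actual code):

```hcl
# Illustrative sketch only: identifiers and paths are assumed,
# not copied from ssh-jump-proxy.tf.
resource "openstack_compute_instance_v2" "ssh_jump_proxy" {
  name        = "ssh-jump-proxy"
  flavor_name = "m1.small"
  key_pair    = var.ssh_key_name
  user_data   = file("${path.module}/cloud-init.yaml") # assumed path

  security_groups = [
    "s2i2s-default-sg",
    "ssh_access_to_the_jump_node",
  ]

  # Volume-backed boot disk, kept on VM deletion (see Notes below)
  block_device {
    uuid                  = data.openstack_images_image_v2.ubuntu_2404.id
    source_type           = "image"
    destination_type      = "volume"
    boot_index            = 0
    volume_size           = 10 # assumed size
    delete_on_termination = false
  }

  network {
    name = "s2i2s-main-net" # assumed network name
  }
}
```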
### Load Balancer
An OVN-based Octavia load balancer (`s2i2s-cloud-l4-load-balancer`) distributes inbound traffic at layer 4 (TCP):
- **Floating IP**: Yes
- **DNS Record**: `octavia-main-lb.s2i2s.cloud.isti.cnr.it`
- **Backend**: HAProxy L7 instances (anti-affinity for HA)
| Listener | Port | Protocol | Health Check |
| --- | --- | --- | --- |
| HTTP | 80 | TCP | HTTP GET `/_haproxy_health_check` |
| HTTPS | 443 | TCP | HTTPS GET `/_haproxy_health_check` |
| Stats | 8880 | TCP | TCP connect |
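One listener chain from the table above can be sketched with the provider's Octavia resources roughly as follows (resource names, timings, and the pool method are assumptions; the OVN provider driver only supports the `SOURCE_IP_PORT` algorithm):

```hcl
# Illustrative sketch of the HTTP listener chain; the actual
# definitions live in octavia.tf and may differ.
resource "openstack_lb_listener_v2" "http" {
  name            = "http-listener"
  protocol        = "TCP"
  protocol_port   = 80
  loadbalancer_id = openstack_lb_loadbalancer_v2.main.id # assumed name
}

resource "openstack_lb_pool_v2" "http" {
  protocol    = "TCP"
  lb_method   = "SOURCE_IP_PORT" # required by the OVN provider driver
  listener_id = openstack_lb_listener_v2.http.id
}

resource "openstack_lb_monitor_v2" "http" {
  pool_id     = openstack_lb_pool_v2.http.id
  type        = "HTTP"
  url_path    = "/_haproxy_health_check"
  delay       = 5 # assumed timings
  timeout     = 5
  max_retries = 3
}
```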
### Security Groups
| Security Group | Purpose | Applied To |
| --- | --- | --- |
| `s2i2s-default-sg` | Default rules: SSH via jump proxy, ICMP, Prometheus node exporter | All VMs |
| `ssh_access_to_the_jump_node` | SSH access from VPN endpoints | SSH Jump Proxy only |
| `debugging_from_jump_node` | Web debugging via SSH tunnels (ports 80, 443, 8100) | VMs needing debug access |
| `traffic_from_the_main_load_balancers` | HTTP/HTTPS from HAProxy L7 (ports 80, 443, 8080, 8888) | Backend web services |
| `traffic_from_main_lb_to_haproxy_l7` | Traffic from Octavia LB to HAProxy | HAProxy L7 VMs |
| `public_web_service` | HTTP/HTTPS from anywhere | Public-facing services with floating IP |
| `restricted_web_service` | HTTP from anywhere, HTTPS from VPNs only | Restricted services with floating IP |
| `prometheus_access_from_grafana` | HTTPS access from public Grafana server | Prometheus VM |
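Each of these groups follows the same pattern of a security group plus ingress rules. A minimal sketch for `public_web_service` (rule names and the HTTP twin are assumed, only the group name comes from the table):

```hcl
# Illustrative sketch; security-groups.tf holds the real definitions.
resource "openstack_networking_secgroup_v2" "public_web_service" {
  name        = "public_web_service"
  description = "HTTP/HTTPS from anywhere"
}

resource "openstack_networking_secgroup_rule_v2" "public_https" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.public_web_service.id
}
```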
### Storage
- **Prometheus Data Volume**: 100 GB SSD (CephSSD) with online resize enabled
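In provider terms this maps to a block-storage volume roughly like the following (the resource name is assumed; the size, volume type, and online-resize flag come from this README):

```hcl
# Illustrative sketch; see prometheus.tf for the real definition.
resource "openstack_blockstorage_volume_v3" "prometheus_data" {
  name                 = "prometheus-data" # assumed name
  size                 = 100
  volume_type          = "CephSSD"
  enable_online_resize = true # allows live resizing of the attached volume
}
```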
## Architecture
```text
                        Internet
                            |
        +-------------------+-------------------+
        |                   |                   |
[SSH Jump Proxy]      [Octavia LB]        [Prometheus]
        |             (Floating IP)       (Floating IP)
        |                   |
        |           +-------+-------+
        |           |               |
        |    [HAProxy L7-01] [HAProxy L7-02]
        |           |               |
        |           +-------+-------+
        |                   |
        +-------------------+
                  |
         [Internal Network]
                  |
          +-------+-------+
          |               |
    [Internal CA]   [Backend VMs]
```
## Prerequisites
1. The `main_net_dns_router` configuration must be applied first (creates network, subnet, DNS zone)
2. SSH key must be configured in the OpenStack project
3. OpenStack credentials must be configured (via `clouds.yaml` or environment variables)
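When using `clouds.yaml`, the provider block typically just names the cloud entry; a minimal sketch (the entry name `s2i2s` is an assumption, use whatever your `clouds.yaml` defines):

```hcl
# Sketch of a provider.tf wired to clouds.yaml; the cloud entry
# name is an assumption.
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

provider "openstack" {
  cloud = "s2i2s" # entry in ~/.config/openstack/clouds.yaml
}
```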
## Usage
```bash
# Initialize Terraform
terraform init
# Review the plan
terraform plan
# Apply the configuration
terraform apply
```
## SSH Jump Proxy Configuration
To access VMs in the S2I2S cloud, you must use the SSH jump proxy. Add the following configuration to your `~/.ssh/config` file:
```ssh-config
# S2I2S SSH Jump Proxy
# Replace <your_username> with your actual username
Host s2i2s-jump
HostName ssh-jump.s2i2s.cloud.isti.cnr.it
User <your_username>
IdentityFile ~/.ssh/your_private_key
ForwardAgent yes
# Keep connection alive
ServerAliveInterval 60
ServerAliveCountMax 3
# Pattern match for all S2I2S internal hosts by IP
# Matches any IP in the 10.10.0.x range
# Usage: ssh 10.10.0.10
Host 10.10.0.*
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
# Alternative: named aliases for specific internal hosts
Host s2i2s-prometheus
HostName 10.10.0.10
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
Host s2i2s-ca
HostName 10.10.0.4
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
Host s2i2s-haproxy-01
HostName 10.10.0.11
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
Host s2i2s-haproxy-02
HostName 10.10.0.12
User <your_username>
ForwardAgent yes
ProxyJump <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
```
### SSH Usage Examples
```bash
# Connect to the jump proxy directly
ssh s2i2s-jump
# Connect to an internal VM by IP (using pattern match from ssh config)
ssh 10.10.0.10
# Connect to a named internal host (if configured in ssh config)
ssh s2i2s-prometheus
# Connect without ssh config (replace <your_username>)
ssh -J <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it <your_username>@10.10.0.10
# Copy a file to an internal VM
scp -J <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it localfile.txt <your_username>@10.10.0.10:/tmp/
# Forward a local port to an internal service
ssh -L 8080:10.10.0.30:80 s2i2s-jump
# Create a SOCKS proxy through the jump host
ssh -D 1080 s2i2s-jump
# Then configure your browser to use SOCKS5 proxy at localhost:1080
```
### SSH Debugging via Tunnel
For debugging web applications on internal VMs, you can create SSH tunnels:
```bash
# Forward local port 8100 to a Tomcat debug port on internal VM
# (requires s2i2s-jump defined in ssh config)
ssh -L 8100:10.10.0.50:8100 s2i2s-jump
# Forward local port 8080 to HTTP on internal VM
ssh -L 8080:10.10.0.50:80 s2i2s-jump
# Forward local port 8443 to HTTPS on internal VM
ssh -L 8443:10.10.0.50:443 s2i2s-jump
# Without ssh config (replace <your_username>)
ssh -L 8080:10.10.0.50:80 <your_username>@ssh-jump.s2i2s.cloud.isti.cnr.it
```
## Outputs
The module exports the following outputs for use by other Terraform configurations:
### VM IDs and IPs
- `ssh_jump_proxy_id`, `ssh_jump_proxy_public_ip`, `ssh_jump_proxy_hostname`
- `internal_ca_id`
- `main_haproxy_l7_ids`
- `prometheus_server_id`, `prometheus_public_ip`, `prometheus_hostname`
### Load Balancer Outputs
- `main_loadbalancer_id`, `main_loadbalancer_ip`, `main_loadbalancer_public_ip`, `main_loadbalancer_hostname`
### Security Group Outputs
- `default_security_group`, `default_security_group_id`, `default_security_group_name`
- `access_to_the_jump_proxy`
- `debugging`
- `traffic_from_main_haproxy`
- `public_web`
- `restricted_web`
- `main_lb_to_haproxy_l7_security_group`
- `prometheus_access_from_grafana`
### Network Outputs (re-exported from main_net_dns_router)
- `dns_zone`, `dns_zone_id`
- `main_private_network`, `main_private_subnet`, `main_subnet_network_id`
- `basic_services_ip`, `main_haproxy_l7_ip`
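A sibling configuration would typically consume these outputs through a `terraform_remote_state` data source; a minimal sketch, assuming a local state backend at the path shown (both the backend type and path are assumptions):

```hcl
# Illustrative consumer; backend type and state path are assumed.
data "terraform_remote_state" "project_setup" {
  backend = "local"
  config = {
    path = "../project-setup/terraform.tfstate"
  }
}

# Example: attach a backend VM to the shared default security group
resource "openstack_compute_instance_v2" "backend" {
  name = "backend-vm"
  # ...
  security_groups = [
    data.terraform_remote_state.project_setup.outputs.default_security_group_name,
  ]
}
```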
## File Structure
```text
project-setup/
├── provider.tf # OpenStack provider configuration
├── main.tf # Module references and local variables
├── security-groups.tf # All security group definitions
├── ssh-jump-proxy.tf # SSH jump proxy VM and floating IP
├── internal-ca.tf # Internal CA VM
├── haproxy.tf # HAProxy L7 VMs (pair with anti-affinity)
├── prometheus.tf # Prometheus VM with data volume
├── octavia.tf # OVN-based Octavia load balancer
├── outputs.tf # Output definitions
└── README.md # This file
```
## Dependencies
This module depends on:
- `../main_net_dns_router` - Network, subnet, router, and DNS zone
- `../variables` - Project-specific variables
- `../../modules/labs_common_variables` - Common variables (images, flavors, etc.)
- `../../modules/ssh-key-ref` - SSH key reference
## Notes
- The HAProxy L7 VMs are deployed with anti-affinity to ensure they run on different hypervisors
- All VMs use volume-backed boot disks with `delete_on_termination = false` for data persistence
- The Prometheus data volume uses CephSSD storage for better I/O performance
- Volumes have `enable_online_resize = true` for live resizing capability
- Security groups are designed to minimize attack surface while allowing necessary traffic flows
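The anti-affinity note above is typically expressed with a Nova server group plus a scheduler hint; a sketch under assumed names:

```hcl
# Illustrative sketch of the anti-affinity setup in haproxy.tf;
# resource and instance names are assumptions.
resource "openstack_compute_servergroup_v2" "haproxy_l7" {
  name     = "haproxy-l7-anti-affinity"
  policies = ["anti-affinity"] # members land on different hypervisors
}

resource "openstack_compute_instance_v2" "haproxy_l7" {
  count = 2
  name  = "haproxy-l7-${format("%02d", count.index + 1)}"
  # ...
  scheduler_hints {
    group = openstack_compute_servergroup_v2.haproxy_l7.id
  }
}
```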