A guide to the setup with Terraform

What you should know: I am currently working for Orbit Cloud Solutions as a Cloud Advisor, but any posts on this blog reflect my own views and opinions only.

Oracle and Microsoft announced a few weeks ago that their cloud offerings Oracle Cloud Infrastructure (OCI) and Microsoft Azure will have a stronger interconnection, thus making multi-cloud setups for enterprises a tad easier.

Important update: It seems the interconnect offering was more in demand than expected. You now need approval for access to the ExpressRoute setup required for the interconnect in Azure, so fill out the survey and hope that Microsoft grants you access.

There are guides for both Oracle Cloud (OCI) and Microsoft Azure that show the setup of an interconnection in more or less detail, yet each focuses on just one part of the setup. Since I was still missing a more integrated description of how to get OCI and Azure interconnected, I decided to create the required Terraform configurations and write down an explanation of the necessary steps.

Getting Started

From a technical point of view the interconnection between the two clouds will not surprise many: both OCI and Azure offer services for connecting the cloud to another network via a dedicated line. The most common use case for this is connecting an on-premises data center to the public cloud. The interconnection between OCI and Azure works just like that – one FastConnect circuit on the OCI side and one ExpressRoute circuit on the Azure side. No magic involved.

Therefore, in theory, all one has to do is set up FastConnect and ExpressRoute, define proper routing and security lists, and you are done. In the following sections I will show how to do the required setup using Terraform.

Prerequisites

  • OCI account and a working setup for the Terraform provider.
  • Azure account and a working setup for the Terraform provider (I used a client secret).
  • SSH keys for connecting to the VMs used for testing the setup.

Overview of the infrastructure setup

On the OCI side there will be a VCN (10.1.0.0/16) with one subnet (10.1.1.0/24). Two test VMs will later be deployed in this subnet. The VCN is connected to a FastConnect circuit to Azure using a Dynamic Routing Gateway (DRG).

On the Azure side there will be a VNET (10.2.0.0/16) with two subnets – one subnet (10.2.1.0/24) that will eventually contain the test VMs and one subnet (10.2.2.0/26) that is technically required for the Virtual Network Gateway (VNG). Traffic routed to the VNG will go to an ExpressRoute circuit to OCI.

Environment Setup

For OCI I use some environment variables. More and more regions are being interconnected (e.g. London and Toronto), but this example will focus on the original preview regions Ashburn/Washington DC.

export TF_VAR_oci_tenancy_ocid="ocid1.tenancy..."
export TF_VAR_oci_user_ocid="ocid1.user..."
export TF_VAR_oci_compartment_ocid="ocid1.compartment..."
export TF_VAR_oci_fingerprint=...
export TF_VAR_oci_private_key_path=...
export TF_VAR_oci_region=us-ashburn-1

For Azure it is pretty similar; the only currently supported region here is Washington (eastus).

export TF_VAR_arm_client_id="..."
export TF_VAR_arm_client_secret="..."
export TF_VAR_arm_tenant_id="..."
export TF_VAR_arm_subscription_id="..."
export TF_VAR_arm_region=eastus

Finally, there is some more general configuration.

export TF_VAR_ssh_public_key=$(cat ~/.ssh/id_rsa.pub)

All these environment variables are picked up by Terraform as input variables.
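For completeness, here is a minimal sketch of the matching variable declarations (plain string variables whose names match the references used throughout the configuration). Note that Terraform matches TF_VAR_ environment variables to variable names case-sensitively.

variable "oci_tenancy_ocid" {}
variable "oci_user_ocid" {}
variable "oci_compartment_ocid" {}
variable "oci_fingerprint" {}
variable "oci_private_key_path" {}
variable "oci_region" {}

variable "arm_client_id" {}
variable "arm_client_secret" {}
variable "arm_tenant_id" {}
variable "arm_subscription_id" {}
variable "arm_region" {}

variable "ssh_public_key" {}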

Providers

Most of the variables are used for the configuration of the Terraform providers.

provider "oci" {
  version          = ">= 3.0.0"
  tenancy_ocid     = "${var.oci_tenancy_ocid}"
  user_ocid        = "${var.oci_user_ocid}"
  fingerprint      = "${var.oci_fingerprint}"
  private_key_path = "${var.oci_private_key_path}"
  region           = "${var.oci_region}"
}

provider "azurerm" {
  version         = ">=1.28.0"
  subscription_id = "${var.arm_subscription_id}"
  client_id       = "${var.arm_client_id}"
  client_secret   = "${var.arm_client_secret}"
  tenant_id       = "${var.arm_tenant_id}"
}

Network for interconnect

The way the interconnect setup works is that an ExpressRoute circuit is first set up in Azure. When it is ready you will receive a service key that can then be used in OCI to set up FastConnect. Therefore I will start with Azure.

Azure

For easier handling we will first create a new resource group in the Washington (eastus) region.

resource "azurerm_resource_group" "oci_connect" {
  name     = "oci_connect"
  location = "${var.arm_region}"
}

Then we create a new VNET for 10.2.0.0/16 and add a subnet 10.2.1.0/24 which will contain the resources that we will actually use for testing.

resource "azurerm_virtual_network" "oci_connect_vnet" {
  name                = "oci-connect-network"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"
  location            = "${azurerm_resource_group.oci_connect.location}"
  address_space       = ["10.2.0.0/16"]
}

resource "azurerm_subnet" "oci_connect_subnet" {
  name                 = "oci-connect-subnet"
  resource_group_name  = "${azurerm_resource_group.oci_connect.name}"
  virtual_network_name = "${azurerm_virtual_network.oci_connect_vnet.name}"
  address_prefix       = "10.2.1.0/24"
}

Then we need another smaller Subnet 10.2.2.0/26 that will be used by the gateway and is required to be named GatewaySubnet.

resource "azurerm_subnet" "oci_subnet_gw" {
  name                 = "GatewaySubnet"
  resource_group_name  = "${azurerm_resource_group.oci_connect.name}"
  virtual_network_name = "${azurerm_virtual_network.oci_connect_vnet.name}"
  address_prefix       = "10.2.2.0/26"
}

Now we can create the Virtual Network Gateway that will be connected to the ExpressRoute circuit. For easier access later on we will first allocate a public IP address and associate it with the VNG.

resource "azurerm_public_ip" "oci_connect_vng_ip" {
  name                = "oci-connect-vng-ip"
  location            = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"
  allocation_method   = "Dynamic"
}

Note that the Virtual Network Gateway is created with type ExpressRoute having BGP enabled.

resource "azurerm_virtual_network_gateway" "oci_connect_vng" {
  name                = "oci-connect-vng"
  location            = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"
  type                = "ExpressRoute"
  enable_bgp          = true
  sku                 = "Standard"

  ip_configuration {
    private_ip_address_allocation = "Dynamic"
    subnet_id                     = "${azurerm_subnet.oci_subnet_gw.id}"
    public_ip_address_id          = "${azurerm_public_ip.oci_connect_vng_ip.id}"
  }
}

As you can see in the ip_configuration block the gateway will reside in the gateway subnet created before.

Next we can create the ExpressRoute circuit that the VNG will be connected to. Note that currently you will need to set service_provider_name to Oracle Cloud FastConnect with the peering_location you plan to use, here it is Washington DC. To avoid paying for egress traffic from Azure to OCI you should use the Local tier in your sku configuration.

resource "azurerm_express_route_circuit" "oci_connect_erc" {
  name                  = "oci-connect-expressroute"
  resource_group_name   = "${azurerm_resource_group.oci_connect.name}"
  location              = "${azurerm_resource_group.oci_connect.location}"
  service_provider_name = "Oracle Cloud FastConnect"
  peering_location      = "Washington DC"
  bandwidth_in_mbps     = 50

  sku {
    tier   = "Local"
    family = "MeteredData"
  }

  allow_classic_operations = false
}

For our testing configuration I will use the minimal bandwidth of 50 Mbps and a metered data SKU.

Now we need to connect the VNG to the ExpressRoute circuit created before.

resource "azurerm_virtual_network_gateway_connection" "oci_conn_vng_gw" {
  name                = "oci-connect-vng-gw"
  location            = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"

  type                       = "ExpressRoute"
  virtual_network_gateway_id = "${azurerm_virtual_network_gateway.oci_connect_vng.id}"
  express_route_circuit_id   = "${azurerm_express_route_circuit.oci_connect_erc.id}"
}

Nearly done on the Azure side – we only need to set up a route that sends traffic for the OCI VCN 10.1.0.0/16 to our VNG.

resource "azurerm_route_table" "oci_connect_route_table" {
  name                = "oci-connect-route-table"
  location            = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"
}

resource "azurerm_route" "oci_connect_route" {
  name                = "oci-connect-route"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"
  route_table_name    = "${azurerm_route_table.oci_connect_route_table.name}"
  address_prefix      = "10.1.0.0/16"
  next_hop_type       = "VirtualNetworkGateway"
}

resource "azurerm_subnet_route_table_association" "oci_connect_route_subnet_association" {
  subnet_id      = "${azurerm_subnet.oci_connect_subnet.id}"
  route_table_id = "${azurerm_route_table.oci_connect_route_table.id}"
}

The last step in configuring the Azure network is creating a new Network Security Group that allows inbound traffic from the VCN on OCI.

resource "azurerm_network_security_group" "oci_connect_sg" {
  name                = "oci-connect-securitygroup"
  location            = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"

  security_rule {
    name                       = "InboundAllOCI"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "10.1.0.0/16"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "OutboundAll"
    priority                   = 110
    direction                  = "Outbound"
    access                     = "Allow"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

OCI

Now that we are done with the Azure configuration we can start with the OCI counterpart. Just as before we will create a new VCN 10.1.0.0/16 containing a subnet 10.1.1.0/24.

resource "oci_core_virtual_network" "az_connect_vcn" {
  cidr_block     = "10.1.0.0/16"
  dns_label      = "azconnectvcn"
  compartment_id = "${var.oci_compartment_ocid}"
  display_name   = "az-connect-vcn"
}

resource "oci_core_subnet" "az_connect_subnet" {
  cidr_block        = "10.1.1.0/24"
  compartment_id    = "${var.oci_compartment_ocid}"
  vcn_id            = "${oci_core_virtual_network.az_connect_vcn.id}"
  display_name      = "az-connect-subnet"
  security_list_ids = ["${oci_core_security_list.az_conn_security_list.id}"]
}

Note that in the configuration of the subnet a security list has been added. We will create this security list later.

Again we need a gateway for the traffic that is to be routed to Azure. In OCI this is a Dynamic Routing Gateway (DRG), which is attached to the VCN we just created.

resource "oci_core_drg" "az_connect_drg" {
  compartment_id = "${var.oci_compartment_ocid}"
  display_name   = "az-connect-drg"
}

resource "oci_core_drg_attachment" "az_conn_drg_attachment" {
  drg_id       = "${oci_core_drg.az_connect_drg.id}"
  vcn_id       = "${oci_core_virtual_network.az_connect_vcn.id}"
  display_name = "az-connect-drg-attachment"
}

Now we can create the FastConnect circuit. This needs a bit more explanation. In the first part we state that we will use a virtual circuit of type PRIVATE with minimal bandwidth for testing – unfortunately the minimum here is 1 Gbps.

resource "oci_core_virtual_circuit" "az_connect_virtual_circuit" {
  display_name         = "az-connect-virtual-circuit"
  compartment_id       = "${var.oci_compartment_ocid}"
  gateway_id           = "${oci_core_drg.az_connect_drg.id}"
  type                 = "PRIVATE"
  bandwidth_shape_name = "1 Gbps"

In the second part we add the information about the provider to connect to. For Azure there is a provider OCID we need to use; the required service key can be queried from the ExpressRoute circuit we created before. (That's why we started with Azure.)

  # provider service id for azure (asn 12076)
  provider_service_id       = "ocid1.providerservice.oc1.iad.aaaaaaaamdyta753fb6tshj3p2g5zezjwfoki5l46jcaaikxt3hszboiag4q"
  provider_service_key_name = "${azurerm_express_route_circuit.oci_connect_erc.service_key}"

As more regions keep getting added, you may well want to use a different region than Ashburn/Washington DC for your interconnect. To find the OCID for the Azure provider_service_id you can query all FastConnect providers in your region using the OCI CLI.

oci network fast-connect-provider-service list --all --region your_region --compartment-id your_compartment_ocid
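To pick out just the Azure entry from the JSON output you can add a small filter, for example with jq – a sketch that assumes the entry is listed under the provider name "Microsoft Azure" and that the CLI returns its usual kebab-case keys:

oci network fast-connect-provider-service list --all --region your_region --compartment-id your_compartment_ocid \
  | jq -r '.data[] | select(."provider-name" == "Microsoft Azure") | .id'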

And finally we need to define the BGP peering IP addresses. The OCI-Azure interconnect has redundant lines, so we need two sets of CIDR blocks, i.e. four addresses in total. We use CIDR blocks that are not part of the address spaces used before (10.1.0.0/16 and 10.2.0.0/16).

  cross_connect_mappings {
    oracle_bgp_peering_ip   = "10.99.0.201/30"
    customer_bgp_peering_ip = "10.99.0.202/30"
  }

  cross_connect_mappings {
    oracle_bgp_peering_ip   = "10.99.0.205/30"
    customer_bgp_peering_ip = "10.99.0.206/30"
  }
}

To access the resources residing in our subnet 10.1.1.0/24 not only from Azure but also from the public internet, we need an Internet Gateway attached to the VCN.

resource "oci_core_internet_gateway" "oci_test_igw" {
  display_name   = "oci-test-internet-gateway"
  compartment_id = "${var.oci_compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.az_connect_vcn.id}"
}

Again, we need to add routing for traffic going to the Azure VNET 10.2.0.0/16 and to the public internet (0.0.0.0/0).

resource "oci_core_route_table" "az_test_route_table" {
  display_name   = "az-test-route-table"
  compartment_id = "${var.oci_compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.az_connect_vcn.id}"

  route_rules {
    network_entity_id = "${oci_core_internet_gateway.oci_test_igw.id}"
    destination       = "0.0.0.0/0"
  }

  route_rules {
    network_entity_id = "${oci_core_drg.az_connect_drg.id}"
    destination       = "10.2.0.0/16"
  }
}

resource "oci_core_route_table_attachment" "az_connect_route_table_attachment" {
  subnet_id      = "${oci_core_subnet.az_connect_subnet.id}"
  route_table_id = "${oci_core_route_table.az_test_route_table.id}"
}

And finally we create a Security List to be used for subnet 10.1.1.0/24. It allows incoming ICMP pings and SSH connections from the Azure VNET and allows all outgoing traffic.

resource "oci_core_security_list" "az_conn_security_list" {
  compartment_id = "${var.oci_compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.az_connect_vcn.id}"
  display_name   = "az-connect-security-list"

  ingress_security_rules {
    source   = "10.2.0.0/16"
    protocol = "1"
  }

  ingress_security_rules {
    source   = "10.2.0.0/16"
    protocol = "6"
    tcp_options {
      min = "22"
      max = "22"
    }
  }

  egress_security_rules {
    destination = "10.2.0.0/16"
    protocol    = "all"
  }

Then we do the same for the public internet (0.0.0.0/0).

  ingress_security_rules {
    source   = "0.0.0.0/0"
    protocol = "1"
  }

  ingress_security_rules {
    source   = "0.0.0.0/0"
    protocol = "6"
    tcp_options {
      min = "22"
      max = "22"
    }
  }

  egress_security_rules {
    destination = "0.0.0.0/0"
    protocol    = "all"
  }
}

Now we are done with the setup. If you run this with Terraform you will get a working connection between the subnets on OCI and Azure. Note that it might take some time for Terraform to finish, as we are provisioning dedicated circuits here…
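For reference, the usual Terraform workflow applies here as well:

terraform init     # downloads the oci and azurerm providers
terraform plan     # review the resources that will be created
terraform apply    # create the interconnect, networks and gateways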

Testing the setup

For testing we can deploy VMs in the subnets 10.1.1.0/24 (OCI) and 10.2.1.0/24 (Azure) and check if we can connect from one cloud to the other.

This is what the environment should look like once we are done creating and connecting our testing VMs.

OCI

For OCI we can use some variables that define the number of instances to create and which image to use. In the example we will use Oracle Linux.

variable "oci_test_image" {
# oel 7.6
  default = "ocid1.image.oc1.iad.aaaaaaaaj6pcmnh6y3hdi3ibyxhhflvp3mj2qad4nspojrnxc6pzgn2w3k5q"
}
variable "oci_test_instances_count" {
  default = 2
}

We then create 2 minimal instances using the public SSH key provided in the environment variable.

data "oci_identity_availability_domains" "az_connect_adcomp" {
  compartment_id = "${var.oci_compartment_ocid}"
}

resource "oci_core_instance" "az_connect_test_instance" {
  count = "${var.oci_test_instances_count}" 
  availability_domain = "${lookup(data.oci_identity_availability_domains.az_connect_adcomp.availability_domains[0],"name")}"
  compartment_id      = "${var.oci_compartment_ocid}"
  shape               = "VM.Standard2.1"

  create_vnic_details {
    subnet_id              = "${oci_core_subnet.az_connect_subnet.id}"
    skip_source_dest_check = true
  }

  display_name = "az-connect-test-instance-${count.index}"

  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }
  source_details {
    source_id   = "${var.oci_test_image}"
    source_type = "image"
  }
  preserve_boot_volume = false
}

Once the VMs are deployed we output both public and private IPs so we can connect to them later.

output "oci_vm_private_ip" {
  value = ["${oci_core_instance.az_connect_test_instance.*.private_ip}"]
}
output "oci_vm_public_ip" {
  value = ["${oci_core_instance.az_connect_test_instance.*.public_ip}"]
}

Azure

For the Azure VMs we again define the number of instances to create.

variable "arm_test_instances_count" {
  default = 1
}

Setting up Azure instances is a bit more complicated. First we need to create virtual NICs which have public IPs assigned.

resource "azurerm_public_ip" "oci_test_ip" {
  count = "${var.arm_test_instances_count}"
  name                = "oci-test-ip-${count.index}"
  location            = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"
  allocation_method   = "Dynamic"
}

resource "azurerm_network_interface" "oci_testvm_nic" {
  count = "${var.arm_test_instances_count}"
  name                = "oci-testvm-nic-${count.index}"
  location            = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name = "${azurerm_resource_group.oci_connect.name}"

  ip_configuration {
    name                          = "oci-testvm-nic-config"
    subnet_id                     = "${azurerm_subnet.oci_connect_subnet.id}"
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = "${element(azurerm_public_ip.oci_test_ip.*.id, count.index)}"
  }
}

After that the VM instances can be added. Here we will use Ubuntu 18.04 with password authentication enabled. You will probably want to change the password from “Welcome-1234” to something more appropriate.

resource "azurerm_virtual_machine" "oci_testvm" {
  count = "${var.arm_test_instances_count}"
  name                  = "oci-testvm-${count.index}"
  location              = "${azurerm_resource_group.oci_connect.location}"
  resource_group_name   = "${azurerm_resource_group.oci_connect.name}"
  network_interface_ids = ["${element(azurerm_network_interface.oci_testvm_nic.*.id, count.index)}"]
  vm_size               = "Standard_DS1_v2"

  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk-${count.index}"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "oci-test-${count.index}"
    admin_username = "azure"
    admin_password = "Welcome-1234"
  }

  os_profile_linux_config {
    disable_password_authentication = false

    ssh_keys {
      key_data = "${var.ssh_public_key}"
      path     = "/home/azure/.ssh/authorized_keys"
    }
  }
}

And again, we will show public and private IPs for the VMs created.

output "azure_vm_private_ip" {
  value = ["${azurerm_network_interface.oci_testvm_nic.*.private_ip_address}"]
}

output "azure_vm_public_ip" {
  value = ["${azurerm_public_ip.oci_test_ip.*.ip_address}"]
}

Connecting

After running Terraform with these configurations you should have the public and private IPs of the VMs on OCI and on Azure. Now you can try to connect to them using SSH – for OCI the username is opc, for Azure it is azure.
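If you need to look the addresses up again later, terraform output prints them using the output names defined above:

terraform output oci_vm_public_ip
terraform output azure_vm_public_ip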

OCI: ssh opc@xxx.xxx.xxx.xxx

Azure: ssh azure@xxx.xxx.xxx.xxx

Once logged in you can try to ping the VMs in the opposite cloud using both public and private IPs.
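For example, from an OCI test VM a ping to the private IP of one of the Azure test VMs should go across the interconnect (the address below is just a placeholder – use the private IP shown in the terraform output):

ping -c 4 10.2.1.4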

If this works, you are ready to do some real cloud work with a high-performance, enterprise-grade multi-cloud environment.

