What you should know: I currently work for Orbit Cloud Solutions as a Cloud Advisor, but any posts on this blog reflect my own views and opinions only.

For me, databases are the boring but necessary part of a system architecture. They enable modern, complex application landscapes with highly sophisticated data models, but require quite a lot of care to keep them fast and reliable. At some point the database will be a commodity just like the operating system or most server components – but we are not there yet. So there is a good case for a high-performance, reliable and secure database that does not require much pampering from your local DBA. One approach to this is the Oracle Autonomous Database, which provides a highly automated and scalable database environment in the cloud.

One of my pet topics is the setup of multi-cloud infrastructure, with a past focus on the integration of Microsoft Azure and Oracle Cloud Infrastructure (OCI). So I was wondering how to get a true hybrid setup up and running, with an application on the Azure stack using an Autonomous Database on OCI, without having to route traffic through public or untrusted networks.

Azure Application using Autonomous Database over secured connection

This blog post will show the surprisingly easy setup of a connection from Azure to OCI database services using transit routing. After that, I will use SQL*Plus to connect to the database from Azure. I will probably explore the performance and latency of such a setup in a later post.

Automating the Setup with Terraform

I admit that I really do not like guides showing you how to click through a GUI to get a task done. Since what I actually want in the real world is an automated setup, I always prefer guides showing how to achieve the task programmatically. That's why I always try to provide code with my guides to get the job done.

This time again I created some simple Terraform and shell scripts on Github. Grab them there for your own experimenting.

There is also rather good Oracle documentation on private access to services using transit routing, if you prefer the GUI-clicking way.

Setting up the Interconnect

First, the basic interconnect between Azure and OCI using ExpressRoute and FastConnect has to be created. I already showed this in a past post that you can take a look at for the details. Basically, we end up with one VNet on Azure and one VCN on OCI that are connected via redundant, dedicated, low-latency connections – ideal for setting up a multi-cloud environment. This time I used the new Amsterdam region interconnect.

Part of this setup is changing the default route table of the VCN in OCI so that traffic destined for the CIDR of the Azure VNet is sent to a Dynamic Routing Gateway (DRG). The DRG then takes care of sending the traffic via FastConnect to Azure. The route table will look like this:

Network Entity    | Destination
DRG               | 10.1.0.0/16 (Azure VNet)
Internet Gateway  | 0.0.0.0/0
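
In Terraform, those default route rules can be expressed roughly like this. This is a sketch only: the variable holding the default route table ID and the internet gateway resource label are my assumptions, the actual scripts are on Github.

```hcl
# Sketch: manage the VCN's default route table for the interconnect
resource "oci_core_default_route_table" "interconnect_route_table" {
  manage_default_resource_id = "${var.oci_vcn_default_route_table_id}" # assumed variable

  # Traffic for the Azure VNet goes to the DRG (and on via FastConnect)
  route_rules {
    network_entity_id = "${oci_core_drg.service_drg.id}"
    destination       = "${var.arm_vnet_cidr}"
  }

  # Everything else leaves via the Internet Gateway
  route_rules {
    network_entity_id = "${oci_core_internet_gateway.igw.id}" # assumed resource
    destination       = "0.0.0.0/0"
  }
}
```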

Setting up Transit Routing

The interesting part now is to establish routing that lets a client on Azure access an Autonomous Database (or in fact any other OCI service) without leaving the Azure and OCI networks. This increases the performance, security and reliability of the connection.


To achieve this, a feature called transit routing is used. What it basically does is route traffic heading for an OCI service from the DRG at the edge of your OCI VCN to the Service Gateway attached to that VCN.

Connection from Azure to OCI Service Network via Transit Routing

To get this working, all that needs to be done is setting up correct routing in that VCN.

Service Gateway

The first thing we need to add is a Service Gateway that allows clients in the OCI VCN to access OCI services in the current region. For transit routing, we need to tell OCI to route traffic for the Azure VNet we set up with the interconnect to the DRG. The Route Table attached to the Service Gateway will look something like this:

Network Entity    | Destination
DRG               | 10.1.0.0/16 (Azure VNet)

In Terraform, a rather generic solution is used. First, the services to be attached to the Service Gateway are looked up. Then the Route Table is created as shown above. Finally, a Service Gateway with this Route Table attached is created.

data "oci_core_services" "transit_services" {
  filter {
    name   = "name"
    values = ["All .* Services In Oracle Services Network"]
    regex  = true
  }
}

output "services" {
  value = ["${data.oci_core_services.transit_services.services}"]
}

resource "oci_core_route_table" "service_transit_route_table" {
  display_name   = "service-transit-route-table"
  compartment_id = "${var.oci_compartment_ocid}"
  vcn_id         = "${var.oci_vcn_id}"

  route_rules {
    network_entity_id = "${oci_core_drg.service_drg.id}"
    destination       = "${var.arm_vnet_cidr}"
  }
}

resource "oci_core_service_gateway" "transit_service_gateway" {
  compartment_id = "${var.oci_compartment_ocid}"
  vcn_id         = "${var.oci_vcn_id}"
  display_name   = "transitServiceGateway"
  route_table_id = "${oci_core_route_table.service_transit_route_table.id}"

  services {
    service_id = "${lookup(data.oci_core_services.transit_services.services[0], "id")}"
  }
}

Dynamic Routing Gateway

The next thing we need is a Dynamic Routing Gateway that handles traffic for the interconnect with Azure. This works perfectly fine for clients residing in the VCN the DRG is attached to. For traffic to move through the VCN to OCI services, some additional routing is required that targets the CIDR blocks used by OCI services. The Route Table attached to the Dynamic Routing Gateway (DRG) will then look like this:

Network Entity   | Destination Type   | Destination
Service Gateway  | SERVICE_CIDR_BLOCK | All <region> Services in Oracle Services Network

In Terraform, the setup is straightforward too: create the route table, create the DRG and then attach the DRG to the VCN. The route table is attached to the DRG in that last step.

resource "oci_core_route_table" "service_gw_route_table" {
  display_name   = "service-gw-route-table"
  compartment_id = "${var.oci_compartment_ocid}"
  vcn_id         = "${var.oci_vcn_id}"

  route_rules {
    destination       = "${lookup(data.oci_core_services.transit_services.services[0], "cidr_block")}"
    destination_type  = "SERVICE_CIDR_BLOCK"
    network_entity_id = "${oci_core_service_gateway.transit_service_gateway.id}"
  }
}

resource "oci_core_drg" "service_drg" {
  compartment_id = "${var.oci_compartment_ocid}"
  display_name   = "service-drg"
}

resource "oci_core_drg_attachment" "service_drg_attachment" {
  drg_id         = "${oci_core_drg.service_drg.id}"
  vcn_id         = "${var.oci_vcn_id}"
  display_name   = "service-drg-attachment"
  route_table_id = "${oci_core_route_table.service_gw_route_table.id}"
}

When everything is set up correctly, the DRG will publish the public IPs of the OCI services via BGP to Azure. There, these public IPs are added to the default routes of your VNet, directing all traffic for those services to the ExpressRoute/FastConnect interconnect instead of the public internet. You can check whether this is really working by taking a look at the route table of the ExpressRoute connection.
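
The Azure CLI can dump that route table with something along these lines; the resource group and circuit names are placeholders you need to fill in:

```shell
# List routes learned on the private peering of the ExpressRoute circuit --
# the OCI service CIDRs should show up here
az network express-route list-route-tables \
  --resource-group <rg> \
  --name <circuit> \
  --peering-name AzurePrivatePeering \
  --path primary
```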

Creating the Test Environment

Client Virtual Machines

As testing clients, CentOS 7 images that support cloud-init are used on Azure. cloud-init automatically installs the Oracle Instant Client packages, so everything is in place for running SQL*Plus or an application connecting to the database. The latest version of swingbench is downloaded as well, in case you feel like doing some performance and load testing.
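
For reference, the cloud-init part boils down to a user-data script along these lines. Treat this as a sketch: the repo and package names are assumptions, so check yum.oracle.com for the current Instant Client release before using it.

```shell
#!/bin/bash
# cloud-init user-data sketch: install Oracle Instant Client + SQL*Plus
# on CentOS 7 (repo and package names are assumptions, adjust to the
# release you want)
yum install -y oracle-release-el7
yum install -y oracle-instantclient19.3-basic oracle-instantclient19.3-sqlplus unzip
```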

Autonomous Database

The setup of the database is pretty straightforward and should cause no trouble. The only thing you need to do is add passwords for the database admin and the database wallet (which we will download later) to resources/set_env.sh. For generating the passwords, I usually use apg -m 12 -n 1 -M SCNL.

The next thing you need to do is come up with a database name of your own. In the past I had some name clashes due to the shared environments used, so not picking a unique name might cause trouble later.

You might also want to change some of the sizing, but for a first try I recommend sticking to the minimum (non-free) size.

export TF_VAR_oci_atp_admin_password="*****"
export ATP_WALLET_PASS="*****"

export TF_VAR_oci_atp_db_name="demodb"
export TF_VAR_oci_atp_db_displayname="demo-db"

export TF_VAR_oci_atp_db_workload="OLTP"
export TF_VAR_oci_atp_db_cores="1"
export TF_VAR_oci_atp_db_storage_tb="1"
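
These variables then feed an oci_database_autonomous_database resource. Stripped down, it looks something like this (the resource label demo_adb is my own, not from the scripts):

```hcl
# Minimal ATP instance fed by the TF_VAR_* environment variables above
resource "oci_database_autonomous_database" "demo_adb" {
  compartment_id           = "${var.oci_compartment_ocid}"
  db_name                  = "${var.oci_atp_db_name}"
  display_name             = "${var.oci_atp_db_displayname}"
  db_workload              = "${var.oci_atp_db_workload}"
  admin_password           = "${var.oci_atp_admin_password}"
  cpu_core_count           = "${var.oci_atp_db_cores}"
  data_storage_size_in_tbs = "${var.oci_atp_db_storage_tb}"
}
```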

Checking the Setup from SQL*Plus

To test whether our setup actually works, we need to do a few steps in the shell. In about two minutes, you will have the first SQL query running.

Here are the steps in more detail.

Getting the credentials

To connect to the database from your Azure VM you will need the wallet, a zip file containing configuration files and cryptographic keys. It can be downloaded from the OCI GUI, but also using the OCI CLI. If you used the Terraform scripts I put on Github, a file containing some information from the setup process will have been created, and some information will already be set in environment variables. That makes it easy to fetch the wallet zip without further interaction, using the script get_wallet.sh I put in the scripts folder.
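
For reference, get_wallet.sh essentially wraps an OCI CLI call like this (the environment variable names here are assumptions standing in for values from the setup output):

```shell
# Download the wallet zip for the Autonomous Database via OCI CLI
oci db autonomous-database generate-wallet \
  --autonomous-database-id "$ATP_OCID" \
  --password "$ATP_WALLET_PASS" \
  --file wallet.zip
```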

cd ../scripts
sh get_wallet.sh

This will download the wallet and output some information about the next steps. You then need to transfer the wallet to your Azure test VM and edit it there.

scp wallet.zip azure@xx.xx.xx.xx:wallet.zip
ssh azure@xx.xx.xx.xx
unzip wallet.zip
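
If you prefer to script the upcoming sqlnet.ora edit, a sed one-liner does the trick (wallet directory assumed to be /home/azure, as used in this walkthrough):

```shell
# Replace the "?/network/admin" placeholder in the wallet's sqlnet.ora
# with the directory it was unzipped to (here: /home/azure)
sed -i 's|?/network/admin|/home/azure|g' sqlnet.ora
```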

Edit the sqlnet.ora file and replace ?/network/admin with /home/azure. Then all you need to do is set the environment variable TNS_ADMIN to the directory containing tnsnames.ora.

export TNS_ADMIN=/home/azure

For persisting this, add this line to the .bashrc file as well.
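
That is a one-liner as well:

```shell
# Persist TNS_ADMIN for future logins
echo 'export TNS_ADMIN=/home/azure' >> ~/.bashrc
```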

Connecting to the Database

Now we have got to the point of accessing the database from an Azure VM. As SQL*Plus should already be set up, you can simply log in to your DB with the following line, adjusted for your password and database name:

sqlplus admin/YOUR_ADMIN_PASSWORD@demodb_tp

You should see the prompt waiting for your input almost instantly. Try some query to verify that there really is a DB available.

select 1 from dual;

That's it. Congratulations, you have a running low-latency, secure database connection for your Azure applications to use!

If you are tinkering around with multi-cloud setups as well and/or have some feedback, I'd be glad to hear from you!

