
Azure – Oracle Cloud (OCI) Interconnect Network Latency Shootout

What you should know: I am currently working for Orbit Cloud Solutions as a Cloud Advisor, but any posts on this blog reflect only my own views and opinions.

The interconnect between Microsoft Azure and Oracle Cloud (OCI) seems to be a hot topic lately. I often hear of the scenario of running an Oracle database on OCI and applications on Azure, which is interesting for companies moving to the cloud. One of the major concerns, however, is that the latency between Azure and OCI might be too high for their use cases. Since there are no performance SLAs or official testing results, it is up to the user to either believe the numbers given or run a test of their own.

To make testing the network connectivity easier, I created a couple of Terraform scripts that set up an environment on Azure and OCI with tooling useful for this purpose. In this post I will show you what I did and what I have found out so far.

You can find the Terraform source code on GitHub. Use it to set up your own environment for testing the connectivity. Just keep in mind that it is demo code and not intended for production use.

Apart from the general disclaimer, I really need to emphasize that the results shown here are one-shot statistics generated by me in a single test run. They are in no way official statistics by either Microsoft or Oracle.

Environment

For this test I created resources in OCI and Azure in the London (UK South) region. If you try this on your own, the region can easily be changed in my Terraform scripts.

The image below shows the setup in OCI (red) and Azure (blue). In each cloud two virtual networks were created: one for the general test and one for a VPN set up for comparison. Virtual machines are deployed in a client-server latency testing setup. Finally, three types of connections between the clouds are established:

- the interconnect, using Azure ExpressRoute and OCI FastConnect (measured with and without ExpressRoute FastPath)
- a connection over the public internet
- a site-to-site VPN

Methodology

When testing the network in cloud infrastructure, one needs to understand that the providers apply optimizations to increase performance. These optimizations might not apply to all protocols equally. Therefore not only classic ICMP latency ("ping") must be considered, but TCP and UDP latency as well.

For measuring ICMP latency I stick to good old-fashioned ping, while for TCP and UDP qperf is used. The latter works with a client-server setup, so it needs to be installed on at least two machines.
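
To give an idea of what a single measurement looks like, here is a minimal Python sketch that wraps both tools (the target address is a placeholder for the server VM's private IP; it assumes Linux ping output and a qperf server already listening on the destination):

```python
import re
import subprocess

TARGET = "10.1.0.4"  # placeholder: private IP of the qperf/ping server VM

def icmp_latency(host: str, count: int = 10) -> float:
    """Average ICMP round-trip time in ms, taken from the ping summary line."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    # Linux ping prints: rtt min/avg/max/mdev = 0.261/0.303/0.404/0.044 ms
    return float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))

def qperf_latency(host: str, test: str) -> float:
    """Latency in ms as reported by qperf; test is 'tcp_lat' or 'udp_lat'.
    Assumes a qperf server is already running on the target host."""
    out = subprocess.run(["qperf", host, test],
                         capture_output=True, text=True, check=True).stdout
    # qperf prints e.g. "latency = 1.26 ms" (or "us" for sub-millisecond values)
    value, unit = re.search(r"latency\s*=\s*([\d.]+)\s*(us|ms)", out).groups()
    return float(value) / 1000 if unit == "us" else float(value)

print("icmp:", icmp_latency(TARGET))
print("tcp: ", qperf_latency(TARGET, "tcp_lat"))
print("udp: ", qperf_latency(TARGET, "udp_lat"))
```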

To collect the statistics over a period of time, cron jobs are installed on the VMs that execute ping and qperf against their designated destinations. Everything is written to log files, which in the end can easily be transformed into CSV for further analysis.
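
Converting the collected logs into CSV takes only a few lines. The following sketch assumes a hypothetical log format of one whitespace-separated measurement per line (timestamp, route, test, latency); the actual format produced by my scripts may differ:

```python
import csv

# Hypothetical log format, one measurement per line:
#   2021-03-01T12:00:01 azure-oci-interconnect tcp_lat 1.26
with open("latency.log") as log, open("latency.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["timestamp", "route", "test", "latency_ms"])
    for line in log:
        fields = line.split()
        if len(fields) == 4:  # skip empty or malformed lines
            writer.writerow(fields)
```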

With this setup, data is collected for several routes:

- Azure - Azure within a single virtual network (baseline)
- Azure - OCI via the interconnect, with and without FastPath
- Azure - OCI via the public internet
- Azure - OCI via the site-to-site VPN
- OCI - OCI within a single virtual network (baseline)

Results

The results show a consistent pattern of latencies for the different configurations.

The following tables show the results for the different configurations. (For the statistics-minded: N was actually too low for a meaningful variance estimate, so don't take the standard deviation too seriously.)
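
The summary statistics themselves are straightforward to reproduce. A small sketch, assuming the CSV layout from the previous example (route, test, latency in ms):

```python
import csv
import statistics
from collections import defaultdict

# Group latency samples by (route, test), e.g. ("azure-oci-interconnect", "tcp_lat")
samples = defaultdict(list)
with open("latency.csv") as f:
    for row in csv.DictReader(f):
        samples[(row["route"], row["test"])].append(float(row["latency_ms"]))

for (route, test), values in sorted(samples.items()):
    print(f"{route} {test}: "
          f"avg={statistics.mean(values):.2f}  "
          f"stdev={statistics.stdev(values):.2f}  "
          f"min={min(values):.2f}  max={max(values):.2f}")
```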

ICMP latency in ms, measured with ping.

Connection                            | Avg  | Std.dev | Min  | Max
Azure - Azure internal                | 0.97 | 0.06    | 0.87 | 1.09
Azure - OCI via Interconnect          | 2.91 | 0.44    | 2.34 | 3.75
Azure - OCI via Interconnect FastPath | 1.79 | 0.21    | 1.65 | 2.34
Azure - OCI via public internet       | 3.24 | 0.86    | 2.35 | 7.22
Azure - OCI via VPN                   | 5.84 | 1.04    | 4.64 | 7.02
OCI - OCI internal                    | 0.30 | 0.04    | 0.26 | 0.40

TCP latency in ms, measured with qperf.

Connection                            | Avg  | Std.dev | Min  | Max
Azure - Azure internal                | 0.24 | 0.06    | 0.18 | 0.34
Azure - OCI via Interconnect          | 1.26 | 0.14    | 1.08 | 1.49
Azure - OCI via Interconnect FastPath | 0.66 | 0.06    | 0.58 | 0.74
Azure - OCI via public internet       | 1.40 | 0.18    | 1.11 | 1.84
Azure - OCI via VPN                   | 2.46 | 0.18    | 2.25 | 2.74
OCI - OCI internal                    | 0.08 | 0.01    | 0.07 | 0.11


UDP latency in ms, measured with qperf.

Connection                            | Avg  | Std.dev | Min  | Max
Azure - Azure internal                | 0.23 | 0.06    | 0.17 | 0.33
Azure - OCI via Interconnect          | 1.28 | 0.14    | 1.02 | 1.48
Azure - OCI via Interconnect FastPath | 0.63 | 0.06    | 0.57 | 0.71
Azure - OCI via public internet       | 1.49 | 0.39    | 1.02 | 2.76
Azure - OCI via VPN                   | 2.53 | 0.09    | 2.39 | 2.61
OCI - OCI internal                    | 0.08 | 0.01    | 0.07 | 0.11

Conclusions

If you are thinking about a multi-cloud setup with strong interdependencies, always consider opting for an ExpressRoute setup on Azure that supports FastPath. All measurements show a considerable performance gain (in terms of latency) with this option. While it still cannot beat the performance within a single network, you get numbers comparable to a well-configured local network.

The basic interconnect without FastPath gives you only a minor performance gain over the public internet, although the redundant dedicated lines of ExpressRoute and FastConnect will increase the resiliency of your network, the consistency of its performance, and potentially its security. Please note that the rather good performance of the public internet connectivity is probably due to the Azure and OCI datacenters being geographically located very close to each other in most cases.

Using site-to-site VPN connections leads to a considerable increase in latency. If secure connections between datacenters are a requirement, consider evaluating a setup using FastConnect and ExpressRoute instead.


As already mentioned above, these results are just a single observation from the London (UK South) region. For your specific requirements or region the results may vary, though I believe the general observations will remain valid. Using the scripts I provide, you can easily set up an interconnect and get your own numbers. The trial credits for OCI and Azure will be sufficient for this.
