From Browser Wars to Cloud Wars: Avoid Getting Locked In

How are today's cloud wars similar to the browser wars of 1995-2000? And how are developers affected by the fierce competition between cloud providers? Read on about the effects of vendor lock-in and possible solutions to these challenges during software deployment and migration to new providers.

Browser wars and cloud wars

In the late 1990s, the war between Netscape Navigator and Internet Explorer led web content developers to focus exclusively on one of the two browsers. As a result, most web pages displayed the tagline "best viewed with <browser name>". Later on, in an attempt to acquire new user segments, web pages came in two separate versions, one for each browser.

Nowadays we are in the middle of a new war between cloud vendors which, much like the browser wars 20 years ago, is leading to vendor-specific services and therefore to additional work when the same software solution has to be deployed on multiple clouds.

Software delivery, deployment and orchestration during cloud wars

We will take a closer look at the components which need rewriting in such a situation, but before discussing clouds and providers, let’s see what software delivery is all about.

Software delivery is done through:

  • scripts delivered as such (copied and executed on the destination system)
  • applications in RPM, DEB, etc. format
  • containers (Dockerfile + packaged application)
  • virtual machines (QCOW2 image + packaged application, injected through cloud-init or Ansible / Chef / Puppet)
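As an illustration of the virtual machine variant, a minimal cloud-init user-data file might inject the application like this (the package name `myapp` is hypothetical, and a repository providing it is assumed to be configured):

```yaml
#cloud-config
# install the packaged application from a pre-configured repository
packages:
  - myapp
# start it on first boot
runcmd:
  - [ systemctl, enable, --now, myapp ]
```

Cloud-init runs this once, when the virtual machine boots for the first time.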

But delivery is only the first step. Next comes the deployment. Depending on the options above, the deployment can be:

  • App installation using RPM
  • Container run
  • Virtual machine creation

Cloud providers also offer a graphical interface for creating virtual machines; the original presentation showed two such interfaces, one of them OpenStack’s dashboard. (Screenshots omitted.)

And yet, except for testing, neither the graphical interface nor the command line should be used, especially on production systems, because the command sequence is not reproducible. This is where orchestration comes in.

Each RPM file contains shell scripts for pre-install, post-install and post-uninstall. This is the “orchestration” of a package installation.
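As a sketch, these hooks live in the package’s .spec file as scriptlets (the service and user names below are hypothetical):

```spec
%pre
# runs before the package files are installed
getent passwd myapp >/dev/null || useradd -r myapp

%post
# runs after the files are installed
systemctl enable myapp.service || :

%postun
# runs after the files are removed; $1 is 0 on a full uninstall
if [ "$1" -eq 0 ]; then userdel myapp || :; fi
```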

For Docker there is docker-compose.
For clouds there are provider-specific templates; two examples are CloudFormation for AWS and HEAT for OpenStack.
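For the container case, a minimal docker-compose file could look like the sketch below (the application image name is hypothetical):

```yaml
version: "3"
services:
  web:
    image: myapp:1.0        # hypothetical application image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up` then brings up both containers in the right order.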

What are the advantages of orchestration?

We have one or more text file(s) which can be:

  • Versioned
  • Stored in a repository (e.g. Git)
  • Run through a review process
  • Run for testing and then on production, with consistent results.

Going back to the cloud discussion, let’s see how software delivery happens.

There are several possibilities:

  • Function as a Service – where we must write the application code directly into the interface offered by the provider
  • Platform as a Service – where we also need to create the application packaging and the containers
  • Infrastructure as a Service – where we must take care of the entire virtual infrastructure: application code, packaging and deployment of virtual resources (networking, virtual machines, etc.)

Why in the cloud?

There are mainly three reasons and several scenarios for switching from one cloud provider to another:

  1. Pricing. We move from hardware to a public cloud for a lower price, then move to another public or private cloud when we realize that the estimated pricing structure does not match what we actually pay.
  2. Flexibility. We move from dedicated hardware to a private or public cloud to enjoy more flexibility in resource allocation and even auto-scaling.
  3. Performance. We move to a public cloud provider serving our geographical region, then switch to another provider offering localized edge computing, or to a private cloud, when we realize that latency is too high.

Vendor lock-in and related challenges

In any of these cases, provider lock-in issues occur. These issues can be related to:

  • The cost of migration. In order to switch to another provider or region, we must first create a similar resource, then move all the connections, and only then can we stop the old resource. This doubles costs during the migration period. Migration is therefore recommended during low-usage periods, so that, if auto-scaling is used, costs can be limited.
  • Provider-specific services. With a different provider, we may need to handle ourselves what was previously provided out of the box.
  • Orchestration templates. If we rely on vendor-specific templates, they may need rewriting when changing the provider.

We will further discuss each of them in more detail.

Locked into FaaS

FaaS looks like the obvious choice, with a minimum of cost and complexity. Take an Amazon example:

  • We write our own code: if it is a script, we type it directly into the graphical interface; if it is Java, we upload the jar.
  • The first 1 million requests/month and the first 3.2 million seconds of processing are free.
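In the script case, the code can be as small as a single handler function. A minimal sketch of an AWS Lambda-style Python handler (the `name` field of the event is a hypothetical input of our own application):

```python
import json

def lambda_handler(event, context):
    # AWS Lambda calls this function once per request; 'event' carries the input.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Uploaded as-is, Lambda invokes `lambda_handler` for every request; locally it is just a function we can call with a dict.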

It’s just that choosing FaaS depends on a few parameters.

Some negative:

  • AWS Lambda only supports a limited number of languages
  • AWS Lambda has certain limitations, such as no support for the ICMP protocol (you cannot ping a host)

Some positive:

Edge computing in general is needed for a large variety of applications such as:

  • 5G networks with ultra-low latency requirements. In the scenario where 2 cars need to communicate with each other to avoid an accident, latency is very important.
  • Surveillance cameras sending an extremely high volume of data. If we are only interested in breaking-and-entering situations, we must do the situation recognition early on, close to the camera (where transfer speed is high), and only send the data recorded after the initial recognition to another resource in the cloud in order to determine the details.

But back to the blockers: regardless of how the software is deployed, it can use services offered by the provider, such as a message queue or a relational DB.

Most of these services are based on freely available software, so we can set them up ourselves on virtual machines; if a clustered version is required, though, it needs individual configuration. Using the vendor’s service, we do not need to do any installation, updates or maintenance. However, when we switch to a different provider, we may find that an equivalent service is not offered.

In orchestration, using a provider-specific template is a definite lock-in, but can a third party come in as a savior?

Besides CloudFormation (Amazon) and HEAT (OpenStack), Terraform is the usual third-party choice for orchestration. It has not reached version 1.0 yet, but it supports a wide range of providers, it can drive external systems (DNSimple, etc.) and its language (HCL) is widely appreciated and has large community support. It does not convert templates into provider-specific orchestration, however; it uses the provider’s API to run requests. The objects in a Terraform template are not identical among providers – such as AWS and OpenStack, for example – so we cannot move an orchestration template from one provider to another without changes, but at least the HCL language remains the same.

Terraform – Virtual Machines

Openstack instance:

    resource "openstack_compute_instance_v2" "instance_1" {
      # image, flavor, network, etc.
    }

AWS instance:

    resource "aws_instance" "instance_1" {
      # AMI, instance type, etc.
    }

Another blocker is the fact that several quasi-identical templates must all be updated when a parameter changes. The solution in this case is the use of environment files: by defining a parameter in the environment, we can change how an instance is created without changing the orchestration template.

Orchestration parametrization

Production and test systems share the same layout, but use different flavors and other parameters.

Openstack HEAT

Environment file:

    parameters:
      environment_parameters: {
        flavor: MY_VALUE,
      }

Template (identical for all environments):

    type: OS::Nova::Server
    properties:
      flavor: { get_param: [ environment_parameters, flavor ] }

Amazon CloudFormation

Parameters (passed when the stack is created):

    "ParameterKey": "InstanceTypeParameter",
    "ParameterValue": "MY_VALUE"

Template (identical for all environments):

    Type: AWS::EC2::Instance
    Properties:
      InstanceType:
        Ref: InstanceTypeParameter

Similarly, by using conditions, we can change whether an instance is created without changing the orchestration template. In the following example the DB server is created only when the flavor is ‘prod’ (production).

Orchestration conditions

Openstack HEAT

Environment file:

    parameters:
      environment_parameters: {
        flavor: prod,
      }

Template:

    conditions:
      is_db: { equals: [ { get_param: [ environment_parameters, flavor ] }, prod ] }

    resources:
      db_server:
        type: OS::Nova::Server
        condition: is_db
        properties:
          flavor: { get_param: [ environment_parameters, flavor ] }

Amazon CloudFormation

Parameters:

    "ParameterKey": "Flavor",
    "ParameterValue": "prod"

Template:

    Conditions:
      IsDB: !Equals [ !Ref Flavor, prod ]

    Resources:
      DBServer:
        Type: "AWS::EC2::Instance"
        Condition: IsDB


So far we’ve seen parametrization in cloud-specific templates, but how about Terraform?

There is a wrapper, Terragrunt, which makes Terraform templates reusable: the environment file changes, while the template itself remains unchanged.


Environment file (terragrunt.hcl):

    terraform {
      source = "../main"
    }

    inputs = {
      instance_type = "t2.micro"
    }

Template in ../main (identical for all environments):

    variable "instance_type" {
      description = "Flavor"
    }

    resource "aws_instance" "web" {
      ami           = "..."
      instance_type = "${var.instance_type}"
    }

Final recommendations

  • Go for the simplest deployment variant, while being aware of its limitations

If there is a choice, the simplest option is preferable, because it lets us focus on the software that needs to be created instead of spending time on containerization, orchestration, etc.

  • Be aware of potential migration issues when provider-specific services are used

When services such as “DB as a service”, “MQ as a service”, etc. are used, they may not be available when we switch from one provider to another or to a different region of the same provider.

  • For IaaS orchestration, choose between 3rd party and provider-specific

For orchestration there are options beyond what the cloud provider offers. Their advantages and disadvantages largely depend on our specific needs; no solution clearly outperforms all others in every scenario.

  • Make templates generic via parameters and conditions

The use of parameters and conditions ensures that templates can be reused across several deployments.

Terraform conditions:

Inside the module:

    variable "create_node" {}

    resource "whatever" "example" {
      count = "${var.create_node ? 1 : 0}"
    }

Calling the module:

    module "inst1" {
      source      = "/mymodules/whatever"
      create_node = true
    }

Article based on the presentation delivered by Mihai Ionita – System Architect, at R Systems Tech.Talks – From Browser Wars to Cloud Wars – in Galati, June 19, 2019.

