
Server Virtualization

In the days of yore, business applications ran on dedicated Intel servers. Racks upon racks of servers churned away processing customer web sites, accounts receivable, banking and point of sale applications.

To handle catastrophic failure, you needed a generator and enough fuel to power the site for at least a week.

To handle regional risk, such as a power outage like the 2003 Toronto blackout, a second location in Western Canada or Quebec would be implemented. Every server had a replica that could be brought online via automatic network failover, and two separate data communications carriers would be used to carry data between the sites.

One day, a lightbulb went off in some boffin’s head: “Most of our servers run at 5-10% of capacity. What if we configured a server to run multiple instances of other servers? We could build it in such a way that each instance could be brought online with minimal delay. Just think: we could get 10 physical servers to do the work of 100. And since the sites are connected by a secure Internet link, the backup site could be located anywhere in the world at the lowest possible cost.”

As in many instances of “invention”, the concept is not new. In the 1970s, IBM developed the virtual machine operating system called VM. VM enabled the MVS and CMS operating systems and their associated applications to run on a single mainframe computer.

VMware GSX, hosted on Intel servers, was launched in 2001. One physical Intel server could be configured to run multiple instances of the Linux or Windows Server operating systems. This VM capability simplified setting up application development, test, release and production environments. Need 30 servers? You could spin up a dedicated test server to handle functional tests for each application in the release cycle, and do the same for an application undergoing user acceptance testing at the same time. To handle test data, each virtual server would be configured to access specific instances of test data.

There was still a problem to be solved: lowering cost.

IBM supports millions of servers around the globe for major organizations, and charges $25K per year to maintain a single physical server.

The next logical step is to move the servers into a cloud environment where one physical server is shared by multiple virtual servers. The application portfolio likely maps into small, medium and large compute resources. IBM charges $15,500 per year for a small cloud server, $26,800 for a medium one and $28,000 per year for a physical server. A company can save big bucks by hosting the small application suite in the cloud: moving one small workload from a $25K dedicated server to a $15,500 cloud server saves roughly $9,500 a year, and the savings multiply across the portfolio. And when you factor in data centre operating costs, a company-owned, on-premise server likely costs substantially more.

Another boffin wondered: “If my daily batch application could be virtualized, I could package it up into a container and deploy it on a cloud server as needed. That way I would only need compute resources when I run my batch jobs, without having to modify the application to become cloud compliant.”

Enter Docker containers, stage left.

With Docker an organization can build, ship and run an application on any cloud infrastructure supporting Docker Containers.

A container is a form of application virtualization that bundles an application with a stripped-down version of the Linux or Windows operating system. Containers solve the not-so-simple problem of moving an application from a developer’s desktop onto production machines with minimal change.
For example, a Java web site uses the Apache Tomcat version 9 web server.
Test Server A has Tomcat 7 loaded while Server B has Tomcat 9. Since I have created a container configured with Tomcat version 9, I can run it on Server A using Docker without having to change the Tomcat configuration for years to come.
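
A minimal sketch of such a container, assuming the application ships as a WAR file and the official tomcat:9.0 base image is used (the file names are hypothetical):

    # Hypothetical Dockerfile for the Tomcat 9 web site
    # Start from the official Tomcat 9 image, which bundles Tomcat and a Java runtime
    FROM tomcat:9.0
    # Copy the packaged web application into Tomcat's webapps directory
    COPY mywebapp.war /usr/local/tomcat/webapps/ROOT.war
    # Tomcat listens on port 8080 by default
    EXPOSE 8080
    # Start Tomcat in the foreground (also the base image's default command)
    CMD ["catalina.sh", "run"]

Because Tomcat 9 travels inside the image, the container behaves the same whether the host has Tomcat 7, Tomcat 9 or no Tomcat at all.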

With Docker I can create an application container configured to run my end-of-day account reporting system along with several report writer applications.

I can now instruct Docker to run the container on my laptop to test it out, then deploy it into production using Amazon Web Services’ Elastic Beanstalk.
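
A sketch of that workflow with the docker and Elastic Beanstalk (eb) command line tools; the image and environment names are hypothetical, and the EB CLI is assumed to be installed and configured:

    # Build the image from the Dockerfile in the current directory
    docker build -t eod-accounting:1.0 .

    # Run it on the laptop first and check that the reports come out correctly
    docker run --rm eod-accounting:1.0

    # Create an Elastic Beanstalk application on the Docker platform,
    # spin up an environment and deploy the same container to it
    eb init eod-accounting --platform docker
    eb create eod-accounting-env
    eb deploy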

Furthermore, I can use Elastic Beanstalk’s scheduler to run the accounting application and its reports in the correct order, so that the outputs are available as inputs to my sales reporting system. Once the task is complete, all run-time resources are removed.
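
One way to schedule this is Elastic Beanstalk’s worker tier, which reads periodic tasks from a cron.yaml file packaged with the application; the task names, URLs and times below are hypothetical:

    version: 1
    cron:
      - name: "eod-accounting"
        url: "/run-eod-accounting"
        schedule: "0 22 * * *"
      - name: "sales-reports"
        url: "/run-sales-reports"
        schedule: "30 23 * * *"

Each scheduled task posts to the given URL in the worker environment at the set time; staggering the two schedules is a simple way to make sure the accounting outputs exist before the sales reports read them.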

The use of containers is not without complexities. For example, Container A implements web services used by containers B and C. In order to deploy B and C, I need to deploy A first. This can get more complicated as the number of dependencies skyrockets.
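
Tools such as Docker Compose let those dependencies be declared rather than managed by hand; a sketch of a docker-compose.yml with hypothetical service names:

    # Container A publishes the web services that B and C call,
    # so B and C declare a dependency on A and Compose starts A first.
    services:
      service-a:
        image: myorg/service-a:1.0
      service-b:
        image: myorg/service-b:1.0
        depends_on:
          - service-a
      service-c:
        image: myorg/service-c:1.0
        depends_on:
          - service-a

Note that depends_on only controls start-up order, not readiness, so health checks or retry logic are still needed as the dependency graph grows.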

So what’s next?
Check out the article SnapIn Services.