Have you ever wondered what the maximum clock speed a central processing unit (CPU) can achieve is? Given the pace of technological advancement there should hardly be any limit, yet, surprisingly, the speed of commercially available CPUs has been stuck around 4 GHz for a long time.
This is not because the technology cannot reach higher clock speeds; it is the heat generated at those speeds that sets the limit. As it turns out, we do not have cooling systems efficient enough to make CPUs with much higher clock speeds practical.
The limited CPU speed in turn restricts how well other hardware components, such as the hard disk and memory, can be utilized. CPU density, the number of cores that can be packed into the same physical space (say 5 mm x 5 mm), is also worth mentioning. Today a 24-core CPU can fill this tiny space, and if Moore's law holds, this density will double every 2 years: in 2 years the number rises to 48, and it will only keep on increasing.
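A quick back-of-the-envelope sketch of that doubling (the 24-core starting point and the 2-year doubling period are simply the assumptions from the paragraph above):

```python
# Minimal sketch: project core counts under Moore's law,
# assuming 24 cores today and a doubling every 2 years (as above).
cores = 24
for year in range(0, 11, 2):
    print(f"year +{year}: {cores} cores")
    cores *= 2  # Moore's law: density doubles every 2 years
```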
The best way to make use of this incredible computational power is virtualization. Hence, virtualization is on the rise, and organisations all over the world are trying to understand it better and make the most of this technological development.
So what is virtualization?
Virtualization is the process of using software, typically called a hypervisor, to run multiple operating systems on the same physical hardware. These operating systems can then run different applications which do not interfere with each other: they run in isolation, under the illusion that they are running on separate hardware machines.
Basically, the many cores of the same physical system are divided among several operating systems, so that the CPU, and with it the other hardware components, is utilized far more efficiently.
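Whether a machine can do this efficiently depends on hardware virtualization support. As a minimal sketch (Linux-only, and assuming an x86 CPU), one can check the CPU flags for Intel VT-x (vmx) or AMD-V (svm):

```python
def has_virtualization_support() -> bool:
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm).

    Reads /proc/cpuinfo, so this sketch only works on Linux/x86.
    """
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("hardware virtualization supported:", has_virtualization_support())
```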
What are the benefits and (some of) the challenges of virtualization?
Running separate environments on the same system brings a lot of benefits for developers.
Virtual machines can be saved to disk and then run on different hardware. They inherit 100% of their previous state, which saves the time otherwise spent setting up the environment when moving from one system to another.
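As a hedged sketch of what this can look like in practice, here is how saving and restoring a VM's state might be done with the libvirt Python bindings; the domain name myvm and the file path are hypothetical, and a local QEMU/KVM setup is assumed:

```python
import libvirt  # libvirt Python bindings (pip install libvirt-python)

# Connect to the local QEMU/KVM hypervisor (assumes it is installed and running).
conn = libvirt.open("qemu:///system")

# "myvm" is a hypothetical domain name; replace it with an existing VM.
dom = conn.lookupByName("myvm")

# Suspend the VM and write its full state, memory included, to a file...
dom.save("/var/tmp/myvm.state")

# ...later (after copying the file, possibly to another host),
# resume the VM exactly where it left off.
conn.restore("/var/tmp/myvm.state")
```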
The earlier practice was to have one application start multiple services, which used to take a lot of time. Microservices mean running one service per application. Using microservices on virtual machines makes bugs easier to find, and also allows for easier upgrades and rollbacks of an application.
One of the best features of microservices is scalability. Because they are designed to run in isolation (within containers), they can also be run in multiple instances, where multiple may mean 2, 2,000, or more, according to the business's computational needs, without the fear that they will step on each other's toes or that some of them will eventually crash and take the others down.
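As a hedged sketch, assuming Docker and its Python SDK (the docker package) are available, starting several isolated instances of the same service image (nginx here, purely as an example) could look like this:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Start N isolated instances of the same service image.
# "nginx" is used purely as an example; N could be 2, 2000, or more.
N = 3
instances = [
    client.containers.run("nginx", detach=True, name=f"web-{i}")
    for i in range(N)
]
print([c.name for c in instances])
```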
Ideally, one would not want these microservices to interact with each other unless it is intended, and this is where containers help us. Containers are used to isolate the contained applications:
- from the inside, they look like a (virtual) machine, but they are not; they only provide an isolated environment for the contained application to run within;
- they share the (virtual) hardware and the operating system kernel of the host (virtual) machine, so starting them does not require a complete boot process (see the sketch after this list).
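A small, hedged way to see this kernel sharing in action, again assuming Docker and its Python SDK (alpine is just a convenient small image): the kernel version reported inside the container matches the host's, because no separate operating system was booted.

```python
import platform
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Run "uname -r" inside a throwaway container and capture its output.
in_container = client.containers.run("alpine", "uname -r", remove=True)

print("host kernel:     ", platform.release())
print("container kernel:", in_container.decode().strip())
# The two lines match: the container shares the host's kernel
# instead of booting an operating system of its own.
```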
The big benefit of containers is that they have a storage form (called an image) that packages everything an application (or a set of applications) needs to run, and that can be started at any time, and as many times as needed, always from the same "time 0" state.
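As a final hedged sketch under the same assumptions (the ./myapp build directory and the image tag are hypothetical), building such an image once and then starting it repeatedly from that identical "time 0" state could look like:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Build an image once from a hypothetical ./myapp directory
# containing the application and a Dockerfile describing it.
image, _build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# Every container started from the image begins from the exact same state.
first = client.containers.run("myapp:1.0", detach=True)
second = client.containers.run("myapp:1.0", detach=True)
```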
Even though the concept of virtual machines has been around for a long time, it is only recently that their use has picked up speed. Hence, a proper understanding needs to be developed to use virtualization effectively. And of course, like any newly adopted technology, it is expensive to implement.
To find out more about our experience in virtualization projects, you may read here.
Co-authors: Florin Ganea (System Architect) & Saurabh Jha (Corporate Communication Specialist)