July 26, 2021
Through virtualization, network, storage, and compute resources are abstracted, making applications, services, and functions less dependent on the physical hardware. IT administrators can provision applications, services, and functions in their own environments (including operating systems, supporting software, and network and storage resources). Such an environment is less susceptible to interference from other workloads running on the same underlying resources. Alternatively, administrators can share resources to reduce costs and improve overall utilization and performance.
From the early days of standard server-based virtualization (setting aside IBM's long-standing mainframe virtualization capabilities), administrators began creating virtual machines that contained everything needed to run a workload: a complete copy of the operating system, all supporting software, and emulated devices such as a network interface card. Some IT departments are now turning to lighter-weight approaches such as containerization, in which the workload is reduced to a minimal package that sits on top of virtualized resources and shares operating system functions.
Compute resources can be virtualized in several ways, including hardware-based virtualization, hypervisors, and software-based virtualization. Administrators must therefore choose the virtualization method carefully.
Hardware-based virtualization
With hardware-based virtualization, the system provides virtualization by allocating parts of the CPU to different workloads. This is a core aspect of the IBM Power architecture, where part of a core or entire cores can be carved out to create a dedicated platform for a workload, with additional resources allocated dynamically as needed. In this way, each workload gets a dedicated environment that can expand and contract with its needs, and misbehavior by any single workload does not affect the others.
Hardware virtualization also suits workloads that require higher availability (such as virtual private networks or antivirus engines), giving them dedicated resources that other workloads cannot claim. Intel and AMD pay less attention to full hardware partitioning; instead they take a hardware-assisted approach with Intel Virtualization Technology (VT-x) and AMD Virtualization (AMD-V), respectively.
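The partitioning idea described above can be sketched as a toy model in Python. This is an illustrative sketch only (the class and its cost accounting are invented for this example, not any real IBM Power API): each workload receives a dedicated slice of CPU capacity, partitions can grow dynamically from spare capacity, and one partition cannot consume another's share.

```python
class PartitionPool:
    """Toy model of hardware partitioning: fractional cores are
    carved out per workload; partitions cannot exceed total capacity."""

    def __init__(self, total_cores: float):
        self.free = total_cores
        self.partitions: dict[str, float] = {}

    def allocate(self, name: str, cores: float) -> None:
        # Dedicate a slice of CPU capacity to one workload.
        if cores > self.free:
            raise RuntimeError(f"not enough capacity for {name}")
        self.free -= cores
        self.partitions[name] = cores

    def grow(self, name: str, extra: float) -> None:
        # Dynamically expand a partition as its workload's needs grow.
        if extra > self.free:
            raise RuntimeError("no spare capacity")
        self.free -= extra
        self.partitions[name] += extra


pool = PartitionPool(total_cores=8.0)
pool.allocate("db", 2.5)   # sub-core granularity: 2.5 cores for the database
pool.allocate("web", 1.0)
pool.grow("web", 0.5)      # expand the web partition from spare capacity
print(pool.partitions, pool.free)  # prints: {'db': 2.5, 'web': 1.5} 4.0
```

A misbehaving workload cannot "steal" capacity here: any attempt to allocate or grow beyond the free pool raises an error rather than degrading other partitions.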
With hardware-assisted virtualization, the operating system and other software still do the heavy lifting, but the software relies on hardware features to provide optimized virtualization while minimizing performance loss. An API passes calls from the application layer down to the hardware, bypassing much of the intrusive emulation and call handling in the code execution path.
Hardware-assisted virtualization is usually delivered as hypervisor-based virtualization combined with these underlying CPU capabilities.
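On Linux, whether a CPU advertises these hardware-assisted extensions can be checked from its feature flags: `vmx` indicates Intel VT-x and `svm` indicates AMD-V. A minimal sketch, run here against a hardcoded `/proc/cpuinfo`-style sample rather than the live file so it works anywhere:

```python
def cpu_virt_support(cpuinfo_text: str) -> str:
    """Report which hardware-assisted virtualization extension the CPU
    advertises, based on /proc/cpuinfo-style 'flags' lines."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none"


# Abbreviated, hypothetical /proc/cpuinfo excerpt for illustration.
SAMPLE = """\
processor : 0
vendor_id : GenuineIntel
flags     : fpu vme msr sse sse2 vmx ssse3
"""

print(cpu_virt_support(SAMPLE))  # prints: Intel VT-x
```

On a real system you would pass in the contents of `/proc/cpuinfo`; an empty result usually means either the CPU lacks the extension or it is disabled in firmware.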
Hypervisor-based virtualization
Hypervisor-based virtualization is the most common form of virtualization in enterprise data centers. Type 1 hypervisors, also known as bare-metal hypervisors, include VMware vSphere/ESXi, Microsoft Hyper-V, and Linux KVM. With a Type 1 hypervisor, virtualization takes effect before any operating system starts, creating a virtualized hardware platform on which multiple guest operating system instances interact through the hypervisor layer.
A Type 2 hypervisor, also known as a hosted hypervisor, runs on top of a host operating system. Type 2 hypervisors are typically used on desktops to support guest operating systems rather than for server virtualization; examples include Oracle VM VirtualBox, Parallels Desktop, and VMware Fusion.
Full virtualization means that a workload in the virtualized environment is unaware that it is not running directly on the physical platform. Paravirtualization takes a slightly different approach: rather than fully emulating the hardware environment, each workload runs in its own isolated domain and cooperates with the virtualization layer.
Products such as Xen support both full virtualization and paravirtualization. Oracle VM for x86 and IBM LPAR use a modified operating system that understands the paravirtualization layer and optimizes functions such as privileged calls from workloads to the hardware.
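The contrast between the two approaches can be sketched as a toy model. This is purely illustrative (the class, method names, and step counts are invented; real hypervisors trap privileged instructions in hardware): under full virtualization each privileged operation traps into the hypervisor, which must decode and emulate it, while a paravirtualized guest OS is modified to call the hypervisor directly through a hypercall interface, skipping the emulation work.

```python
class ToyHypervisor:
    """Toy contrast of trap-and-emulate vs. hypercall dispatch."""

    def __init__(self):
        self.work_units = 0  # pretend cost counter

    def trap_and_emulate(self, instruction: str) -> str:
        # Full virtualization: the guest is unmodified, so its privileged
        # instruction traps and must be decoded, validated, and emulated.
        self.work_units += 3
        return f"emulated {instruction}"

    def hypercall(self, operation: str) -> str:
        # Paravirtualization: the modified guest calls the hypervisor
        # directly, so there is no instruction decoding or emulation.
        self.work_units += 1
        return f"handled {operation}"


hv = ToyHypervisor()
hv.trap_and_emulate("write CR3")  # full-virtualization path
hv.hypercall("set_page_table")    # paravirtualized path
print(hv.work_units)  # prints: 4  (3 units vs. 1 unit)
```

The point of the sketch is the asymmetry in cost: the paravirtualized path avoids the decode/emulate cycle entirely, which is why modified guests historically outperformed fully virtualized ones before hardware assistance narrowed the gap.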
OS-level virtualization
Operating system-level virtualization (also known as containerization) has gained favor over the past few years. Containerization lets different workloads share the same underlying resources while remaining isolated from one another: problems in one workload should not affect other workloads sharing those resources. This has not always held in practice. Early versions of Docker allowed privileged calls from a container to interfere with the physical environment, so damage to one container could cascade to others in a domino effect. Today, privileged calls to protected underlying resources are disabled by default.
As with hardware-assisted virtualization, performance can be optimized through direct calls to the underlying operating system without any emulation. With the advent of Docker, workloads can be packaged simply and moved from one platform to another while minimizing the resources used to provide virtualization. Operating system-level virtualization is embedded in many cloud platforms and supported by most DevOps systems. Other platforms that provide operating system-level virtualization include Linux Containers (LXC) and IBM Workload Partitions (WPARs) for AIX.
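One practical consequence of sharing the host kernel is that a containerized process can often tell it is containerized by inspecting its own cgroup paths, which on container runtimes mention the runtime by name. A minimal heuristic sketch, run against hardcoded hypothetical `/proc/1/cgroup`-style samples rather than the live file (cgroup v2 layouts differ, so this is illustrative, not a robust detector):

```python
def looks_containerized(cgroup_text: str) -> bool:
    """Heuristic: cgroup paths in a container typically name the runtime
    (docker, kubepods, lxc, ...), while on bare metal they are just '/'."""
    runtimes = ("docker", "kubepods", "lxc", "containerd")
    return any(
        runtime in line
        for line in cgroup_text.splitlines()
        for runtime in runtimes
    )


# Hypothetical cgroup v1-style samples for illustration.
IN_CONTAINER = "12:cpu,cpuacct:/docker/3f2abc\n1:name=systemd:/docker/3f2abc\n"
ON_HOST = "12:cpu,cpuacct:/\n1:name=systemd:/\n"

print(looks_containerized(IN_CONTAINER))  # prints: True
print(looks_containerized(ON_HOST))       # prints: False
```

This visibility cuts both ways: it is convenient for tooling, but it also illustrates why container isolation is weaker than a hypervisor's, since the guest and host share one kernel's view of the world.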
Cloud platforms tend to use hypervisor-based or operating system-level virtualization, or to layer OS-level virtualization on top of hypervisor-based platforms.
The choice of virtualization type depends on which guest operating systems must be supported, the number of workloads to install and manage, the overall performance required, and the overall cost. When you are trying to virtualize an entire platform running hundreds of operating system instances, licensing fees can be very high.