- 5.1 NFVI SW profile description.
- 5.2 NFVI SW profiles features and requirements.
- 5.3 NFVI HW profile description.
- 5.4 NFVI HW profiles features and requirements.
The NFVI software layer is composed of two layers, shown in Figure 5-1:
- The virtualisation infrastructure layer, which is based on hypervisor virtualisation technology or container-based virtualisation technology; container virtualisation can be nested within hypervisor-based virtualisation.
- The host OS layer
Figure 5-1: NFVI software layers.
For a host (compute node or physical server), the virtualisation layer is an abstraction layer between the hardware components (compute, storage and network resources) and the virtual resources allocated to VNF-Cs; each VNF-C generally maps 1:1 to a single VM or a single container/pod. Figure 5-2 represents the virtual resources (virtual compute, virtual network and virtual storage) allocated to a VNF-C and managed by the VIM.
Figure 5-2: NFVI- Virtual resources.
Depending on the requirements of the VNFs, a VNF-C will be deployed with an NFVI instance type and an appropriate compute flavour. An NFVI instance type is defined by an NFVI SW profile and an NFVI HW profile. An NFVI SW profile is a set of virtual resources with specific behaviour, capabilities and metrics. Figure 5-3 depicts a high-level view of the software profiles for the Basic, Network Intensive and Compute Intensive instance types.
Figure 5-3: NFVI software profiles.
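For illustration only, the relationship described above between software profiles, hardware profiles, instance types and compute flavours can be sketched as simple data structures; the class and field names below are hypothetical and are not part of this specification.

```python
from dataclasses import dataclass

@dataclass
class SwProfile:
    """NFVI SW profile: virtual-resource behaviour, capabilities and metrics."""
    name: str                  # "basic", "network-intensive" or "compute-intensive"
    cpu_allocation_ratio: str  # e.g. "1:1" (see Table 5-1)
    numa_awareness: bool
    cpu_pinning: bool
    huge_pages: bool

@dataclass
class NfviInstanceType:
    """An NFVI instance type is defined by an NFVI SW profile plus an NFVI HW profile."""
    sw_profile: SwProfile
    hw_profile: str            # host profile name, e.g. "hp1" (see Section 5.3)

# A VNF-C is deployed with an instance type plus a flavour from the compute catalogue.
network_intensive = NfviInstanceType(
    sw_profile=SwProfile("network-intensive", "1:1", True, True, True),
    hw_profile="hp1",
)
```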
The following sections detail the NFVI SW profile features per type of virtual resource. The list of these features will evolve over time.
Table 5-1 and Table 5-2 depict the features related to virtual compute.
.conf | Feature | Type | Description |
---|---|---|---|
nfvi.com.cfg.001 | Support of flavours | Flavours | Support of compute Flavours defined in Compute Flavour's catalogue. |
nfvi.com.cfg.002 | CPU partitioning | Value | CPUs dedicated to the host and CPUs dedicated to VNFs |
nfvi.com.cfg.003 | CPU allocation ratio | Value | Number of virtual cores per physical core |
nfvi.com.cfg.004 | NUMA awareness | Yes/No | Support of NUMA at the virtualization layer |
nfvi.com.cfg.005 | CPU pinning capability | Yes/No | Binding of a process to a dedicated CPU |
nfvi.com.cfg.006 | Huge Pages | Yes/No | Ability to manage huge pages of memory |
Table 5-1: Virtual Compute features.
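As a non-normative illustration, in an OpenStack-based VIM several of the Table 5-1 capabilities are commonly requested through flavour extra specs; the exact keys and values depend on the VIM and its release, so the snippet below is only a sketch.

```python
# Illustrative flavour extra specs mapping some Table 5-1 features to an
# OpenStack-style VIM (keys may differ per VIM release).
network_intensive_extra_specs = {
    "hw:cpu_policy": "dedicated",  # nfvi.com.cfg.005 - CPU pinning
    "hw:numa_nodes": "1",          # nfvi.com.cfg.004 - NUMA awareness
    "hw:mem_page_size": "large",   # nfvi.com.cfg.006 - huge pages
}
# nfvi.com.cfg.003 (CPU allocation ratio) is typically a scheduler/host-level
# setting on the VIM rather than a per-flavour property.
```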
.conf | Feature | Type | Description |
---|---|---|---|
nfvi.com.acc.cfg.001 | Editor's note: to be worked on | | |
Table 5-2: Virtual Compute Acceleration features.
Table 5-3 and Table 5-4 depict the features related to virtual storage.
.conf | Feature | Type | Description |
---|---|---|---|
nfvi.stg.cfg.001 | Storage Types | Yes/No | Supported storage types |
nfvi.stg.cfg.002 | Storage Block | Yes/No | |
nfvi.stg.cfg.003 | Storage Object | Yes/No | |
nfvi.stg.cfg.004 | Storage with replication | Yes/No | |
nfvi.stg.cfg.005 | Storage with encryption | Yes/No | |
Table 5-3: Virtual Storage features.
.conf | Feature | Type | Description |
---|---|---|---|
nfvi.stg.acc.cfg.001 | Storage IOPS oriented | Yes/No | |
nfvi.stg.acc.cfg.002 | Storage capacity oriented | Yes/No | |
Table 5-4: Virtual Storage Acceleration features.
Table 5-5 and Table 5-6 depict the features related to virtual networking.
.conf | Feature | Type | Description |
---|---|---|---|
nfvi.net.cfg.001 | vNIC interface | IO virtualisation | e.g. virtio1.1, i40evf (Intel driver for VF SR-IOV). |
nfvi.net.cfg.002 | Overlay protocol | Protocols | The overlay network encapsulation protocol needs to enable ECMP in the underlay to take advantage of the scale-out features of the network fabric. |
nfvi.net.cfg.003 | NAT | Yes/No | Support of Network Address Translation |
nfvi.net.cfg.004 | Security Groups | Yes/No | Set of rules managing incoming and outgoing network traffic |
nfvi.net.cfg.005 | SFC | Yes/No | Support of Service Function Chaining |
nfvi.net.cfg.006 | Traffic patterns symmetry | Yes/No | Traffic patterns should be optimal in terms of packet flow; north-south traffic shall not be concentrated in specific elements of the architecture, which would make them critical choke points, unless strictly necessary (e.g. when 1:many NAT is required). |
nfvi.net.cfg.007 | Horizontal scaling | Yes/No | The VNF cluster must be able to scale horizontally and leverage technologies such as ECMP to enable scale-out/scale-in, favouring active-active HA models, even though this may require some level of application redesign to share state between VNF instances. |
Table 5-5: Virtual Networking features.
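For example, a security group (nfvi.net.cfg.004) is simply a named set of directional traffic rules. The sketch below is illustrative only; the field names are hypothetical and loosely modelled on common VIM security-group APIs.

```python
# Hypothetical security group for a VNF-C management interface.
mgmt_security_group = {
    "name": "vnfc-mgmt",
    "rules": [
        # Allow SSH only from the management subnet (ingress).
        {"direction": "ingress", "protocol": "tcp",
         "port_min": 22, "port_max": 22, "remote_prefix": "192.0.2.0/24"},
        # Allow all outgoing traffic (egress).
        {"direction": "egress", "protocol": "any", "remote_prefix": "0.0.0.0/0"},
    ],
}
```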
.conf | Feature | Type | Description |
---|---|---|---|
nfvi.net.acc.cfg.001 | vSwitch optimization | Yes/No and SW Optimization | e.g. DPDK. |
nfvi.net.acc.cfg.002 | Support of HW offload | Yes/No | e.g. support of SR-IOV, SmartNic. |
nfvi.net.acc.cfg.003 | Crypto acceleration | Yes/No | |
nfvi.net.acc.cfg.004 | Crypto Acceleration Interface | Yes/No | |
Table 5-6: Virtual Networking Acceleration features.
Comment: To be worked on.
This section will detail the NFVI SW profiles and associated configurations for the 3 types of NFVI instances: Basic, Network Intensive and Compute Intensive.
Table 5-7 depicts the features and configurations related to virtual compute for the 3 types of reference NFVI instances.
.conf | Feature | Type | Basic | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.com.cfg.001 | Support of flavours | Yes/No | Y | Y | Y |
nfvi.com.cfg.002 | CPU partitioning | Value | | | |
nfvi.com.cfg.003 | CPU allocation ratio | value | 1:4 | 1:1 | 1:1 |
nfvi.com.cfg.004 | NUMA awareness | Yes/No | N | Y | Y |
nfvi.com.cfg.005 | CPU pinning capability | Yes/No | N | Y | Y |
nfvi.com.cfg.006 | Huge Pages | Yes/No | N | Y | Y |
Table 5-7: Virtual Compute features and configuration for the 3 types of SW profiles.
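The Table 5-7 configuration can also be captured in a machine-readable form, for example for tooling that validates an NFVI deployment against its declared SW profile; the structure below is a sketch only.

```python
# Sketch of the Table 5-7 virtual compute configuration per SW profile.
SW_PROFILE_COMPUTE = {
    "basic": {
        "cpu_allocation_ratio": "1:4", "numa_awareness": False,
        "cpu_pinning": False, "huge_pages": False,
    },
    "network-intensive": {
        "cpu_allocation_ratio": "1:1", "numa_awareness": True,
        "cpu_pinning": True, "huge_pages": True,
    },
    "compute-intensive": {
        "cpu_allocation_ratio": "1:1", "numa_awareness": True,
        "cpu_pinning": True, "huge_pages": True,
    },
}
```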
Table 5-8 will gather the virtual compute acceleration features; it will be filled in over time.
.conf | Feature | Type | Basic | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.com.acc.cfg.001 | Editor's note: to be worked on | | | | |
Table 5-8: Virtual Compute Acceleration features.
Table 5-9 and Table 5-10 depict the features and configurations related to virtual storage for the 3 types of reference NFVI instances.
.conf | Feature | Type | Basic | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.stg.cfg.001 | Catalogue storage Types | Yes/No | Y | Y | Y |
nfvi.stg.cfg.002 | Storage Block | Yes/No | Y | Y | Y |
nfvi.stg.cfg.003 | Storage Object | Yes/No | Y | Y | Y |
nfvi.stg.cfg.004 | Storage with replication | Yes/No | N | Y | Y |
nfvi.stg.cfg.005 | Storage with encryption | Yes/No | N | N | Y |
Table 5-9: Virtual Storage features and configuration for the 3 types of SW profiles.
Table 5-10 depicts the features related to virtual storage acceleration.
.conf | Feature | Type | Basic | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.stg.acc.cfg.001 | Storage IOPS oriented | Yes/No | N | Y | Y |
nfvi.stg.acc.cfg.002 | Storage capacity oriented | Yes/No | N | N | Y |
Table 5-10: Virtual Storage Acceleration features.
Table 5-11 and Table 5-12 depict the features and configurations related to virtual networking for the 3 types of reference NFVI instances.
.conf | Feature | Type | Basic | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.net.cfg.001 | vNIC interface | IO virtualisation | virtio1.1 | virtio1.1, i40evf (Intel driver for VF SR-IOV) | virtio1.1, i40evf (Intel driver for VF SR-IOV) |
nfvi.net.cfg.002 | Overlay protocol | Protocols | VXLAN, MPLSoUDP, GENEVE, other | VXLAN, MPLSoUDP, GENEVE, other | VXLAN, MPLSoUDP, GENEVE, other |
nfvi.net.cfg.003 | NAT | Yes/No | Y | Y | Y |
nfvi.net.cfg.004 | Security Group | Yes/No | Y | Y | Y |
nfvi.net.cfg.005 | SFC support | Yes/No | N | Y | Y |
nfvi.net.cfg.006 | Traffic patterns symmetry | Yes/No | Y | Y | Y |
nfvi.net.cfg.007 | Horizontal scaling | Yes/No | Y | Y | Y |
Table 5-11: Virtual Networking features and configuration for the 3 types of SW profiles.
.conf | Feature | Type | Basic | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.net.acc.cfg.001 | vSwitch optimization | Yes/No and SW Optimization | N | Y, DPDK | Y, DPDK |
nfvi.net.acc.cfg.002 | Support of HW offload | Yes/No | N | Y, support of SR-IOV and SmartNic | Y, support of SR-IOV and SmartNic |
nfvi.net.acc.cfg.003 | Crypto acceleration | Yes/No | N | Y | Y |
nfvi.net.acc.cfg.004 | Crypto Acceleration Interface | Yes/No | N | Y | Y |
Table 5-12: Virtual Networking Acceleration features.
The support of a variety of different workload types, each with different (sometimes conflicting) compute, storage and network characteristics, including accelerations and optimizations, drives the need to aggregate these characteristics as a hardware (host) profile and capabilities. A host profile is essentially a “personality” assigned to a compute host (physical server, also known as compute host, host, node or pServer). The host profiles and related capabilities consist of the intrinsic compute host capabilities (such as the number of CPUs (sockets), the number of cores per CPU, RAM, local disks and their capacity, etc.), capabilities enabled in hardware/BIOS, specialised hardware (such as accelerators), and the underlay networking and storage.
This chapter defines a simplified model of the host, its host profile and related capabilities, associated with each of the different NFVI hardware profiles; some of these profile and capability parameters are shown in Figure 5-4.
Figure 5-4: NFVI hardware profiles and host associated capabilities.
The host profile model and configuration parameters (hereafter for simplicity simply "host profile") will be utilized in the Reference Architecture to define different hardware profiles. The host profiles can be considered to be the set of EPA-related (Enhanced Performance Awareness) configurations on NFVI resources.
Please note that in this chapter we shall not list all of the EPA-related configuration parameters.
A software profile (see Chapter 4 and Chapter 5) defines the characteristics of the NFVI SW on which Virtual Machines (or Containers) will be deployed. A many-to-many relationship exists between software profiles and host profiles. A given host can only be assigned a single host profile; a host profile can be assigned to multiple hosts. Different Cloud Service Providers (CSPs) may utilize different naming standards for their host profiles.
The following naming convention is used in this document:
<host profile name>:: <”hp”><numeral host profile sequence #>
When a software profile is associated with a host profile, a qualified name can be used, as specified below. For example, for software profile “n” (network intensive), the above host profile name would be “n-hp1”.
<qualified host profile>:: <software profile><”-“><”hp”><numeral host profile sequence #>
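A minimal sketch of the naming convention above (the helper function is illustrative only):

```python
def qualified_host_profile(software_profile: str, sequence: int) -> str:
    """Build a qualified host profile name, e.g. ("n", 1) -> "n-hp1"."""
    return f"{software_profile}-hp{sequence}"

assert qualified_host_profile("n", 1) == "n-hp1"  # network intensive, host profile 1
```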
Figure 5-5: Generic Hardware Profile, Software Flavour, Physical server relationship.
Figure 5-5 shows a simplified depiction of the relationship between hardware profile, software profile, physical server and virtual compute. In the diagram, the resource pool, a logical construct, depicts all physical hosts that have been configured as per a given host profile; there is one resource pool for each hardware profile.
Please note resource pools are not OpenStack host aggregates.
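As an illustration of the resource-pool construct (each host carries exactly one host profile, and there is one pool per hardware profile), the grouping can be sketched as follows; host names and profile identifiers are examples only.

```python
from collections import defaultdict

# Example assignment of hosts to host profiles (one profile per host).
hosts = {"server-01": "hp1", "server-02": "hp1", "server-03": "hp2"}

# A resource pool is the set of hosts configured as per a given host profile.
resource_pools = defaultdict(list)
for host, profile in hosts.items():
    resource_pools[profile].append(host)

# resource_pools == {"hp1": ["server-01", "server-02"], "hp2": ["server-03"]}
```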
The host profile and capabilities include the following (see the sketch after this list):
- # of CPUs (sockets): is the number of CPUs installed on the physical server.
- # of cores/CPU: is the number of cores on each of the CPUs of the physical server.
- RAM (GB): is the amount of RAM installed on the physical server.
- Local Disk Capacity: is the number of local disks and the capacity of the disks installed on the physical server.
- HT (Hyper-Threading; technically SMT, Simultaneous Multithreading): enabled on all physical servers, providing two hyper-threads per physical core. Always on. Configured in the host (BIOS).
- NUMA (Non-Uniform Memory Access): Indicates that vCPU will be on a Socket that is aligned with the associated NIC card and memory. Important for performance optimized VNFs. Configured in the host (BIOS).
- SR-IOV (Single-Root Input/Output Virtualisation): Configure PCIe ports to support SR-IOV.
- smartNIC (aka Intelligent Server Adaptors): accelerated virtual switch using a smartNIC.
- Cryptography Accelerators: such as AES-NI, SIMD/AVX, QAT.
- Security features: such as Trusted Platform Module (TPM).
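A representational sketch of a host profile carrying the capabilities listed above; the field names are illustrative and not normative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HostProfile:
    """Illustrative host profile; field names are examples, not normative."""
    name: str                       # e.g. "hp1"
    cpu_sockets: int                # # of CPUs (sockets)
    cores_per_cpu: int              # # of cores per CPU
    ram_gb: int
    local_disks: List[str]          # e.g. ["SSD 960 GB", "SSD 960 GB"]
    hyperthreading: bool            # HT/SMT enabled in BIOS
    numa: bool                      # NUMA alignment enabled in BIOS
    sriov: bool                     # PCIe ports configured for SR-IOV
    smartnic: bool
    crypto_accelerators: List[str]  # e.g. ["AES-NI", "QAT"]
    tpm: bool                       # Trusted Platform Module present
```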
The following model, Figure 5-6, depicts the essential characteristics of a host that are of interest in specifying a host profile. The host (physical server) is composed of compute, network and storage resources. The compute resources are composed of physical CPUs (aka CPU sockets or sockets) and memory (RAM). The network resources and storage resources are similarly modelled.
Figure 5-6: Generic model of a compute host for use in Host Profile configurations.
The hardware (host) profile properties are specified in the following sub-sections. The following diagram (Figure 5-7) pictorially represents a high-level abstraction of a physical server (host).
Figure 5-7: Generic model of a compute host for use in Host Profile configurations.
The configurations specified here will be utilized when specifying the actual hardware profile configurations for each of the NFVI hardware profile types depicted in Figure 5-4.
Reference | Feature | Description | Basic Type | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.hw.cpu.cfg.001 | Number of CPU (Sockets) | Number of CPU sockets in each platform | 2 | 2 | 2 |
nfvi.hw.cpu.cfg.002 | Number of Cores per CPU | Number of cores needed per CPU | 20 | 20 | 20 |
nfvi.hw.cpu.cfg.003 | NUMA | | N | Y | Y |
nfvi.hw.cpu.cfg.004 | Hyperthreading (HT) | | Y | Y | Y |
Table 5-13: Minimum Compute resources configuration parameters.
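As a worked example of what the Table 5-13 minimums provide, and assuming the nfvi.com.cfg.003 ratio ("virtual cores per physical core", Table 5-1) is applied to physical cores while CPUs reserved for the host (nfvi.com.cfg.002) are ignored:

```python
# Table 5-13 minimums: 2 sockets x 20 cores, hyper-threading enabled.
sockets, cores_per_socket, threads_per_core = 2, 20, 2

physical_cores = sockets * cores_per_socket            # 40 physical cores
hardware_threads = physical_cores * threads_per_core   # 80 hardware threads

# Indicative vCPU budget per SW profile (Table 5-7 allocation ratios):
basic_vcpu_budget = physical_cores * 4       # 1:4 ratio -> 160 vCPUs
intensive_vcpu_budget = physical_cores * 1   # 1:1 ratio -> 40 vCPUs
```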
Reference | Feature | Description | Basic Type | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.hw.cac.cfg.001 | GPU | GPU | N | N | Y |
Table 5-14: Compute acceleration configuration specifications.
Reference | Feature | Description | Basic Type | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.hw.stg.hdd.cfg.001* | Local Storage HDD | | | | |
nfvi.hw.stg.ssd.cfg.002* | Local Storage SSD | | Recommended | Recommended | Recommended |
Table 5-15: Storage configuration specification.
*These specify the local storage configuration, including the number and capacity of storage drives.
Reference | Feature | Description | Basic Type | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.hw.nic.cfg.001 | NIC Ports | Total Number of NIC Ports available in the platform | 4 | 4 | 4 |
nfvi.hw.nic.cfg.002 | Port Speed | Port speed specified in Gbps | 10 | 25 | 25 |
Table 5-16: Minimum NIC configuration specification.
Reference | Feature | Description | Basic Type | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.hw.pci.cfg.001 | PCIe slots | Number of PCIe slots available in the platform | 8 | 8 | 8 |
nfvi.hw.pci.cfg.002 | PCIe speed | | Gen 3 | Gen 3 | Gen 3 |
nfvi.hw.pci.cfg.003 | PCIe Lanes | | 8 | 8 | 8 |
Table 5-17: PCIe configuration specification.
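As a rough, non-normative sanity check of the Table 5-17 values (assuming PCIe Gen 3's 8 GT/s per lane with 128b/130b encoding), an x8 slot offers roughly 7.9 GB/s per direction, which comfortably accommodates a 25 Gbps NIC port:

```python
# Approximate per-slot bandwidth for the Table 5-17 configuration (Gen 3, x8).
gtps_per_lane = 8.0              # PCIe Gen 3 raw rate, GT/s per lane
encoding_efficiency = 128 / 130  # 128b/130b line encoding
lanes = 8

gbps_per_direction = gtps_per_lane * encoding_efficiency * lanes  # ~63 Gbit/s
gbytes_per_second = gbps_per_direction / 8                        # ~7.9 GB/s
```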
Reference | Feature | Description | Basic Type | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.hw.nac.cfg.001 | Cryptographic Acceleration | IPSec, Crypto | |||
nfvi.hw.nac.cfg.002 | SmartNIC | A SmartNIC that is used to offload vSwitch functionality to hardware | | Maybe | Maybe |
nfvi.hw.nac.cfg.003 | Compression | | | | |
Table 5-18: Network acceleration configuration specification.
Reference* | Feature | Description | Basic Type | Network Intensive | Compute Intensive |
---|---|---|---|---|---|
nfvi.hw.sec.cfg.001 | TPM | Platform must have Trusted Platform Module. | Y | Y | Y |
Table 5-19: Security configuration specification.