What's New in vSphere 6.0 - vNUMA Enhancements

Before diving into vNUMA, let's recall what NUMA is and how it works. NUMA can be explained as follows:

Non-Uniform Memory Access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors.

In a modern physical server with two or more sockets (physical CPUs), memory is distributed so that one bank of memory slots (generally 8) is local to one CPU and another bank is local to the other CPU. A socket (CPU), its local memory, and the bus connecting the two components is called a NUMA node. The sockets are also connected to each other, which allows remote access to another node's memory.


Having more than one socket does not automatically mean a system supports NUMA. Two or more sockets can be connected to the same memory with no distinction between local and remote; this type of architecture is called a UMA (Uniform Memory Access) system. Check your server's specifications to confirm whether it supports NUMA.

For more information on NUMA I would recommend reading this wonderful blog by Frank

vNUMA was introduced in vSphere 5.0 to improve CPU scheduling performance by exposing the underlying NUMA architecture of the physical server to the VM. vNUMA is automatically enabled on a VM with more than 8 vCPUs. You can also explicitly enable vNUMA on a VM with 8 or fewer vCPUs.
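For example, to expose vNUMA to a VM with 8 or fewer vCPUs, you can lower the `numa.vcpu.min` advanced setting in the VM's configuration (the VM must be powered off first; the value shown here is just an example, and the default is 9):

```
numa.vcpu.min = "4"
```

With this setting, the virtual NUMA topology is exposed to any VM with at least 4 vCPUs instead of the default threshold.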

vNUMA is designed for modern operating systems that are NUMA aware and can make intelligent page-placement decisions based on locality. Prior to vSphere 6.0, however, memory hot-add was not vNUMA aware.

With the release of vSphere 6.0, there are also NUMA improvements on the memory side: memory hot-add is now vNUMA aware. To explain this, let's look at the example below.

Note: This post was originally posted on http://plain-virt.blogspot.in/ by Wee Kiong Tan.

Let's start with what happened prior to vSphere 6.0 when memory was hot-added to a VM. We will take an example where a VM is configured with 3 GB of memory.


Graphics Thanks to http://plain-virt.blogspot.in/

As visible in the above picture, the VM's memory is uniformly distributed across its physical counterparts, i.e. the physical NUMA nodes of the ESXi host, because vNUMA is enabled on the VM.

Now suppose this VM has the memory hot-add feature enabled and an additional 3 GB of memory is hot-added to it. The new memory is placed entirely on the first NUMA node, spilling over to the next node only if the first does not have enough free memory.


Graphics Thanks to http://plain-virt.blogspot.in/

As you can see, after adding the additional memory, the allocation across the physical NUMA nodes is no longer uniform.

In vSphere 6.0, VMware has addressed this issue and made memory hot-add NUMA friendly.


Graphics Thanks to http://plain-virt.blogspot.in/

As you can see in the above picture, the memory distribution is now even across the physical NUMA nodes.
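The difference between the two placement policies can be sketched in Python. This is a simplified model of the behavior described above, not actual hypervisor code, and the node capacity used is an assumed example value:

```python
# Simplified model of hot-add memory placement across physical NUMA nodes.
# An illustration of the behavior described above, not ESXi code.

def hot_add_pre_60(nodes, capacity_gb, add_gb):
    """Pre-vSphere 6.0: fill the first node, spill to the next only when it is full."""
    nodes = nodes[:]  # work on a copy, don't mutate the caller's list
    remaining = add_gb
    for i in range(len(nodes)):
        free = capacity_gb - nodes[i]
        take = min(free, remaining)
        nodes[i] += take
        remaining -= take
        if remaining == 0:
            break
    return nodes

def hot_add_60(nodes, add_gb):
    """vSphere 6.0: distribute the hot-added memory evenly across all nodes."""
    share = add_gb / len(nodes)
    return [n + share for n in nodes]

if __name__ == "__main__":
    # A 3 GB VM split evenly across two vNUMA nodes, then 3 GB hot-added.
    before = [1.5, 1.5]
    print(hot_add_pre_60(before, capacity_gb=8, add_gb=3))  # node 0 takes it all
    print(hot_add_60(before, add_gb=3))                     # balanced across nodes
```

Running the model with the 3 GB example from above, the pre-6.0 policy leaves the nodes skewed at 4.5 GB and 1.5 GB, while the 6.0 policy lands both nodes at an even 3 GB.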


