Hyper-V resource allocation check
Using PowerShell to perform a Hyper-V resource allocation check
This PowerShell script determines the current resource allocation health of a Hyper-V server or of the nodes in a Hyper-V Cluster. The script automatically scans the physical resources of each Hyper-V node and compares them to the resources allocated to the virtual machines. It then passes or fails each node based on the following criteria:
1:1 Memory – anything higher fails
*This is for static memory only; see below for best practice suggestions regarding dynamic memory
4:1 CPU – anything higher fails
20% free storage space – anything lower fails
These ratios can be edited in the script to suit your desired cutoff points.
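If you want to adjust the cutoffs, the idea is simply to change the threshold values the script compares against. Here is a minimal sketch of what such thresholds might look like; the variable names below are hypothetical and may not match the script's actual variables.

#hypothetical threshold variables - adjust to suit your desired cutoff points
$memoryRatioLimit = 1        #fail if allocated:physical RAM exceeds 1:1 (static memory)
$cpuRatioLimit = 4           #fail if vCPU:pCPU core ratio exceeds 4:1
$minFreeStoragePercent = 20  #fail if free storage space drops below 20%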
Here is a brief video that shows how to run and interpret the script:
The following is an example of the script's output from a two-node Hyper-V Cluster.
Download / Review Hyper-V resource allocation check
Download the script from the TechNet Script Center here:
Hyper-V resource allocation check to determine if resources are overprovisioned
Interpreting the results from the Hyper-V resource allocation check:
Is there a best Hyper-V practice ratio of vCPU to pCPU Cores?
Answer: This question has no single answer. The only answer that comes close is “it depends”, and that isn’t much of an answer.
For the longest time 1:1 was recommended. You can still do this, but with modern processors and schedulers it’s just wasteful. One of the major benefits of virtualization in the first place is that CPUs can be used when needed and shared when not needed.
What this means is that if you really want to know how many cores you need, you need a solid understanding of what your actual workload is going to be. If unsure, you can go with a very conservative 4:1 (the script’s default), but in many cases 6:1 and even 12:1 will operate just fine.
Why? Because in many cases the threads sit idle almost all the time. As such, there is no real hard and fast rule to follow regarding the vCPU:pCPU ratio.
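If you want a quick way to see where a standalone host currently stands, here is a rough sketch (not part of the script) that sums the vCPUs assigned to all VMs and compares them to the physical cores. Run it in an elevated PowerShell session on the Hyper-V host.

#sum the vCPUs assigned to all VMs on this host
$vCPUs = (Get-VM | Measure-Object -Property ProcessorCount -Sum).Sum
#count the physical cores on the host
$pCores = (Get-CimInstance -ClassName Win32_Processor | Measure-Object -Property NumberOfCores -Sum).Sum
$ratio = [math]::Round($vCPUs / $pCores, 2)
Write-Output ("vCPU:pCPU core ratio is {0}:1" -f $ratio)
if ($ratio -gt 4) { Write-Warning 'Ratio exceeds the conservative 4:1 default.' }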
The following per-VM processor settings also have to be taken into consideration (they can be reviewed with Get-VMProcessor, as shown in the sketch after this list):
- Number of virtual processors
- Virtual machine reserve (percentage) and its corresponding percent of total system resources
- Virtual machine limit (percentage) and its corresponding percent of total system resources
- Relative weight
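A quick way to review these settings for a given VM ('Web01' is just a placeholder VM name):

#review processor count, reserve, limit, and weight for a VM
Get-VMProcessor -VMName 'Web01' | Select-Object VMName, Count, Reserve, Limit, RelativeWeight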
The bottom line: the Hyper-V scheduler is extremely efficient, and unless you are running an abnormally CPU-intensive workload, a higher ratio is often fine. If in doubt, review the CPU usage for the same server over the duration of an entire month. In many cases you will find that it’s quite low. The script will fail anything higher than 4:1, but that’s really on the conservative side. Feel free to adjust it.
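For a quick spot check of how busy the host's logical processors actually are (a proper assessment would look at a month of performance data, not a minute of samples), something like the following can be used:

#sample hypervisor logical processor utilization - 12 samples, 5 seconds apart
Get-Counter -Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time' -SampleInterval 5 -MaxSamples 12 |
    Select-Object -ExpandProperty CounterSamples |
    Measure-Object -Property CookedValue -Average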
Additional reading regarding Hyper-V CPU allocation:
Hyper-V VM Density, VP:LP Ratio, Cores and Threads…
Understanding Hyper-V CPU Usage (Physical and Virtual)
Hyper-V Virtual CPUs Explained
Hyper-V Performance, Scale & Architecture Changes
Is there a best Hyper-V practice ratio for memory?
Regarding static memory: Yes, 1:1
Regarding dynamic memory: “It depends”
There are three main points to consider regarding Dynamic memory:
- Startup – the RAM required for the VM to start (e.g. 1024 MB)
- Minimum – the lowest amount the VM can shrink down to when not busy (e.g. 512 MB)
- Maximum – the maximum amount the VM can grow to when very busy (e.g. 2048 MB)
In the above example a VM would turn on with 1GB of RAM, could go down to 512MB when not in use, and could increase to 2GB if busy.
Also factor in (both appear in the sketch below):
- Memory buffer
- Memory weight
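To tie the example together, here is a rough sketch of configuring those dynamic memory values with Set-VMMemory ('Web01' is a placeholder VM name; the values match the example above):

#dynamic memory: start at 1 GB, shrink to 512 MB, grow to 2 GB
#Buffer and Priority (memory weight) are set here as well
Set-VMMemory -VMName 'Web01' -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 2GB -Buffer 20 -Priority 50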
Bottom line: You need to take into account the maximum dynamic RAM setting of each of your VMs. If the combined maximums exceed your available physical RAM, contention issues can occur and relative weight will come into play. Additionally, if RAM is already assigned out to meet those maximums, you may be unable to start additional VMs. In my opinion, you should be familiar with your expected workloads, and if you are using dynamic memory you should weight VMs appropriately to ensure RAM is provisioned where it matters most.
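A rough way to check for that condition on a single host (a sketch, not the script's actual logic):

#compare the sum of configured maximum RAM across all VMs to the host's physical RAM
$vmMaxTotal = (Get-VM | Get-VMMemory | Measure-Object -Property Maximum -Sum).Sum
$hostRAM = (Get-CimInstance -ClassName Win32_ComputerSystem).TotalPhysicalMemory
if ($vmMaxTotal -gt $hostRAM) { Write-Warning 'VM maximums exceed physical RAM - contention is possible.' }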
Don’t forget about cluster failovers
Don’t forget during your planning to account for a node failure. If each node in a two-node cluster has 512GB of RAM, and you have provisioned all 512GB on one node and 128GB on the second, the VMs will be unable to start on the opposing node in the event of a failure. You may be able to run more VMs this way, but they are not truly highly available when configured in this fashion. In a two-node cluster with 512GB per node you should be aiming for VM memory usage under 256GB on each node to accommodate the loss of a node. The script will simulate a node loss for you and will advise whether your VMs are truly HA or not.
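The following is a simplified sketch of that N-1 check for a cluster where every node has the same amount of RAM (an assumption; the script performs a more thorough simulation). It requires the FailoverClusters module and should be run from a cluster node.

#assumes identical RAM in every node; sums memory currently assigned to running VMs
$nodes = Get-ClusterNode
$nodeRAM = (Get-CimInstance -ClassName Win32_ComputerSystem).TotalPhysicalMemory
$assigned = (Get-VM -ComputerName $nodes.Name | Measure-Object -Property MemoryAssigned -Sum).Sum
if ($assigned -gt (($nodes.Count - 1) * $nodeRAM)) { Write-Warning 'VMs would not all fit if one node failed - not truly HA.' }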
The script is not downloadable. The page at the link says it’s not published.
Do you have the PS1 file please? Can’t access it via the Microsoft site.
Jacob,
The resource allocation script has been moved over to Diag-V.
https://techthoughts.info/diag-v/
Just run it and choose the overallocation check.
Is this script available?
The last URL does not work…
Yeah, I had trouble getting it from the link as well.
Has anyone got this script? It would really benefit my situation, as the bosses are asking why we can’t build VMs when we have a failover cluster of 7 nodes with 150GB left on each blade and yet it says overprovisioned.
#from an administrative 5.1.0+ PowerShell session
Install-Module -Name 'Diag-V' -Scope CurrentUser
#import the Diag-V module
Import-Module -Name 'Diag-V'
#I want a complete health report of my Hyper-V deployment
#I want to know if my Hyper-V cluster can withstand a node failure
Test-HyperVAllocation
Looking for a copy of this script to help assess memory and CPU utilization. Where can I download it?
Troy, check out: https://github.com/techthoughts2/Diag-V