Installing a non-Windows Secure Boot capable EFI Virtual Machine in Hyper-V

So you have downloaded an operating system installation disk (Ubuntu 16.04.2 is used in this instructional) and noticed it supports EFI, yet when you try to boot from the ISO you are greeted with a message stating that the machine does not detect it as a valid Secure Boot capable disk; as shown below, it states that “The image’s hash and certificate are not allowed”.

Luckily this is an easy fix; Ubuntu and Hyper-V are simply having an argument over the validity of the Secure Boot certificate.

Check out the video I have created showing you how to do this, or alternatively keep reading below for instructions and more details.



With your VM turned off, open up the settings page and navigate to the “Security” menu (Server 2016). As you can see in the image below, “Secure Boot” is enabled (checked) and the template is set to “Microsoft Windows”. What this effectively does is limit the Secure Boot function to working only with an appropriately signed Microsoft Windows boot system.

To fix this there are two options, and which one you use depends on the operating system you are trying to install. Preferably we want to keep the benefits of Secure Boot, so the best option, if it works for your operating system, is to simply change the template to “Microsoft UEFI Certificate Authority”. This opens up the Secure Boot option to work with a greater range of appropriately signed boot systems, as against the Microsoft Windows ones exclusively. The settings for this are shown below.
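If you prefer to script the change, the same thing can be done with the Hyper-V PowerShell module on the host (the VM name "Ubuntu-VM" below is just an example; the VM must be powered off):

```powershell
# Switch the Secure Boot template on a Generation 2 VM
# "Ubuntu-VM" is a placeholder name; substitute your own VM
Set-VMFirmware -VMName "Ubuntu-VM" -SecureBootTemplate MicrosoftUEFICertificateAuthority

# Confirm the new firmware settings took effect
Get-VMFirmware -VMName "Ubuntu-VM" | Select-Object SecureBoot, SecureBootTemplate
```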

Click Apply and this will hopefully now work; you can check by running the virtual machine.

Upon booting your virtual machine, you will now be presented with the boot menu from the disk, allowing you to continue on your way.


If this change of the CA template for Secure Boot does not work, however, you may need to disable Secure Boot entirely.

To achieve this, go back to the “Security” menu and simply uncheck “Secure Boot” as per the image below, click Apply, and it should now work.
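Again, this can also be done from PowerShell on the host if you prefer (VM name is a placeholder, and the VM needs to be off):

```powershell
# Disable Secure Boot entirely on the example VM "Ubuntu-VM"
Stop-VM -Name "Ubuntu-VM"
Set-VMFirmware -VMName "Ubuntu-VM" -EnableSecureBoot Off
```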



Have Fun


Hyper-V Fix Time Sync issues

I know this has been done to death, but as this is my blog, and the original idea for it was for me to put all the odds and sods of knowledge in one location so I did not have to remember every little command, I am doing it again.

Hyper-V on Server 2008 and 2008 R2 has a known issue with time slipping slipping slipping into the future (sorry, Steve Miller Band moment there) when using a Hyper-V based Primary Domain Controller (PDC). The first part of the fix is an easy step: turn OFF “Time Synchronisation” for the PDC, or whichever server takes care of time syncing on your network (although I do it for all servers), on the Hyper-V host. This is done by selecting the virtual machine in the hypervisor, opening its properties, selecting Integration Services and unchecking “Time Synchronisation” as shown in the image below.

Virtual Machine Settings - Integration Services - Turn off Time Sync
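On hosts running Server 2012 or later (the Hyper-V PowerShell module is not available on 2008 R2, where you must use the GUI as above), the same integration service can be disabled from PowerShell; "PDC01" here is an example VM name:

```powershell
# Turn off the Time Synchronization integration service for the example VM "PDC01"
Disable-VMIntegrationService -VMName "PDC01" -Name "Time Synchronization"

# Verify which integration services are now enabled
Get-VMIntegrationService -VMName "PDC01"
```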

Secondly to that, on the PDC you should set a known reliable time source, I normally select one from

To add this server and set it as your PDC's time source, open an Administrative Command Prompt and enter the following commands.

net stop w32time
w32tm /config /manualpeerlist:PEERS /syncfromflags:manual /reliable:yes /update
net start w32time

Where PEERS is the selected time server or time server pool.

This should update itself instantly, and keep itself updated.
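To confirm the PDC has actually picked up the new source and is syncing, you can query the Windows Time service:

```powershell
# Show the current sync status (stratum, last sync time, source)
w32tm /query /status

# Show just the configured time source
w32tm /query /source
```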

Hyper-V High Ping Latency

Had an interesting issue today: an insanely slow, brand new server… or so I thought. First a bit of background on the client. They are fairly large, with over 400 client access devices to maintain, not including servers, network equipment etc. To support this the client has three servers, with one purchased each year and the oldest one thrown out, keeping all devices in warranty and with modern, powerful equipment to keep things running. In addition to this there are two other servers that are replaced every three years; these are treated differently as they are speciality servers, and only do one task.

So, I was building a new server for a client, in this case a Dell R520 with Server 2008 R2 running as a hypervisor. Nothing special in that; I do this exceedingly regularly, so it has become more of a routine build for me. What got me with this one though was that during testing I was getting insanely high ping latency, not only to the virtual machines from the network and vice versa, but also from the hypervisor to the machines and vice versa. Pings to other virtual machines on other servers, on different LAN segments, were all responding normally in <1 ms.

My first thought was that there was something wrong with the virtual machines and that I had butchered something in the migration, but as they worked on other hypervisors without delays, that knocked that one on the head. Then I thought network location issue, but that does not make any sense due to the fact that pinging from a hypervisor to a guest does not go across a physical network, so it had to be the brand new server.

OK, so what's new about this server? Well, it's got newer processors, more memory, and faster HDDs with larger capacities; basically it was more a case of what wasn't different from the last server. I am not going to go through the whole process of troubleshooting, but basically it came down to the NICs. Fine, now what about them is it? After trial and error, and of course every tech's most important tool, Google, I came across the issue…


THE ISSUE IS VMQ, or Virtual Machine Queue, inside the Broadcom NIC drivers as shown below; disable this and the issue clears instantly.

Advanced NIC Properties showing Virtual Machine Queue option


Pings and other indicators are now back down to <1 ms, which is what I expected to see in the first place.

Hardware affected by this was as follows:

Server 2008R2 Enterprise
Broadcom Quad Port NIC
Broadcom Driver dated 4th of September 2012, as downloaded from the DELL site on the 31st of January 2013
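For what it's worth, on Server 2012 and later the same setting can be toggled from PowerShell instead of the NIC's Advanced properties dialog (on 2008 R2 you are limited to the dialog shown above or the registry); the adapter name below is an example:

```powershell
# List adapters to find the right name for your system
Get-NetAdapter

# Disable VMQ on the example adapter "Ethernet 1"
Disable-NetAdapterVmq -Name "Ethernet 1"

# Check the VMQ state across all adapters
Get-NetAdapterVmq
```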

Going to put this on my to-check list in future.


UPDATE 11th February 2013:

Dennis over at Flexecom has found the same thing in this posting, posted on the 10th of December; I wish I had found it before so I did not have to troubleshoot this myself. Nonetheless, he has more information on how VMQs are MEANT to work. Interestingly, although it is a different manufacturer, the NIC is the same, as is the driver version, although the reported release date of the driver is different, so currently the problem seems to exist with BROADCOM NICs specifically when using this driver revision. Perhaps we could get Broadcom to turn this off by default, and then if desired the server admin could turn it on.

