Hardware Automation - BIOS settings

As you know, there are several hardware vendors out there, and by nature they all use different ways to automate their kit. Luckily, something I would call a standard has made it into most vendor APIs – the Redfish API.

For both parts I would like to concentrate on HPE first, as it is one of the major players in the game.


As we are clearly looking at the whole topic through VMware glasses, I will reuse the tool I used to automate customers' VMware VVD/Cloud Foundation deployments – vRealize Orchestrator.


In the case of HPE there is a tool they offer that could solve the baselining problems just discussed, but it might not meet your requirements. I'm speaking about OneView. This tool from HPE clearly focuses on HPE hardware and does not provide a solution I can use across a heterogeneous hardware landscape, so it is not an option for me.

What I definitely do like to use are the APIs OneView leverages to automate the individual hardware components. One of these is the iLO RESTful API.

It has been there since iLO 4 firmware 2.00 (found on Gen8 and Gen9 kit) and has been enhanced with every new minor release. The latest evolution is iLO 5, with the broadest feature set so far.

I will reduce the scope to iLO 5 only for now, as this covers most current deployments, because it comes with all Gen10 kit HPE delivers to customers at the moment. In a later article I might show what I also developed to support iLO 4 on Gen8 and Gen9 hardware.


But let's kick it off. The iLO edition you will need to follow my explanations below, and in the following parts, is “Advanced”.


Looking at the BIOS settings, it is always a good idea to create a golden host, which contains the BIOS configuration you would like to distribute to all other nodes of the same model in your datacenter. I will not provide any guideline on how to find your optimal configuration in this article, but maybe in a future one. In the meantime I recommend the “VMware vSphere 6.5 Host Resources Deep Dive” by Frank Denneman and Niels Hagoort.

Assuming you have configured a host according to your needs, we first need to read the current configuration from this node via its iLO RESTful API, clean it up, and make it our golden configuration to use for every deployment of a new server or to distribute across your existing estate.


As we need to run all iLO tasks in the same way, I have created a wrapper workflow that takes the individual request and handles authentication for us.

var auth = RESTAuthenticationManager.createAuthentication("Basic",["Shared Session",user,password]);

var host = RESTHostManager.createHost(name);
host.url = url;
host.connectionTimeout = connectionTimeout;
host.operationTimeout = operationTimeout;
host.hostVerification = hostVerification;
host.authentication = auth;
// create temporary host
var restHost = RESTHostManager.createTransientHostFrom(host);
// prepare and execute main rest request
var request = restHost.createRequest(callmethod,call,callcontent);

if (callmethod != "GET") {
  request.contentType = "application/json";
}
var response = request.execute();
// fail if the call did not succeed
if (response.statusCode != 200 && response.statusCode != 201) {
  System.debug("Status code was " + response.statusCode);
  throw "return code of main request was not 200 or 201";
}
responseStatusCode = response.statusCode;
responseContent = response.contentAsString;


Read configuration

The following call is simply passed through the wrapper workflow as a GET request:


Read iLO BIOS config
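To make the wrapper inputs concrete, here is a minimal sketch of the parameters such a read call hands to the wrapper workflow. The BIOS resource path is an assumption based on the common iLO 5 Redfish layout; verify it against your own iLO.

```javascript
// Sketch only: parameter set handed to the wrapper workflow for the read.
// The resource path below is an assumption for iLO 5 (Redfish layout).
function buildBiosReadRequest() {
  return {
    callmethod: "GET",                    // read-only request
    call: "/redfish/v1/Systems/1/Bios/",  // current (running) BIOS configuration
    callcontent: ""                       // GET carries no payload
  };
}
```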

Clean up

As you might have seen while looking at the JSON output of the read call, the response contains a lot of personalized information about the server, like the server name and serial number. We don't want to carry this over to the new server, so we simply delete these parts of the JSON structure. (The server name might be useful, but it needs to be individualized for every write action.)

  "Attributes": {
    "AcpiHpet": "Enabled",
    "AcpiRootBridgePxm": "Enabled",
    "AcpiSlit": "Enabled",
    "AdjSecPrefetch": "Enabled",
    "AdminEmail": "",
    "AdminName": "",
    "AdminOtherInfo": "",
    "AdminPhone": "",
    "AdvancedMemProtection": "AdvancedEcc",
    "AsrStatus": "Enabled",
    "AsrTimeoutMinutes": "Timeout10",
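The clean-up itself can be sketched as a small function that deletes the host-specific keys from the Attributes object before the result is stored as the golden configuration. The attribute names in the list are illustrative examples, not a complete set; check your own read output for what needs to go.

```javascript
// Sketch of the clean-up step: strip personalized attributes from the
// BIOS JSON read from the golden host. The key list is illustrative;
// extend it with whatever host-specific fields your read call returns.
function cleanBiosConfig(config) {
  var personalized = [
    "ServerName", "SerialNumber", "ServerAssetTag",
    "AdminName", "AdminEmail", "AdminPhone", "AdminOtherInfo"
  ];
  var attrs = config.Attributes || {};
  for (var i = 0; i < personalized.length; i++) {
    delete attrs[personalized[i]];  // harmless if the key is absent
  }
  return config;
}
```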


Write back to a new host

The write process is pretty easy and as quick as the read process. We just need to run a POST request to the same URL, containing the amended JSON response from the read call as payload. You can paste this configuration as workflow input every time or, like me, read it from a configuration element. In my case I've created a dedicated one that contains an entry for each server model, so I can read it based on the model returned by the API.

You might wonder if this configuration is already active – no. HPE uses an approach many network hardware vendors use for their configuration: there are two types of configuration, the running configuration (the one that is currently active) and the pending configuration (the one you have just changed). In our case the pending configuration automatically becomes active once the server is reset.
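As a sketch, the write-back builds a deep copy of the golden configuration, optionally re-individualizes the server name, and hands the result to the wrapper as a POST request. The endpoint path and the ServerName attribute are assumptions based on the iLO 5 Redfish layout; the payload becomes the pending configuration.

```javascript
// Sketch of the write-back step. Path and ServerName attribute are
// assumptions for iLO 5; the payload ends up as the pending configuration.
function buildBiosWriteRequest(goldenConfig, serverName) {
  // deep copy so the stored golden configuration stays untouched
  var payload = JSON.parse(JSON.stringify(goldenConfig));
  if (serverName) {
    payload.Attributes = payload.Attributes || {};
    payload.Attributes.ServerName = serverName;  // individualize per host
  }
  return {
    callmethod: "POST",
    call: "/redfish/v1/Systems/1/Bios/",  // same URL as the read call
    callcontent: JSON.stringify(payload)
  };
}
```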

When the server next boots through POST (the power-on self-test), you will notice that a remote configuration takes place and the server might automatically restart multiple times – this is the point where your configuration becomes active.



Write iLO BIOS config


Congrats, you have successfully distributed your custom configuration. This process can be adapted for other vendors. Stay tuned for updates – at least Dell is in my pipeline.

For sure this only covers a single host. Feel free to wrap this into a loop or use it as part of a higher-level workflow.
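Such a loop could look like the following sketch; runIloRequest stands in for the wrapper workflow described above and is injected as a callback here, so the same logic works whether you call it from vRO or test it standalone.

```javascript
// Sketch: roll the single-host write out across an estate. runIloRequest
// is a placeholder for the wrapper workflow (host, method, path, payload).
function distributeBiosConfig(hosts, goldenConfigJson, runIloRequest) {
  var results = [];
  for (var i = 0; i < hosts.length; i++) {
    results.push({
      host: hosts[i],
      status: runIloRequest(hosts[i], "POST", "/redfish/v1/Systems/1/Bios/", goldenConfigJson)
    });
  }
  return results;  // one status entry per host for later reporting
}
```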


All code can be downloaded as a vRO package from here:


(you need to set the iLO password in the Configuration Element section before running the workflows, otherwise they will fail)


Hardware Automation - Motivation

Automation in and around the SDDC mostly focuses on orchestrating virtual infrastructure for certain operational processes or leveraging it to deploy workloads/services as part of bigger blueprints.

From my day-to-day experience the most overlooked part is the key component that enables the SDDC in the first place – the underlying hardware.

You are totally right in saying that hardware should be treated as cattle and not as pets, but in most companies it lacks the level of automation that exists for the virtual infrastructure.

API-first claims, as made by many software vendors, have not reached the hardware-producing companies so far. Many of them have only just started providing public APIs for their components.


So, what key advantages can you expect if you invest some time into automating hardware:


  • Reduce hardware deployment times
  • Profiling customizable settings of your hardware, i.e. BIOS/UEFI settings, to ensure the same reliability and performance across your estate, which gives you
  • Fewer situations of unpredictable behavior (who would have thought about different BIOS settings when trying to track down an issue with your hypervisor?)
  • Maintain vendor supplied hardware-firmware-driver combinations
  • Ensure that rolled out firmware and configuration settings comply with the standard you have engineered and tested


Looking back at the last years, many customers were facing issues in their environment because of the configuration drifts I just mentioned. Standardization is key here. Some of you might argue that automation would make issues present everywhere, not just on single servers. That's basically correct, but in the end it is a quality problem of your engineering :) Apply the same engineering and testing efforts to your hardware configurations and you will win.

A lot of companies roll server hardware into their datacenter as it arrives from the vendor and assume the vendor has chosen the right configuration for them. How should the vendor have known what your intended workload is? But even this methodology ensures neither that the servers use the same firmware nor the same BIOS configuration. Trust me.


The following content should give you an idea of what options you have to baseline your hardware. The content will be split into two parts:


Build a hardened ESXi 6.5 image for HPE hardware

As part of the series for ESXi 6.5, this post should give you an idea of how to handle an ESXi image build in detail. No long introduction. Let's start:


  1. Get the latest ESXi 6.5 offline bundle available from the
    VMware Patch Repository (MyVMware login required)

  2. Get the required drivers and agents from HPE. First check the recipe for the right firmware and driver combinations. This may require you to update the firmware on your boxes.
    HPE ProLiant server and option firmware and driver support recipe

    Download the required drivers from the latest folder containing the esxi-650-* hierarchy
    (alternatively you could use the online depot directly, but that does not work if you have to build the image without a proper internet connection)

    The “esxi-650-devicedrivers” folder contains the right offline bundles for the drivers. Pick the ones you need for your hardware. If you have no idea how to find out what driver is required, play around a little with the “esxcfg-*” commands in the ESXi Shell. List your network and storage adapters on an existing ESXi host, ideally one installed with the vendor image, and note down the drivers in use.

    The “esxi-650-bundles” folder contains all additional agents and tooling. Just download the hpe-esxi6.5uX-bundle-* file, as it contains the hpe-smx-provider CIM provider integration you need for proper hardware monitoring. Some of the drivers are double-zipped. Just extract the first layer so you have the offline bundle. The extracted zip file should not contain any further *.zip file, but *.vib files or a vib20 folder.

  3. Setup a PowerCLI 6.5 environment on a compatible Windows machine


  1. Load the VMware ESXi vanilla image
    Add-EsxSoftwareDepot .\ESXi650-201703002.zip
  2. Clone it for further modification
    PS> Get-EsxImageProfile | Select Name
    Select the standard image. For patch offline depots you might see several image profiles. Pick the standard one without an “s” behind the number.
    New-EsxImageProfile -CloneProfile ESXi-6.5.0-20170304101-standard -Name "ESXi-650-custom-hpe-hardened"  -Vendor "schoen computing"
    The acceptance level is automatically inherited from the source image; you don't need to specify the parameter explicitly.

  3. Remove packages
    Remove-EsxSoftwarePackage -ImageProfile ESXi-650-custom-hpe-hardened -SoftwarePackage xhci-xhci
    I removed these packages for my use case:
    This list also contains drivers that are removed now but added back later in a newer version from the HPE depot.

    Attention: Not all packages can be removed in the listed order, as there are dependencies between them. If the CLI does not allow you to remove a package because it is required by another package, just remove the other one first and try again.

  4. Export the stripped image
    Export-EsxImageProfile -ImageProfile ESXi-650-custom-hpe-hardened -ExportToBundle -FilePath .\ESXi-650-custom-hpe-hardened.zip
    This image now contains only the remaining packages. I prefer to close the PowerCLI session at this point and load the exported image in a new session, as in step 1.

  5. Add HPE offline depots
    Add-EsxSoftwareDepot .\<hpe driver/bundle>.zip
    Add all downloaded and extracted zips in the way shown above.

  6. Add the HPE packages to the image
    Add-EsxSoftwarePackage -ImageProfile ESXi-650-custom-hpe-hardened -SoftwarePackage <package name>
    The package names can be taken from the offline depot zip files; these contain a folder for each package name in the vib20 folder. For my use case these packages were added:
  7. Export the final image
    Export-EsxImageProfile -ImageProfile ESXi-650-custom-hpe-hardened -ExportToBundle -FilePath .\ESXi-650-custom-hpe-hardened.zip
    Export-EsxImageProfile -ImageProfile ESXi-650-custom-hpe-hardened -ExportToIso -FilePath .\ESXi-650-custom-hpe-hardened.iso

     Keep the ZIP and store it anywhere, as you can use it for updating and extending the image.

ESXi security/hardening - ESXi image

First part of the series. As mentioned in the overview, VMware provides the newly named “Security Configuration Guide”, but it does not really address the first hands-on part of elaborating a hardened hypervisor approach. It all starts with the image we pick – it is the foundation of security. Just think of designing a bank vault for storing all the money. The holy grail – the money – is stored in the basement, and the entrance of the building above ground is highly secured by policemen at the doors and windows, but the basement has several unsecured holes for cooling, wastewater, etc. That's not what we want for the hypervisor. So what are possible holes in our ESXi image?

  • ESXi Web Services
  • ESXi UI
  • CIM Server
  • OEM management agents
  • OEM tooling
  • several other services (just check the firewall list on the Security configuration tab in ESXi)

These are services listening on the ESXi host, providing data to vCenter or other management services. Some may be wanted, others not. OK, fair enough, nice to know, but how does this relate to the ESXi image? My main focus is to strip down the ESXi image as far as possible to guarantee functionality while not offering a large attack surface. If we can remove unneeded services listening on any port, we reduce the attack surface, so an attacker has fewer possibilities to find a weakness in the system. But before removing anything, we need something to remove things from. Picking the right base image is key. So what choices do we have for a base image?

  • VMware ESXi vanilla image This is offered only on the VMware website. It has no relation to a specific hardware vendor. The integrated driver set is capable of supporting most hardware on the HCL. It does not contain any OEM agents or services.

  • OEM ESXi image For most vendors this is also offered on the VMware website and is marked as a vendor-specific image. It is built on the VMware vanilla image, with additional vendor-specific agents, drivers and tools added to support all the hardware the vendor has certified for the hypervisor version it was built for, to remotely manage the hardware via vendor management tools, and to run firmware updates for the underlying hardware from the hypervisor.

It should now be very clear what the candidates for removal are:

  • OEM management agents Don't trust any of these agents. Many of them have caused PSODs for my customers and often offer badly secured services to the outside. But be aware that a lot of these agents are bundled with the CIM integrations provided by the vendor. CIM provider integrations are something we do want in the image, so as not to lose track of hardware outages. The vendor integrations are mostly much more powerful than what VMware provides via generic interfaces.

  • Drivers in general (optional) Drivers, independent of whether they were provided by VMware or the OEM, are not really a security concern, as they are only used if a matching device is present. I like to remove the unneeded ones anyway to keep the image as clean as possible. Most customers have a static bill of materials for hardware, so it is very easy to pick the required drivers and strip out the rest.

  • OEM tooling A lot of hardware vendors provide extra tooling, for example for running firmware upgrades from the ESXi Shell or for reading configuration out of the BMC boards or BIOS. This is nice, but really unwanted. Just as I don't want to provide capabilities to bridge the isolation between hypervisor and VMs, I also don't want to do the same between hypervisor and hardware.

  • Unwanted functionality This is the most complicated part of the hardening. Choose the right default functionality: anything that is not built into the kernel can be removed. Good candidates are GUIs, like the HTML5 GUI, or the USB 3 drivers.

That should be all for now. A good question that comes up is how to remove all these drivers/agents/tools/functionality from the image. I prefer the VMware Image Builder CLI based on PowerCLI. With 6.5 you also have the option to use a Web Client GUI for it, as part of the Auto Deploy feature.

However you alter your image, please do yourself the favor and document it!

For an idea of how specific steps look in reality, please check the example for HPE hardware linked below:

Build a hardened ESXi 6.5 image for HPE hardware    

ESXi 6.5 security/hardening

ESXi Hardening – loved and hated. What do we already have in this space? VMware provides a hardening guide for all the latest versions of ESXi. It looks like with 6.5 there was a change in naming: “Security Configuration Guide”. I think it is pretty well known, but in my day-to-day work this is only one part of building a hardened hypervisor. Especially if you work with large and security-sensitive customers, you should put more brainwork into this topic. As this is a growing topic, it is best to set up a series for it:
  1. ESXi image
  2. SSL/TLS
  3. DMZ
  4. Monitoring
Stay tuned!