Following up on my last post on limiting disk I/O in vSphere, we will now see how to automate this. First off, we need some way to identify which limit a specific disk should have. You can do this in multiple ways: you could use an external source, you could use Tags, or, as we've done, Storage Policies. All our VMs and VMDKs have a Storage Policy that controls which storage they must be compliant with.
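The core of the idea is a simple lookup from a disk's Storage Policy to the IOPS limit it should have. As a rough sketch (the policy names and limit values below are made up for illustration; in practice this runs as PowerCLI against vCenter):

```python
# Hypothetical mapping from Storage Policy name to an IOPS limit.
# Policy names and values here are illustrative, not our real ones.
POLICY_IOPS_LIMITS = {
    "Bronze": 500,
    "Silver": 1000,
    "Gold": 2000,
}

def limit_for_policy(policy_name, default=-1):
    """Return the IOPS limit a disk should have based on its Storage Policy.

    -1 means 'unlimited' in vSphere's IOPS limit setting, which is a
    sensible fallback for disks with an unknown policy.
    """
    return POLICY_IOPS_LIMITS.get(policy_name, default)
```

The automation then just compares each disk's current limit to `limit_for_policy()` and corrects any mismatch.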
Recently HPE released version 4.1 of their management platform, OneView. We use OneView extensively in our environment and are always on the lookout for new functionality and features in the product. Version 4.1 comes with some promising new features: secure remote troubleshooting with Remote Technician, reduced downtime for firmware and driver updates for HPE ProLiant servers, and simplified cluster management and rolling updates. Especially the ability to schedule firmware upgrades and rolling updates on a vSphere cluster sounds exciting and is very welcome.
This post follows on from the previous one on how to control disk I/O in vSphere. That post showed how to set IOPS limits either through the UI or with PowerCLI. Even though you set the limit on individual disks, you need to work through the VM. And when you retrieve the disk limits for a VM, you'll get back limits on disks identified by the device key, not the label.
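To get human-readable output, you have to join the limits (keyed by device key) back to the disks (which carry the label). A minimal sketch of that join, with made-up device keys and limits:

```python
# Limits come back indexed by device key; the disks themselves carry the
# human-readable label. Both data sets below are made up for illustration.
limits = {2000: 1000, 2001: -1}                      # device key -> IOPS limit (-1 = unlimited)
disks = {2000: "Hard disk 1", 2001: "Hard disk 2"}   # device key -> disk label

def limits_by_label(limits, disks):
    """Translate device-key-indexed limits into label-indexed limits."""
    return {disks[key]: limit for key, limit in limits.items() if key in disks}
```

With the real data, `disks` would be built from the VM's virtual disk devices and `limits` from the retrieved IOPS settings.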
As a Service provider we need some way of preventing individual VMs from utilizing too much of our shared resources. When it comes to CPU and Memory this is rarely an issue, as we try not to over-commit these resources, at least not the Memory. For CPU, we closely monitor counters like CPU Ready and Latency to ensure that our VMs will have access to the resources they need.
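CPU Ready is reported as a summation in milliseconds accumulated over the sample interval, so to reason about it as a percentage you convert it against the interval length (the real-time chart samples every 20 seconds). A small sketch of that conversion:

```python
def cpu_ready_percent(ready_ms, interval_s=20):
    """Convert a CPU Ready summation (milliseconds of ready time accumulated
    over one sample interval) into a percentage of that interval.

    The 20-second default matches vSphere's real-time chart interval.
    """
    return ready_ms / (interval_s * 1000) * 100
```

For example, 1000 ms of ready time in a 20-second sample works out to 5%, which for most workloads is already worth investigating.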
This blog post builds on a previous post where I built a small PXE server environment for ESXi installation. In this post we will enhance the PXE install with customized kickstart files specific to the hardware we want to install. There are two new components to discuss here: the kickstart file (ks.cfg) itself, and how to point to it during PXE boot. Let's take a look at the current environment. The tftp server root is located at /var/lib/tftpboot and my images are stored as directories under this directory.
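For reference, a minimal ESXi kickstart file looks something like this (the password, network details and install target below are placeholders, not values from our environment):

```
# ks.cfg - minimal ESXi scripted install (placeholder values)
vmaccepteula
rootpw VMware1!
install --firstdisk --overwritevmfs
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --hostname=esxi01.lab.local
reboot
```

During PXE boot, the installer is pointed at the kickstart file through the `kernelopt=ks=<location>` line in the image's boot.cfg, which is what we'll customize per hardware type.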
I had the privilege of delivering three sessions at VMUG Norway this week, in Oslo, Trondheim and Bergen. Considering the extremely nice weather in Norway this week, attendance was great, and as always the discussions were valuable. My session on vSphere performance monitoring was the short version of the blog series I did about how we built our solution for monitoring vSphere performance with InfluxDB and Grafana, and how we can easily customize it by adding metrics and data sources.
I’ll be speaking at VMUG Norway’s meetings this May. As always there will be “three sessions in three cities”: Oslo on May 29th, Trondheim on May 30th, and Bergen on May 31st. The topic for my session will be how we have built our own vSphere performance monitoring solution, which I’ve also done a blog series about. The VMUG meetings are free; for more information check out https://www.vmug.com/norway. I hope you’re able to join!
At work I have done some monitoring projects, which I’ve written many blog posts about. At home I have a small vSphere environment serving partially as a lab, but it also runs some services we use at home. Of course I monitor this environment as well, and I use both InfluxDB and Grafana, as we do at work. One of my VMs runs Plex Media Server, and recently I moved my media library to a separate box running FreeNAS.
In my blog series on building a solution for monitoring vSphere performance we have scripts for pulling VM and host performance. I made some changes to those recently, mainly adding more metrics, for instance for VDI hosts. This post will be about how we included our VSAN environments in the performance monitoring. This has become a great deal easier since the Get-VSANStat cmdlet came along in recent versions of PowerCLI.
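The stats we pull end up in InfluxDB, which accepts writes as line protocol. As a sketch, here is one VSAN sample formatted that way (the measurement name, tags and values are our own made-up conventions for illustration, not anything Get-VSANStat outputs directly):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one sample as an InfluxDB line-protocol string:
    measurement,tag=... field=... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# Hypothetical sample: read IOPS for one cluster at a given nanosecond timestamp
line = to_line_protocol(
    "vsan_performance",
    {"cluster": "cluster01", "stat": "Performance.ReadIops"},
    {"value": 1250},
    1527811200000000000,
)
```

In the real scripts, the PowerCLI output is shaped into this form before being posted to InfluxDB's write endpoint.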
In our environment we run ESXi primarily on HPE ProLiant servers. We use OneView for managing the hardware itself (i.e. monitoring, firmware), but for provisioning ESXi to the servers we have been doing some of it manually and some of it with HPE Insight Control Server Provisioning (ICsp). When preparing for deployment of a new batch of servers we found that ProLiant Gen10 servers are not supported by ICsp. Furthermore, after an unofficial chat with an HPE employee, it seems that support won’t be coming anytime soon either.