The schedule builder for VMworld Europe 2018 in Barcelona is finally live and sessions can be scheduled. For the first time I will have a session at VMworld, as one of the vBrownBag/VMTN community sessions, and I’m really excited about this. It is very cool that these community sessions are available in the schedule builder and can be scheduled just like regular sessions. My session is: Realtime Performance Monitoring - For FREE [VMTN5524E]
Over the last few months we have had several issues with ESXi hosts going into a «Not responding» state. The VMs stay active and online in this scenario, but the ESXi host cannot be managed. This also affects backups, as the backup software won’t be able to reach the VMs through the APIs. Previously we have usually just restarted the management agents on the host, after which it has reconnected to vCenter and we have been able to migrate the VMs off the host.
For a long time, actually since we migrated to the VCSA in 6.5 last year, I’ve wanted to utilize the REST API in the appliance to do some monitoring of it. For several reasons I’ve had to put that on hold, one of them being that there seems to be something wrong with the back-end authentication calls. I get authentication errors on certain calls no matter which user I am logged in with (also the vsphere.
This month I was accepted as a vExpert for the first time! In total, VMware announced 233 new vExperts this summer in their second-half announcement for the program. The vExpert program is not a technical certification. VMware states: «The judges selected people who were particularly engaged with their community and who had developed a substantial personal platform of influence in those communities.» I am very proud and honored to be included in this community.
When trying to upgrade our lab vCenter from 6.5 to 6.7 this week we encountered a strange error. Our lab environment is running vSphere 6.5 on the VCSA with an external PSC. So when starting the upgrade of the PSC I got an error early in the process, while connecting to the source VCSA. Error when deploying appliance I remembered that I’d seen some strange errors before if the root password of the appliance was expired.
After the release of the new and shiny version 4.1 of HPE OneView we have tried to upgrade one of our (smaller) OneView instances. The update process is usually quite straightforward and gets better with every release. The upgrade from 3.x to 4.0 had some issues with certificate handling post-upgrade, but it was manageable. The upgrade from 4.0 to 4.1 should not be affected by the same issues, so I had high hopes for a smooth upgrade.
Following up on my last post on limiting disk I/O in vSphere, we will now see how to automate this. First off, we need some way to identify what limit a specific disk should have. You can do this in multiple ways: you could use an external source, you could use Tags, or, as we’ve done, Storage Policies. All our VMs and VMDKs have a Storage Policy that controls which storage they must be compliant with.
Recently HPE released version 4.1 of their management platform, OneView. We use OneView extensively in our environment and are always looking out for new functionality and features in the product. Version 4.1 comes with some promising new features: secure remote troubleshooting with Remote Technician; reduced downtime for firmware and driver updates for HPE ProLiant servers; simplified cluster management and rolling updates. Especially the ability to schedule firmware upgrades and rolling updates on a vSphere cluster sounds exciting and is very welcome.
This will be a post following up on the previous one on how to control disk I/O in vSphere. That post showed how to set IOPS limits either through the UI or with PowerCLI. Even though you set the limit on individual disks, you need to work through the VM. And when you retrieve the disk limits for a VM, you’ll get back limits on disks identified by the device key, not the label.
As a service provider we need some way of preventing individual VMs from utilizing too much of our shared resources. When it comes to CPU and memory this is rarely an issue, as we try not to over-commit these resources, at least not memory. For CPU we closely monitor counters like CPU Ready and latency to ensure that our VMs will have access to the resources they need.