This week I’ve been playing in our vCenter lab, testing the upgrade to 6.7 U1 and changing the deployment type to vCenter with an embedded PSC. I must say that the vCenter team has done a great job on the upgrade process over the last year. Both our migration from the Windows vCenter to the VCSA and the upgrade of a VCSA went well, and there is lots of great documentation.
Recently we received lots of new hardware destined for a customer with multiple locations world-wide. They need a robust server solution for their local production environments. The environment is small in terms of number of VMs, but the demands on it are high, and we need local hardware at the sites because the connections to them vary and are not always fast enough. Lots of racks
Recently there was a new release of Telegraf, a monitoring agent from the people behind InfluxDB. This new version, 1.8.0, comes with a plugin for vSphere, which I’m pretty excited about! Previously I’ve been using Telegraf to monitor some Linux VMs as well as my InfluxDB servers; the agent works as expected and is as easy to use as the other products in Influx’s TICK stack. If you’ve followed my blog series about building a monitoring solution for vSphere and other infrastructure components, you know that I’ve pulled metrics with PowerCLI scripts.
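As a rough sketch of what the new plugin looks like in `telegraf.conf`, something along these lines enables vSphere collection and ships the metrics to InfluxDB. The vCenter URL, credentials, and database name below are placeholders for illustration, not values from the post:

```toml
# Minimal sketch of the vSphere input plugin introduced in Telegraf 1.8.0.
# Host names and credentials are made-up examples.
[[inputs.vsphere]]
  vcenters = ["https://vcenter.example.local/sdk"]
  username = "telegraf@vsphere.local"
  password = "secret"
  # Lab-only convenience for self-signed certificates:
  insecure_skip_verify = true

[[outputs.influxdb]]
  urls = ["http://influxdb.example.local:8086"]
  database = "vsphere"
```

With defaults, the plugin collects both VM/host realtime metrics and cluster/datastore rollups; the plugin’s README documents the knobs for narrowing that down.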
The schedule builder for VMworld Europe 2018 in Barcelona is finally live and sessions can be scheduled. For the first time I will have a session at VMworld, as one of the vBrownBag/VMTN community sessions, and I’m really excited about it. It is very cool that these community sessions appear in the schedule builder and can be scheduled like any other session. My session is: **Realtime Performance Monitoring - For FREE [VMTN5524E]**
Over the last months we have had several issues with ESXi hosts going into a “Not responding” state. The VMs stay active and online in this scenario, but the ESXi host cannot be managed. This also affects backups, as the backup software cannot reach the VMs through the APIs. Previously we have normally just restarted the management agents on the host; it has then reconnected to vCenter, and we have been able to migrate the VMs off the host.
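For reference, restarting the management agents is typically done from an SSH session (or the ESXi shell) on the affected host, along these lines:

```shell
# Restart the ESXi management agents from a shell on the host.
# hostd is the host management service, vpxa is the vCenter agent.
/etc/init.d/hostd restart
/etc/init.d/vpxa restart

# Alternatively, restart all management services in one go:
# services.sh restart
```

Note that restarting the agents does not touch the running VMs, which is why they stay online while the host reconnects to vCenter.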
For a long time, actually since we migrated to the VCSA on 6.5 last year, I’ve wanted to use the REST API in the appliance to monitor it. For several reasons I’ve had to put that on hold, one of them being that something seems to be wrong with the back-end authentication calls. I get authentication errors on certain calls no matter which user I am logged in with (also the vsphere.
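For context, the kind of calls involved look roughly like this: first create a session against the CIS endpoint, then use the returned token on subsequent requests. Host name and credentials are placeholders:

```shell
# Hypothetical VCSA host and account; adjust for your environment.
# Create an API session (the response carries a token in the "value" field):
curl -k -X POST -u 'administrator@vsphere.local:password' \
  https://vcsa.example.local/rest/com/vmware/cis/session

# Use the token on later calls, e.g. overall appliance health:
curl -k -H 'vmware-api-session-id: <token>' \
  https://vcsa.example.local/rest/appliance/health/system
```

It is on calls like the latter, after an apparently successful login, that the authentication errors show up.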
This month I was accepted as a vExpert for the first time! In total, VMware announced 233 new vExperts this summer in the program’s second-half announcement. The vExpert program is not a technical certification. VMware states: _The judges selected people who were particularly **engaged with their community and who had developed a substantial personal platform of influence in those communities**._ I am very proud and honored to be included in this community.
When trying to upgrade our lab vCenter from 6.5 to 6.7 this week, we encountered a strange error. Our lab environment runs vSphere 6.5 on the VCSA with an external PSC. When starting the upgrade of the PSC, I got an error early in the process, while connecting to the source VCSA: “Error when deploying appliance”. I remembered seeing strange errors before when the root password of the appliance had expired.
After the release of the new and shiny version 4.1 of HPE OneView, we tried to upgrade one of our (smaller) OneView instances. The update process is usually quite straightforward and gets better with every release. The upgrade from 3.x to 4.0 had some issues with certificate handling post-upgrade, but they were manageable. The upgrade from 4.0 to 4.1 should not be affected by the same issues, so I had high hopes for a smooth upgrade.
Following up on my last post on limiting disk I/O in vSphere, we will now see how to automate it. First, we need some way to identify what limit a specific disk should have. You can do this in multiple ways: you could use an external source, you could use Tags, or, as we’ve done, Storage Policies. All our VMs and VMDKs have a Storage Policy that controls which storage they must be compliant with.
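A minimal sketch of the idea in PowerCLI follows: read the storage policy attached to each VMDK and map the policy name to an IOPS limit. The policy names, the limit values, and the mapping table are made-up examples, not the actual policies from the post:

```powershell
# Sketch only: policy names and IOPS values below are hypothetical.
$limits = @{ "Gold" = 2000; "Silver" = 1000; "Bronze" = 500 }

foreach ($vm in Get-VM) {
    foreach ($disk in Get-HardDisk -VM $vm) {
        # Read the storage policy attached to this VMDK
        $policy = (Get-SpbmEntityConfiguration -HardDisk $disk).StoragePolicy
        if ($policy -and $limits.ContainsKey($policy.Name)) {
            # Apply the IOPS limit mapped to that policy
            Get-VMResourceConfiguration -VM $vm |
                Set-VMResourceConfiguration -Disk $disk `
                    -DiskLimitIOPerSecond $limits[$policy.Name]
        }
    }
}
```

Driving the limit from the policy, rather than from Tags or an external list, means the limit follows the VMDK automatically whenever its policy assignment changes.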