In a previous post I described how we are setting up remote offices for a customer with two-node vSAN clusters. I meant to get this post out right after that one, but things happened… Anyway, here’s how we automated those two-node vSAN clusters. Currently we have 7 of these racks ready, with more to come. As these will be installed at distant locations, we are extra keen on knowing that they are all configured as they should be, and that the configuration is the same across these multiple locations.
I had the pleasure of giving a talk about how to monitor the vCenter Server during the VMUG Oslo meeting in December. The session was an extension of what I presented during the VMUG meetings in May and the vBrownBag session during VMworld Europe. The demos showed how we can get health status and metrics from a vCenter Server Appliance utilizing the new REST APIs shipped in 6.5 and 6.7.
This post is a (late) follow-up to a previous post I did about exploring the monitoring endpoints of the vCenter Server Appliance (VCSA), and an addition to the vSphere Performance blog series. Now we will add performance metrics and health status of the VCSA to our monitoring solution. We’ll utilize the REST APIs in vCenter, feed the data into our Influx database, and visualize it in Grafana. In vCenter we have the Appliance Management page, also referred to as the VAMI.
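As a minimal sketch of that flow, assuming a reachable VCSA with the 6.5+ REST endpoints, the Python below creates a CIS session, reads a health value, and builds a monitoring query URL. The `item.*` query-parameter names and the example metric names are best-effort assumptions from the VAMI API docs; the Influx write step is left out.

```python
import base64
import json
import ssl
import urllib.parse
import urllib.request


def create_session(base_url, user, password, ctx):
    """POST /rest/com/vmware/cis/session with basic auth; returns the session token."""
    req = urllib.request.Request(base_url + "/rest/com/vmware/cis/session", method="POST")
    creds = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["value"]


def get_health(base_url, token, ctx, subsystem="system"):
    """GET /rest/appliance/health/{subsystem}; returns a status such as 'green'."""
    req = urllib.request.Request("{}/rest/appliance/health/{}".format(base_url, subsystem))
    req.add_header("vmware-api-session-id", token)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["value"]


def monitoring_query_url(base_url, names, interval, function, start, end):
    """Build the GET URL for /rest/appliance/monitoring/query.

    The item.* parameter names follow the documented query syntax in the
    appliance API; verify them against your vCenter version.
    """
    params = [("item.names.{}".format(i + 1), n) for i, n in enumerate(names)]
    params += [("item.interval", interval), ("item.function", function),
               ("item.start_time", start), ("item.end_time", end)]
    return base_url + "/rest/appliance/monitoring/query?" + urllib.parse.urlencode(params)
```

From here the returned data points can be reshaped into Influx line protocol and posted to the database, which is what the Grafana dashboards read from.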
Lately I’ve been playing around with the Redfish-based REST API in the HPE G2 Metered and Switched Power Distribution Units. Through the API you are able to pull details about the PDU as well as various utilization data. Depending on your PDU’s capabilities you should also be able to control individual outlets. My focus has been pulling details about the PDUs and the load on the different segments.
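To illustrate, here is a stdlib-only Python sketch: a generic Redfish GET with basic auth, plus a small helper that picks per-segment load out of a power resource. The `Segments`/`CurrentAmps` property names in the helper are assumptions for illustration; walk your PDU’s `/redfish/v1/` tree to find the actual schema.

```python
import base64
import json
import ssl
import urllib.request


def redfish_get(host, path, user, password):
    """GET a Redfish resource over HTTPS with basic auth.

    PDU management interfaces typically ship self-signed certificates,
    so certificate verification is disabled here (lab use only).
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request("https://{}{}".format(host, path))
    creds = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)


def segment_loads(power_json):
    """Map segment name -> amperage from a (hypothetical) PDU power payload."""
    return {seg["Name"]: seg["CurrentAmps"] for seg in power_json.get("Segments", [])}
```

The Redfish service root at `/redfish/v1/` is unauthenticated and a good starting point for discovering which resources a given PDU firmware actually exposes.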
This article will describe how you can disable IPMI over LAN access on HPE iLO. The IPMI protocol presents a known security weakness: its authentication process requires the server to send a hash of a user’s password to the client before authentication completes, which enables offline password cracking. This is not a new vulnerability, and since it is part of the protocol specification there is no fix for it besides disabling IPMI over LAN or accepting the risk.
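Recent iLO firmware exposes this setting through the Redfish ManagerNetworkProtocol resource. A sketch, assuming the common `/redfish/v1/Managers/1/NetworkProtocol/` path (verify against your iLO; the PATCH body follows the standard Redfish schema, where `IPMI.ProtocolEnabled` controls IPMI over LAN):

```python
import base64
import json
import ssl
import urllib.request

# Typical ManagerNetworkProtocol path on iLO; confirm on your system.
NETPROTO_PATH = "/redfish/v1/Managers/1/NetworkProtocol/"


def ipmi_disable_body():
    """PATCH body that turns off IPMI over LAN in ManagerNetworkProtocol."""
    return json.dumps({"IPMI": {"ProtocolEnabled": False}})


def patch_ilo(host, user, password, path=NETPROTO_PATH, body=None):
    """Send the PATCH to the iLO; skips cert verification for self-signed iLOs."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request("https://" + host + path,
                                 data=(body or ipmi_disable_body()).encode(),
                                 method="PATCH")
    creds = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.status
```

A GET on the same resource afterwards should show `"ProtocolEnabled": false` under `IPMI`, confirming the change.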
This is a short post on how to extend the internal firmware repository in HPE OneView. The procedure is documented in the installation guide for 3.0, but if you are like me, chances are you don’t have that lying around, so I thought I’d write up a short post on it for future reference. The process is pretty straightforward:

1. Shut down the appliance
2. Extend the hard disk to 275GB*
3. Start the appliance

The repository should be extended; if not, do a manual restart.
Last week I did a session about performance monitoring at VMworld Europe in Barcelona. The session was part of the VMTN Techtalks with vBrownBag. The slides (without the video demos) and the script used in the demo are available on Github. The session was recorded and can be seen on Youtube. Thanks to all who attended the session, and to those who watched it live on Twitch or have seen the recording afterwards.
VMworld 2018 is over. As always, I’m leaving with lots of great impressions and lots of content to digest and explore further over the coming weeks. This year made it even clearer that VMware is focusing on its Cloud strategy together with partners like AWS and IBM, that vSAN is the storage solution they want you to go forward with, and that together with NSX this will be the base for the future.
This week I’ve been playing in our vCenter lab, testing the upgrade to 6.7 U1 and changing the deployment type to vCenter with embedded PSC. I must say that the vCenter team has done a great job on the upgrade process over the last year. Both our migration from the Windows vCenter to the VCSA and the upgrade of a VCSA work well, and there is lots of great documentation.
Recently we received lots of new hardware destined for a customer with multiple locations world-wide. They need a robust server solution for their local production environments. The environment is small in terms of number of VMs, but the demands on it are high, and we need local hardware at the sites as the connections to these sites vary and are not always fast enough.