This weekend we migrated to vDS. Two vDS actually: one is dv-10gbe1 and one is dv-1gbeVMLan. The 10gbe one carries just storage and vMotion traffic over their own 10Gb NICs and a pair of Brocade switches; we use NFS for storage. The dv-1gbeVMLan is a set of trunk ports to a pair of Cisco 1Gb switches with VLANs for LAN, DMZ, VoIP, etc... all the good stuff that virtual machines may need access to.
Had an issue this weekend and I'm not sure what caused it, but our vm5 host lost storage to the new NFS datastore, which was mounted by the DNS name vnxnfs. I'm not sure if it's because the NFS was mounted by DNS, or because the uplinks on that 10gig dvSwitch were set to load balance based on physical NIC load, or what. Anyway, this host was unresponsive. Most things timed out, and I even had a heck of a time vMotioning things off of it. So I connected directly to host vm5 and also to host vm4, unregistered vCenter from vm5, and registered it on vm4. But when I brought vCenter up, it had no network connectivity. I changed the port group back to dv-Server Network and clicked Connected, but it gave me a message something like invalid.portgroup.summary (or something to that effect).
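For what it's worth, with vCenter itself down, about the only way to sanity-check the NFS side is from an SSH session on the affected host. Roughly, these standard host commands show whether the mount and the path to the array are still good (the vmkernel interface name below is just a placeholder for whatever your storage vmk actually is):

    esxcli storage nfs list          (lists each NFS mount and whether the host still marks it accessible)
    nslookup vnxnfs                  (confirms the host can still resolve the array's DNS name)
    vmkping -I vmk1 vnxnfs           (pings the array over the storage vmkernel port; vmk1 is a placeholder)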
Bottom line is, when vCenter was moved manually to another host and restarted, it had no network connectivity. I ended up moving it back to vm5 after that host was rebooted, and thank goodness I still have some LAN connections on vm5 from extra NICs, because I used this host to migrate to vDS and I would place machines on it temporarily since it had both the vDS and regular vSwitches.
So again bottom line....
vCenter - no connectivity when it came back up because it can't find the dvSwitch. So should I add a new standard port group called "vCenter Network" on the management vSwitch of each vm host and just put vCenter on that? I found a blog that says you should have the dvPort group set to static binding for the ports, but I checked the dv-Server Network port group and it already says Static binding.
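If that is the right approach, my thinking is it would just be a standard port group on the management vSwitch of each host, created from the host shell so it never depends on vCenter being up. Something like this, where vSwitch0 and the VLAN ID are placeholders for whatever your management vSwitch and server VLAN really are:

    esxcli network vswitch standard portgroup add -p "vCenter Network" -v vSwitch0     (creates the port group on the standard switch)
    esxcli network vswitch standard portgroup set -p "vCenter Network" --vlan-id 10    (tags it with the server VLAN; 10 is a placeholder)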
Second question is more off topic but has to do with networking: is mounting NFS via DNS a bad thing? I gave the EMC VNX5200 storage array four IPs behind that DNS name, figuring any datastore or host could just randomly pick one of them. Well, I changed it to a single IP address now, and overnight it did not disconnect those machines.
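In case it helps anyone, remounting the datastore by IP from the host side is basically just the following; the IP, export path, and datastore name here are made-up examples, and the old mount can only be removed once nothing is running from it:

    esxcli storage nfs remove -v vnxnfs                                 (unmounts the old DNS-name-based datastore)
    esxcli storage nfs add -H 10.0.0.21 -s /export/vnxnfs -v vnxnfs     (remounts the same export by a single IP)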
Also, the 10gig storage dvSwitch was set to Route based on physical NIC load. I changed it to Route based on source MAC hash, because that's what it was when it was a regular vSwitch. So I'm not sure if THAT fixed it, or if remounting the datastore by IP address instead of DNS name fixed it. There are two active uplinks, one to each Brocade switch. The switches are not stackable but they are connected together, and the storage array has a fail-safe network, so it's only using active/standby. For our load, though, 10gig is still an upgrade; we also have an EMC NX4 on a 1gig NFS storage network on Cisco switches, which we did not migrate to vDS... it's still standard vSwitches across 5 vm hosts.
Thank you, and sorry if it's long-winded; a lot happened at once.