I have VM data and vMotion traffic sharing 2x 10Gb links.
The VM data port group uses Uplink5 & Uplink6 as active/active, with load balancing set to route based on physical NIC load.
The vmotion-1 vmk uses Uplink5 as active and Uplink6 as standby; load balancing is route based on originating virtual port.
The vmotion-2 vmk uses Uplink6 as active and Uplink5 as standby; load balancing is route based on originating virtual port. (The whole layout is summarised in the sketch below.)
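Roughly, this is the teaming layout as a sketch (the port group and uplink labels are just how I refer to them here, not pulled from the actual vSwitch config):

# Sketch of the teaming layout described above (illustrative only;
# port group / uplink names are mine, not exported from the vSwitch).
teaming = {
    "vm-data": {
        "active_uplinks": ["Uplink5", "Uplink6"],
        "standby_uplinks": [],
        "load_balancing": "Route based on physical NIC load",
    },
    "vmotion-1": {
        "active_uplinks": ["Uplink5"],
        "standby_uplinks": ["Uplink6"],
        "load_balancing": "Route based on originating virtual port",
    },
    "vmotion-2": {
        "active_uplinks": ["Uplink6"],
        "standby_uplinks": ["Uplink5"],
        "load_balancing": "Route based on originating virtual port",
    },
}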
Scenario:
A VM on host 1 runs iperf to a VM on host 2, saturating the link at 9.9 Gb/s. There is no other traffic (pre-prod system).
a) Network I/O Control disabled: vMotion 4 VMs between the hosts; some of the vMotions fail with "A general system error occurred: Migration to host <192.168.90.103> (which is the vMotion vmk address) failed with error Connection reset by peer (0xbad004b)".
b) Network I/O Control enabled, physical adapter shares set to VM data = High, vMotion = Medium: same issue.
c) Network I/O Control enabled, same shares (VM data High, vMotion Medium), plus an 8000 Mbps host limit on the VM data traffic: the vMotions have worked correctly so far (the share split I expected is sketched below).
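For context, here is how I understood the shares should split a saturated 10 Gb uplink (just a rough sketch; I'm assuming High means 100 shares and the medium/Normal setting means the default 50 shares):

# Rough sketch of how NIOC shares should divide a saturated 10 Gb/s uplink.
# Assumption: "high" = 100 shares, "medium" maps to the default Normal = 50 shares;
# shares are only supposed to matter while the physical adapter is congested.
LINK_GBPS = 10.0
shares = {"vm-data": 100, "vmotion": 50}

total = sum(shares.values())
for traffic_type, share in shares.items():
    guaranteed = LINK_GBPS * share / total
    print(f"{traffic_type}: ~{guaranteed:.2f} Gb/s under contention")

# Expected output:
#   vm-data: ~6.67 Gb/s under contention
#   vmotion: ~3.33 Gb/s under contention
# With no contention, either traffic type should be free to use the full 10 Gb/s,
# which is why I expected shares alone (without a host limit) to be enough.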
My question is: I'd like VM data to be able to use the full 10 Gb/s whenever no vMotions are running. 1) Why doesn't the physical adapter share setting alone work? 2) How can I achieve that without capping my VM data with a host limit?
Thanks.