Hi All-
I am building out a new 4 host vSAN cluster from scratch. Each host has:
- Two 10G copper NICs - will be teamed for production VM traffic
- Two 1G copper NICs - will be used for management and redundant management (separate switches)
- Four 10G fiber NICs - 2 will be teamed for vMotion, 2 will be used for vSAN (active/standby)
My question concerns the Distributed Switch. How many uplink ports should I specify when creating the switch (the default is 4)? Should I use 8 - the total number of NICs per host? If so, how should I divide them up based on the design above?
I have already attempted once to add the hosts to my newly created dvSwitch, with poor results. In the Add and Manage Hosts wizard of the dvSwitch, I added all 4 new hosts and chose "Manage Physical Adapters" and "Manage VMkernel Adapters". I then clicked "Assign Uplink" for all 8 NICs and mapped them to the physical NICs in my hosts (which I have renamed based on their purpose), and applied that to all hosts. I then assigned the vmk0 that currently resides on vSwitch0 to the new distributed port group and applied to all. What's happening is that once I click Finish, I lose management connectivity to all but one host. I don't understand what I'm doing wrong.
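For what it's worth, after losing connectivity I've been poking around from the host console (DCUI / ESXi Shell) to see where vmk0 and the uplinks ended up. Roughly the commands I'm using - from memory, so exact output may differ:

```shell
# List physical NICs and their link state (confirm which ports are actually up)
esxcli network nic list

# List VMkernel interfaces - shows which switch/portgroup vmk0 landed on
esxcli network ip interface list

# Show the distributed switch and which vmnics are attached as uplinks
esxcli network vswitch dvs vmware list

# When management is completely broken, the DCUI option
# "Network Restore Options" -> "Restore Network Settings"
# puts vmk0 back on a standard vSwitch so I can reconnect
```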
I'm really confused about how to set this switch up, assign the NICs as noted above, etc. Any help is appreciated. I've attached a pic if that helps. Thanks.