I'm running ESXi 5.5 in a lab prior to deploying to a customer, and I'm trying to make sure I deliver the best solution for them.
My VMware host has 4 NICs, all attached to a switch. Two of the NICs carry virtual machine traffic and two connect to a Synology NAS. I separated the traffic on the switch by putting the VMs on VLAN 1 and the storage on VLAN 2. The Synology NAS has 4 NICs as well, so 2 connect to VLAN 1 and 2 connect to VLAN 2. Inside the VMware host I have 2 vSwitches: the first has 2 NICs attached and carries the VM Network, and the second has the other 2 NICs attached and services the VMkernel port for the NAS connection.
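For reference, the current layout could be scripted roughly like this from the ESXi shell. The vmnic numbers and port group labels are my assumptions, not actual config from a live host:

```shell
# Assumed uplink names vmnic0-3 -- adjust to match the real host.

# vSwitch0: VM traffic; VLAN 1 separation is done on the physical switch ports
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name="VM Network" --vswitch-name=vSwitch0

# vSwitch1: storage traffic to the Synology on VLAN 2
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=Storage --vswitch-name=vSwitch1
```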
This all works fine. But I think an alternative arrangement could achieve the same result while possibly yielding greater flexibility. My question is: what are the pros and cons of each approach?
The alternative would be to use a single vSwitch in the VMware host, connect all 4 NICs to it, and tag those 4 ports on the physical switch so they all act as trunks. Then I would define the VMkernel port group as now, but place it on the single vSwitch with VLAN 2. In this scenario all 4 NICs on the Synology NAS would be on VLAN 2.
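The trunked alternative might look something like this. Again, the vmnic and port group names are placeholders; this is a sketch, not a verified configuration, and the corresponding physical switch ports would need to be 802.1Q trunks carrying both VLANs:

```shell
# One vSwitch with all four uplinks (assumed names vmnic0-3)
esxcli network vswitch standard add --vswitch-name=vSwitch0
for nic in vmnic0 vmnic1 vmnic2 vmnic3; do
  esxcli network vswitch standard uplink add --uplink-name=$nic --vswitch-name=vSwitch0
done

# Port groups separated by VLAN tag at the vSwitch (Virtual Switch Tagging)
esxcli network vswitch standard portgroup add --portgroup-name="VM Network" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=1
esxcli network vswitch standard portgroup add --portgroup-name=Storage --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=2
```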
It seems to me that this provides slightly more resiliency, in that 3 of the 4 NICs could fail and the system would still operate. That is more a theoretical advantage than a practical one, since 3 out of 4 NICs are unlikely to fail simultaneously. But it does give the storage network the option of using all 4 NICs under heavy load, and the same goes for the VM guests. In the first scenario 2 NICs are permanently dedicated to each role, so while neither can ever use more than 2 NICs, neither will ever have fewer than 2 to use.
I'm interested in feedback from people who have far more experience than me. I realize that in my lab it's all theoretical, but I do intend to deploy the eventual solution to a cluster of 3 hosts.