Hi,
I don't know how to test whether it works or not, because my ESX always presents the VMkernel port's MAC address (I have two physical cards). How can I check which card is being used?
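For reference, one way to see which physical NIC a given VM or VMkernel port is actually using is from the host shell; the commands below are a sketch assuming ESXi 5.x or later with SSH access (the <worldID> placeholder comes from the first command's output):
# Interactive: press 'n' for the network view; the TEAM-PNIC column shows the uplink each port is currently using
esxtop
# Non-interactive: find the VM's world ID, then list its ports; the output includes the active team uplink
esxcli network vm list
esxcli network vm port list -w <worldID>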
Hi,
I have a classic case in which the vDS has multiple dvPortGroups, each with an individual physical NIC as its uplink.
Case 1/2 - enabling promiscuous mode on the dvPortGroup causes flooding of packets to the other ports on the dvPortGroup, even though the source is sending packets in a back-to-back connected setup. The network performance is unbelievably low: a 10+ Mpps payload gives a throughput of 600K on the vmnic.
Case 2/2 - disabling promiscuous mode on the dvPortGroup causes 100% packet loss. No network packets flow until this security setting is enabled.
Could someone explain what might be causing this?
Thanks,
Suma
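Not an answer, but a way to observe what is described above: per-port receive rates are visible live in esxtop's network view, and the host's view of the dvSwitch configuration (including the per-port security policy) can be dumped with net-dvs. A sketch, assuming SSH access to the host; net-dvs is an unsupported tool, used here read-only, so treat its output as diagnostic only:
# Press 'n' for the network view; compare PKTRX/s on the sending port with the other ports in the same dvPortGroup
esxtop
# Dump the host-side dvSwitch state, including the security policy per port
net-dvs -l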
Hi,
I have a problem with virtual network adapter performance [vmxnet3 - 1.4.2.0] on ESXi 5.5 U3b, guest OS RHEL 7.3.
I'm sending data at a rate of 800,000 packets per second (TCP, stateless, frame size 1518) and I see up to 50% packet loss on my interfaces.
Software interrupts (ksoftirqd) are processed on a single core with CPU utilization of roughly 30-40%, yet the "pkts rx out of buf" value is not zero. What could the problem be?
-bash-4.2# ethtool -S eth1
......
Rx Queue#: 0
LRO pkts rx: 0
LRO byte rx: 0
ucast pkts rx: 164103192
ucast bytes rx: 248452226872
mcast pkts rx: 0
mcast bytes rx: 0
bcast pkts rx: 0
bcast bytes rx: 0
pkts rx out of buf: 67073153
pkts rx err: 0
drv dropped rx total: 0
err: 0
fcs: 0
rx buf alloc fail: 0
tx timeout count: 0
-bash-4.2# ethtool -g eth1
Ring parameters for eth1:
Pre-set maximums:
RX: 4096
RX Mini: 0
RX Jumbo: 2048
TX: 4096
Current hardware settings:
RX: 4096
RX Mini: 0
RX Jumbo: 256
TX: 4096
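One commonly suggested first step for a non-zero "pkts rx out of buf" counter is to grow the vmxnet3 receive rings inside the guest toward the advertised maximums and to check whether all receive processing lands on a single CPU; whether it actually removes the drops depends on where they occur. A sketch for the interface shown above (the first RX ring is already at its maximum, so only the second ring has headroom):
# Grow the second RX ring from 256 to its 2048 maximum (may briefly reset the link)
ethtool -G eth1 rx-jumbo 2048
# Confirm the new ring sizes and watch whether the out-of-buffer counter keeps climbing
ethtool -g eth1
ethtool -S eth1 | grep "out of buf"
# Check whether eth1's RX interrupts are all hitting one CPU
grep eth1 /proc/interrupts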
Hi, I'm trying to find some information on how to isolate guests in a multi-customer environment in ESXi. Google did not help; maybe I am using the wrong terminology in my searches. Or point me to some resource that I can read.
Thanks and Best Regards
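For what it's worth, the usual search terms for this are VLANs, port group isolation, private VLANs (PVLANs, distributed switch only) and traffic filtering. A minimal sketch of per-customer isolation on a standard vSwitch, using one port group per tenant on its own VLAN (names and VLAN IDs are hypothetical; the physical switch port must trunk those VLANs):
# One port group per customer, each tagged with its own VLAN
esxcli network vswitch standard portgroup add -p "Customer-A" -v vSwitch1
esxcli network vswitch standard portgroup set -p "Customer-A" --vlan-id 100
esxcli network vswitch standard portgroup add -p "Customer-B" -v vSwitch1
esxcli network vswitch standard portgroup set -p "Customer-B" --vlan-id 200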
I have an 8-node cluster. The issue I am facing is that we have 6 VMs on one particular VLAN (103), and if I migrate any of the VMs in VLAN 103 to one particular host, ESXi-06, the VM loses network connectivity.
It is not reachable, and even from the VM console, if I try to ping the gateway it shows request timed out. If I go to the networking tab under Monitor for this particular host (ESXi-06), under physical network adapters I can see VLAN 103 in the observed IP ranges. Can anyone help me with how to start on this issue?
If the VMs on VLAN 103 are migrated to any of the other hosts in the cluster, they work perfectly.
We are using a standard switch configuration.
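For anyone with the same symptom, two quick checks on the problem host from the ESXi shell: confirm the port group's VLAN tag matches the working hosts, and confirm tagged VLAN 103 frames actually arrive on the uplink (the vmnic name below is hypothetical):
# Compare the VLAN ID column for the port group against a working host
esxcli network vswitch standard portgroup list
# Capture a short sample on the uplink and inspect it (e.g. in Wireshark) for VLAN 103 tags; stop with Ctrl+C
pktcap-uw --uplink vmnic1 -o /tmp/vmnic1.pcap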
As I study for my VCP6-DCV, I'm trying to get a better understanding of dynamic vs. ephemeral port binding for dvPortGroups. After doing some research (see below), I need to confirm some things.
1) Because ephemeral ports act like ports on standard port groups and VMware refers to this as "no binding" what VMware really means is that port binding is in effect delegated to the ESXi hosts.
2) Therefore, the difference between dynamic vs. ephemeral is that in the case of dynamic ports the vdSwitch does the actual port binding (at VM power-on), but in the case of ephemeral ports the host is doing the port binding.
3) Does this mean that ephemeral ports don't count against the "Ports per distributed switch" and "Distributed virtual network switch ports per vCenter" configuration maximums?
[1] vNetwork Distributed PortGroup (dvPortGroup) configuration (1010593) (http://kb.vmware.com/kb/1010593)
[2] Configuring vNetwork Distributed Switch for VMware View (http://myvirtualcloud.net/configuring-vnetwork-distributed-switch-for-vmware-view/)
[3] Static, Dynamic and Ephemeral Binding in Distributed Switches (http://www.vmskills.com/2010/10/static-dynamic-and-ephemeral-binding-in.html)
[4] ESXi/ESX Configuration Maximums (1003497) (http://kb.vmware.com/kb/1003497)
Hi everyone, recently I had to add three VMs to our ESXi host, and I'm trying to build this the best possible way with regard to load balancing.
The initial configuration was the same as in the screenshot, except for those 3 newly added VMs. So there were vSwitch0, with the management network only, and vSwitch1, with the actual VMs (hosts). Instead of adding those three VMs to vSwitch1 (where all the other hosts reside), I thought I had better add them to vSwitch0 to make load balancing "better".
There are a total of two physical network cards connected to the server, one for vSwitch0 and another for vSwitch1.
Is it a good idea, guys, to have VMs (hosts) on the same network card that is used to manage ESXi (the management network)?
The thing is, I inherited all of this (as most of us do) and I don't want to make it worse than it was, while at the same time building anything new according to best practices. Thanks.
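For anyone weighing the same decision, it helps to first see which physical NIC backs each vSwitch and at what link speed; a quick check from the host shell:
# Each vSwitch with its uplinks and port groups
esxcli network vswitch standard list
# Link speed and status of the physical NICs
esxcli network nic list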
Hi,
I set up a distributed switch (v6.5). When I activate the health check, I get a warning for teaming and failover.
The distributed port groups are all set to "Route based on physical NIC load", and changing this does not affect the warning message.
The ESXi host is connected via two 10 Gb NICs to two different physical switches. (For various reasons, there is no LACP configured.)
The management of the ESXi host runs over two 1 Gb NICs on a vSwitch.
Why is there this warning and what can I do? I have already done a lot of tests and changes on both the VMware and the hardware side.
Regards Wolfgang
I have ESXi & vCenter 5.5 U3. I want to migrate to EVC mode and also replace two older hardware hosts. Can I create a second cluster, turn on EVC mode, and also use the same vDS? I want to migrate everything to the new cluster and then get rid of cluster 1.
Hi,
I have a couple of hosts and a vDS with two uplinks each in a LAG using Route based on IP hash; that's working fine.
I've been searching for a best-practice way of migrating from LAGs (and Route based on IP hash) to just plain physical ports (VLAN trunks) and load-based teaming (Route based on physical NIC load).
But to my surprise, I could not find a single article or blog post discussing this procedure. Perhaps it's just so easy and obvious that I should know it already.
My hosts are on 6.0 and so is my vDS.
Kind Regards
Magnus
We have a situation affecting VMs in one port group on a distributed vSwitch with 65 port groups defined. Randomly, upon reboot, the network adapter for some VMs in this port group ends up assigned a port that isn't in this port group but belongs to another port group. In other words, this port group contains ports 10-137, but recently we've had servers end up assigned ports above 4000, which are in a completely different port group. Has anyone out there experienced this type of behavior?
vCenter 6.0.0 Build 5318200
ESXi 6.0.0 Build 5572656
Hello,
I need help configuring specific adapters for VMkernel services. I'm running VCSA 6.5 and the servers are on 6.0.
I have three Dell servers. They each have a 4-port NIC. I was hoping to have one physical port used for management, one for vMotion, and one for vSAN.
It appears that all 4 ports are configured together in one port group. This physical port is untagged at the switch. All services and vSAN ping tests work, and connectivity is in place.
vmk0 - mgmt-pg
vmk1 - vsan-pg
vmk2 - vmotion-pg
Two questions:
1) How do I give vSAN and vMotion their own dedicated physical port? Do I need to create new uplink groups with one physical adapter/link each?
vmk0 - mgmt-pg = pnic #1
vmk1 - vsan-pg = pnic #2
vmk2 - vmotion-pg = pnic #3
2) Can I remove uplinks from an existing port group? I don't see the option.
Thank you,
Jay
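On question 1, one common pattern on a standard vSwitch is to keep all the vmnics as uplinks on the vSwitch but override the failover order per port group, so each VMkernel port group gets its own active uplink (optionally with another as standby). A sketch with hypothetical uplink assignments, assuming the port groups named above live on vSwitch0; the same idea applies per distributed port group via the Web Client:
# Pin each service port group to its own active uplink, keeping a fourth NIC as shared standby
esxcli network vswitch standard portgroup policy failover set -p "mgmt-pg" --active-uplinks vmnic0 --standby-uplinks vmnic3
esxcli network vswitch standard portgroup policy failover set -p "vsan-pg" --active-uplinks vmnic1 --standby-uplinks vmnic3
esxcli network vswitch standard portgroup policy failover set -p "vmotion-pg" --active-uplinks vmnic2 --standby-uplinks vmnic3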
Hi Guys,
I am a former VCP 5.0 and 5.5, but I did not renew the certification after renewing it once via the delta exam.
The last time I set up an entire vSphere infrastructure was back at my previous company in June 2012. I used a Dell EqualLogic PS4110XV (10 GbE) and Dell PowerEdge servers with 10 Gb SFP+ adapters. I remember that the original ESXi installer CD downloaded from VMware did not include drivers for the 10 Gb SFP+ adapters, and I had to download the Dell customized vSphere 5.1 installer to make it work.
I will be joining a new company soon, and they have 3 brand-new HP DL380 G9 servers that are not yet configured for virtualization. This means I will need to purchase an extra processor, upgrade the RAM to 64 GB, and add an HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter to each of the 3 servers. We will be using the Essentials Plus license for these hosts.
My question is whether I also need to download and use the HP customized vSphere 6.5 installer to get these HP servers to work. Does anyone have experience using the DL380 G9 with the HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter on vSphere 6.5? I am particularly worried about the driver for this adapter in the vSphere 6.5 installer.
As a new joiner I do not want to propose something that does not work and cause the new company to lose confidence in my abilities. As I've said, my previous company used only Dell PowerEdge servers, not HP servers.
I also do not have much experience with the HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter running iSCSI.
I will also be proposing 2 x Dell Networking X4012 switches for the 10 Gb SFP+ connections. I understand that the X4012 does not have stacking features, unlike the 8024F which I've previously used in stacking mode, but the budget does not allow for more, so we have to make do with the X4012. If I do not stack the X4012 switches, will it work? Does anyone know whether the X4012 supports jumbo frames (MTU 9000)?
Thanks, and I hope that someone can help me with these questions.
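On the jumbo frame question: whatever the switch data sheet says, it's easy to verify end to end once the vSwitch and the VMkernel/storage interfaces are set to MTU 9000; a sketch from the ESXi shell (the vSwitch, vmk and target IP are hypothetical):
# Raise the MTU on the vSwitch carrying the storage traffic
esxcli network vswitch standard set -v vSwitch1 -m 9000
# Ping with don't-fragment and a payload just under 9000 bytes; if this fails, something in the path is not passing jumbo frames
vmkping -d -s 8972 -I vmk1 192.168.50.10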
What is the purpose of an uplink in a distributed switch? As far as I know, it's used to connect a vmnic to the distributed switch; is that correct?
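For reference, an uplink on a distributed switch is essentially a placeholder slot that each host maps one of its physical vmnics into; the per-host mapping can be checked from the shell:
# Lists each distributed switch the host participates in, including which vmnics back its uplinks
esxcli network vswitch dvs vmware list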
Good day all,
I am trying to max out the performance of the NAS, but at the same time I have to be very cautious, as I have to deploy ESXi in 3 datacenters and the links between them are just 1 Gb. All that while keeping HA happy too! Ideally I would like to use the directly connected NASes on 10 Gb SFP+ ports for the VMs in their own datacenter, unless something happens and I need to migrate the VMs to another host temporarily. The NASes would then also be connected to the other ESXi hosts over the 1 Gb links, which they would use while the catastrophe is being sorted out.
The 3 ESXi hosts (Essentials Plus 6.5) have 4x 1 Gb RJ45 NICs and 1 SFP+ 10 Gb card; 2 NICs are for management and the other 2 for the VMs, while the SFP+ card is directly connected to the respective NAS. The NASes have 2x 1 Gb RJ45 NICs and 1 SFP+ 10 Gb port; the 2 NICs are for management (barely used for that, really) and NFS for shared storage, and the SFP+ ports are directly connected to the ESXi hosts.
The connections are as follows: NAS1 10Gb SFP+ port ----> ESXI1 10Gb port (on a dedicated subnet, no switching, nothing, just point-to-point)
NAS2 10Gb SFP+ port ----> ESXI2 10Gb port
NAS3 10Gb SFP+ port ----> ESXI3 10Gb port
Also the NASes are connected to all the other ESXI hosts using the server subnet NICs (2x 1Gb) for HA.
Taking this IP Address table example:
Host      Server network     Point-to-point network (local ---> peer)
ESXI1 192.168.40.20 10.0.0.1 ---> 10.0.0.4
ESXI2 192.168.40.21 10.0.0.2 ---> 10.0.0.5
ESXI3 192.168.40.22 10.0.0.3 ---> 10.0.0.6
NAS1 192.168.40.23 10.0.0.4 ---> 10.0.0.1
NAS2 192.168.40.24 10.0.0.5 ---> 10.0.0.2
NAS3 192.168.40.25 10.0.0.6 ---> 10.0.0.3
So the connections for ESXI1 and ESXI2 would be as follows:
ESXI1 to NAS1 dest. 10.0.0.4
ESXI1 to NAS2 dest. 192.168.40.24
ESXI1 to NAS3 dest. 192.168.40.25
ESXI2 to NAS1 dest. 192.168.40.23
ESXI2 to NAS2 dest. 10.0.0.5
ESXI2 to NAS3 dest 192.168.40.25
You get the picture for ESXI3, I assume.
The problem: when adding the storage, ESXi seems to derive the datastore UID from the IP address. If I try to mount the datastore on additional hosts, it fails spectacularly because it tries to connect from the other host over the unroutable subnet. Then, if I try to add the datastore using its "public" IP and the same name, it complains that a datastore with a different backing system is already added.
So I end up with loads of datastores, and the "on-the-fly" migration to another server doesn't work. Migrating the VMs works, but of course it takes time, and as I populate the hosts with VMs it will just take more and more time.
I am stuck here, I don't know what to do, and the budget is extremely tight. vSAN is out of the question (the hosts have 2 CPUs each). Isn't there a command to specify the UID, or to tell ESXi to bridge the internal 10.0.0.x network, or some sort of static forwarding or NAT? I can't think of anything; there are many ways to fool the system, but then again... that can break it too.
Open to suggestions,
Thanks
PS: a quick network topology for your amusement:
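For context on the "different backing" error: an NFS datastore's identity is derived from the NFS server address and export path as given at mount time, so mounting the same export via 10.0.0.x on one host and via 192.168.40.x on another is treated as two different datastores. A sketch of how the mounts look from the shell (the export path and label are hypothetical; this only illustrates the behaviour, it is not a fix for the routing design):
# List current NFS mounts; the Host and Share fields are what make two mounts "the same" datastore
esxcli storage nfs list
# Mounting the same export under the same label but a different server address is what triggers the conflict described above
esxcli storage nfs add -H 10.0.0.4 -s /volume1/vmstore -v NAS1-DS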
Hello everybody,
I have six ESXi servers in a cluster. All are on v6.5, including the vCenter. Everything is up to date.
Two months ago I started with a distributed switch on server1. This has worked fine in production the whole time. Now I have replaced server2 with new hardware and joined server2 as the second host on the distributed switch.
server2 is still in maintenance mode and has not yet gone productive. I rebooted server2 and then I got an error message on server1: Teaming configuration in vSphere Distributed Switch on server1 does not match the physical switch configuration in the Datacenter. Detail: No loadbalance_ip teaming policy matches.
That confuses me: I rebooted server2, and then server1 gets this message. Both servers have two 10 Gb connections to two meshed switches. We do not use LACP. The health check is completely green on both servers.
I did the reboot once again and the behavior was the same. The VMs appear to be working fine the whole time; I ran a ping on two VMs while rebooting and did not lose a single packet.
Does anyone know what is happening here?
Regards Wolfgang
Good day sirs!
We have a bottleneck with an RDS server.
Due to ESXi host overload, we cannot create a cluster for now.
There are more than 100 connections per VM.
So the problem is very high ping latency to it:
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 2015ms, Average = 247ms
Is there any way to increase bandwidth?
We have more than 2 free NICs; maybe we can use them?
ESXi 6.0
vCenter Server 6.0
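On using the free NICs: extra vmnics can be added to the vSwitch as a team, but the standard teaming policies balance per VM port or per IP hash, so a single client session still gets at most one NIC's worth of bandwidth; latency this high may also be CPU contention rather than the network. A sketch for adding an uplink anyway (the vmnic and vSwitch names are hypothetical):
# Add a spare physical NIC as another uplink on the VM traffic vSwitch
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
# Confirm the uplink list and the current teaming policy
esxcli network vswitch standard list -v vSwitch0
esxcli network vswitch standard policy failover get -v vSwitch0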
We have a new test/dev project starting that will be looking to use some of the latest vSphere software, including NSX, Horizon View, etc. Part of the new project is to investigate and use a physical tap as well as virtual taps for testing a data center deployment. We will be ordering new hardware to support the testing, but we need to spec the hardware for the number of physical NICs, RAM, etc.
In the meantime, we have a few Dell R620s in a vSphere 6.0 cluster that we are testing with. We are using an old tap called a Gigabit Copper Aggregator nTap (image below).
We will be getting a Net Optics iLink Agg 1u physical tap
We will want to capture traffic at various points coming into the datacenter
From the physical switch(es)
Traffic from a firewall or IDS
Traffic between VMs
Traffic between ESXi Hosts
Etc.
I have no experience with taps in the vSphere environment.
The questions I have are:
what is the total number of physical NICs required for using taps on an ESXi host,
the proper way to set up a physical tap, and
the proper way to set up virtual taps.
I am not sure this is correct, but as a test of the physical tap, I have currently created three virtual switches, A1, B1, and C1, tied to three physical NICs on one of the ESXi servers.
Each switch has promiscuous mode enabled.
There are VMs on the A1 and B1 switches. In the attached diagram of the nTap, I have switch A1 going to Port A on the nTap, switch B1 going to Port B on the nTap, and switch C1 going to Port A/B on the nTap.
Would this filter all traffic from A1 and B1 to C1? How is the traffic gathered? I assume with Wireshark or some appliance.
Can I remove the nTap and set up a virtual tap so I can capture traffic on C1?
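On "how is the traffic gathered": besides a capture VM running Wireshark or tcpdump on the promiscuous C1 switch, ESXi itself can capture at a physical uplink or at an individual virtual switch port with pktcap-uw, which is often enough for spot checks. A sketch, assuming SSH to the host (the vmnic and port ID values are hypothetical; stop captures with Ctrl+C):
# Capture everything arriving on a physical uplink to a pcap file, then open it in Wireshark
pktcap-uw --uplink vmnic2 -o /tmp/vmnic2.pcap
# Capture at a specific virtual switch port (port IDs are visible in esxtop's network view)
pktcap-uw --switchport 50331660 -o /tmp/port.pcap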
Hello,
For a specific application (SUSE OpenStack Cloud), I need a Linux VM with only one physical interface (eth0), multiple virtual interfaces (eth0.100, eth0.200), and the following configuration:
- eth0 sends untagged frames
- eth0.100 sends tagged frames, vlan ID 100
- eth0.200 sends tagged frames, vlan ID 200
etc
How can I implement that with dvPortGroups?
If the "VLAN type" of my dvPortGroup is "VLAN ID", only eth0 will work.
If the "VLAN type" is "VLAN trunking", only eth0.100 and eth0.200 will work.
How can I mix both tagged and untagged traffic?
This is running in a Cisco UCS environment.
Thanks in advance for your help.
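For reference, the guest-side half of this (untagged eth0 plus tagged eth0.100 and eth0.200) is just standard Linux VLAN subinterfaces; a sketch with iproute2 inside the guest, while the dvPortGroup question above is what decides whether those tags actually survive on the wire:
# Create and enable the tagged subinterfaces on top of eth0
ip link add link eth0 name eth0.100 type vlan id 100
ip link add link eth0 name eth0.200 type vlan id 200
ip link set eth0.100 up
ip link set eth0.200 up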
Hi All-
I am building out a new 4 host vSAN cluster from scratch. Each host has:
- Two 10G copper NICs - will be teamed for production VM traffic
- Two 1G copper NICs - will be used for management and redundant management (separate switches)
- Four 10G fiber NICs - 2 will be teamed for vMotion, 2 will be used for vSAN (active/standby)
My question concerns the distributed switch. How many uplink ports should I specify when creating the switch (the default is 4)? Should I use 8 - the total number of NICs per host? If so, how do I divide them up based on the above design?
I have attempted once already to add the hosts to my newly created dvSwitch, with not great results. In the Add and Manage Hosts wizard of the dvSwitch, I added all 4 new hosts and chose to "Manage Physical Adapters" and "Manage VMkernel Adapters". I then clicked "Assign Uplink" for all 8 NICs and mapped them to the physical NICs in my hosts (which I have renamed based on their purpose), then applied to all hosts. I then assigned the vmk0 that currently resides on vSwitch0 to the new distributed port group and applied to all. What happens is that once I click Finish, I lose management connectivity to all but one host. I don't understand what I'm doing wrong.
I'm really confused about how to set this switch up, assign NICs as noted above, etc. Any help is appreciated. I've attached a picture if that helps. Thanks.
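One thing that helps when management connectivity is lost during a vmk0 migration is checking from the host console (DCUI or SSH) whether vmk0 actually landed on the intended distributed port group with a live uplink, before changing the design. A quick sketch (the gateway IP is hypothetical):
# Which port group each VMkernel interface is on, plus MAC and MTU
esxcli network ip interface list
# Current IPv4 settings of the VMkernel interfaces
esxcli network ip interface ipv4 get
# Can the management interface still reach its gateway?
vmkping -I vmk0 192.168.1.1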