Channel: VMware Communities : Unanswered Discussions - vSphere™ vNetwork

Consolidating clusters from multiple vCenters into one currently active vCenter (each vCenter has its own vDS)


Scenario:

I have 4 separate vCenters. Each has only a couple of clusters of 10 ESXi 6.0u3 hosts. Each vCenter has a couple of distributed switches (VM guest & vMotion), and they are all set up differently. I want to consolidate the clusters from vCenters B, C, and D into vCenter A.

 

Here is the part I am trying to figure out. The VM guest vDS on vCenter A is using ports 20 through 60 across 4 port groups. The VM guest vDS on vCenters B, C, and D are also using ports 20 through 60 in their port groups.

 

I exported the vDS config from vCenter B and imported it into vCenter A, telling it to preserve everything. A quick look at the port groups in the imported VM guest vDS showed they had been given ports 61 through 101. I then created the cluster and connected the first host from vCenter B to vCenter A. At this point the VMs on the host are still running, but the ESXi host is not connected to the vDS I imported. I went in through the web client and reconnected the host to the imported VM guest vDS. The VMs immediately lose network connectivity (the VM NIC shows "Invalid backing"). I am thinking this is because the VMs are looking for ports in the 20 through 60 range, while the imported vDS was assigned port numbers 61 through 101.

 

My theory is that the VMs are not reconnecting to the port groups because they can't find the port number they were using on the other vCenter. Does that sound right?

 

I have read some articles that say I should connect all the ESXi hosts to the new vCenter and then import the vDS. I assume I would run into the same issue: the imported vDS will grab the next set of ports in the range and I will be back to square one, with the VMs disconnected from the network because the port numbers don't match up.

 

Am I missing something, or am I up the creek without a paddle and will have to migrate hundreds of VMs over to the port groups using the web client migration tool?

 

NOTE: Most of the VMs in my environment have multiple NICs for the prod network, enterprise backup and restore, and NAS, so this would be a very tedious process.
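For anyone wanting to check where each vNIC actually points before and after the import, here is a minimal diagnostic sketch from the ESXi shell (the world ID below is a placeholder; take real ones from the first command's output):

esxcli network vm list                  # lists running VMs with their networking world IDs
esxcli network vm port list -w 69632    # placeholder world ID; shows each vNIC's Portgroup and DVPort ID

The DVPort ID column should reveal whether a vNIC is still bound to a port number in the 20-60 range that no longer exists in the imported port group.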


A vSphere networking limitation I discovered


Hello Everyone,

 

Thank you for your time. Today I want to discuss some limitations I discovered during the deployment of my environment, and I would love to have your feedback. This is the big picture:

 

Hardware:

I have 5 hosts running ESXi 6 Enterprise Plus, each with 4 vmnics, and 1 vCenter Standard.

 

Networking:

I have 4 physical switches, used respectively for Management, vMotion, Storage, and Production only. A router is connected to every physical switch.

I have a vDS with 4 port groups (Management, vMotion, Storage, Production). Each port group uses a dedicated vmnic linked to a dedicated physical switch.

The TCP/IP configuration is set up for the vMotion, Provisioning, and Default stacks.

 

ESXi configuration:

Each ESXi vmnic is connected to a physical switch.

 

The limitation:

Going back to networking principles: normally, when you configure a vmkernel port with an IP, that subnet should be added to the routing table, and there should be no need to cross a router if the destination is in the same subnet, right? So I did some CLI work on the ESXi hosts:

I found that the host has only one default route, which is on the management network. So whenever I try to connect to a datastore, for example, it uses the management network, even though I have a dedicated vmnic for that purpose and the storage provider is in the same subnet on the same switch. The ESXi host does not know how to reach it based on its routing table.

 

Solution (or DIY): manually adding static routes, as sketched below...
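For reference, a minimal static-route sketch from the ESXi shell (the subnets and addresses are made-up placeholders). One caveat: traffic to a destination inside a vmkernel port's own subnet should already leave through that vmk without any route entry; the single default gateway per TCP/IP stack only comes into play for destinations outside every vmk subnet:

esxcli network ip route ipv4 list                                 # inspect the host routing table
esxcli network ip route ipv4 add -g 10.10.30.1 -n 10.10.40.0/24   # placeholder gateway/subnet: static route for off-subnet storage
vmkping -I vmk2 10.10.30.50                                       # placeholder vmk/IP: force a ping out of a specific vmkernel port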

 

Please share your opinions. Maybe I'm missing something important. The final goal is to use physical switch 1 to reach the storage, physical switch 2 to do the vMotion, and so on.

How to migrate a kernel port from VSS to VDS


My server has only one physical NIC. How do I migrate the management network (vmkernel port) from a VSS to a VDS?
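As far as I know, the supported path is the vSphere Web Client's "Add and Manage Hosts" wizard on the VDS, which moves the physical NIC and the vmkernel port in a single transaction so connectivity is never lost. If it must be done by hand, a rough sketch from the host console follows; the dvPort IDs and addresses are placeholders, and with a single NIC this should only be run from the DCUI/local console, never over SSH, since any slip cuts you off from the host:

esxcfg-vswitch -U vmnic0 vSwitch0            # unlink the only pNIC from the standard switch
esxcfg-vswitch -P vmnic0 -V 16 dvSwitch      # attach it to a free uplink dvPort (16 is a placeholder)
esxcli network ip interface remove -i vmk0   # drop the old vmkernel port...
esxcli network ip interface add -i vmk0 --dvs-name dvSwitch --dvport-id 10   # ...and recreate it on the VDS (dvPort 10 is a placeholder)
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.10 -N 255.255.255.0   # placeholder IP/netmask
esxcfg-route 192.168.1.1                     # restore the default gateway (placeholder)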

A question from a newbie - Distributed Switch


Hi everyone, I am studying for the VCP6-DCV Foundations exam and I have a question regarding Distributed Switch configuration.

 

Is it possible to have multiple port groups, each connected to a different uplink, within a single vSwitch?

 

 

i.e., let's assume that we have 3 port groups: PG1 has 2 VMs, PG2 has 2 VMs, and PG3 has a vmkernel port.

 

Is it possible to connect PG1 to uplink1, PG2 to uplink2, and PG3 to uplink3?
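For context, what I have in mind is the kind of per-port-group teaming override sketched below. This is the standard-switch flavor of the command, with made-up port group and vmnic names, just to illustrate the idea (on a distributed switch the same override lives in each port group's teaming and failover policy):

esxcli network vswitch standard portgroup policy failover set -p PG1 -a vmnic1   # PG1 uses only uplink vmnic1
esxcli network vswitch standard portgroup policy failover set -p PG2 -a vmnic2   # PG2 uses only uplink vmnic2
esxcli network vswitch standard portgroup policy failover set -p PG3 -a vmnic3   # PG3 uses only uplink vmnic3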

 

I am a little bit confused,

 

please help

 

 

Thanks

 

Mohammed

Is LACP/etherchannel


What network and VMware configuration comes closest to LACP/EtherChannel capability in terms of redundancy if a standard vSwitch is used?

vSphere 6 is the version being used.
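For a standard vSwitch, the nearest equivalent is a static EtherChannel (mode on) on the physical switch paired with the "Route based on IP hash" teaming policy, since a standard vSwitch cannot speak LACP itself; dynamic LACP requires a distributed switch. A minimal sketch, with the vSwitch name assumed:

esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash   # pair with a static (mode on) EtherChannel on the switch
esxcli network vswitch standard policy failover get -v vSwitch0             # confirm load balancing and NIC order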

Problem connecting a VM to another network range


Hi, this is my plan: I want to connect my VoIP server (192.168.113.6) to my FXO (192.168.114.2) through my working tunnel. Every NIC in my ...113.0/24 network can ping any device in ...113.0/24, plus the FXO in the ...114.0/24 network, and vice versa, except my VoIP server. My VoIP server VM can ping all NICs in the ...113.0/24 network and be pinged back from them, but it cannot ping 192.168.114.2 or be pinged by my FXO. My VoIP server runs on ESXi and is connected to a vSwitch. What can I do about this? Is there any firewall in ESXi that is dropping my packets, or something else?
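For what it's worth, ESXi has no firewall in the VM traffic path; the host firewall only filters vmkernel traffic. The only vSwitch-level knobs are the security policy settings, which can be checked with the sketch below (vSwitch name assumed). Those settings reject frames rather than filter by subnet, so a one-way failure toward another subnet usually points at a missing gateway or a tunnel/NAT rule on the VM or router rather than at ESXi:

esxcli network vswitch standard policy security get -v vSwitch0   # shows the Promiscuous / MAC-change / Forged-transmit policy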

thanks

[Attached network diagram: voip problem.jpg]

Initial LACP Configurations with New Hosts


Good afternoon,
I am trying to understand the best course of action when installing a new ESXi host and then planning to move it to the VDS leveraging an LACP LAG. The migration to the VDS and all of that is really straightforward. My question revolves around the "pre-configured" ports on a Cisco switch that are already both in the LACP bundle. Let's use one host as a crude example.

 

I install ESXi, set the IPs and VLAN tag, and then cable in my 10G links to my ToR switches. Since the host is not online yet and not part of vCenter or the VDS, how would I get this host online if both uplinks were in a pre-configured LACP configuration?

 

Would I just cable in one 10G link and then set the VSS to route based on IP hash? Then swing that interface over to the VDS and LACP? I've been thinking about this quite a bit today and am curious whether there is a way to achieve this without disabling half of that LACP config on the switch and making it a plain old trunked interface. Any input would be appreciated.
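One relevant detail: with the switch ports in LACP active mode, a port that receives no LACPDUs is typically held out of the bundle, which is why a freshly installed host with a plain VSS often passes no traffic at all on those ports. Once the host is on the VDS LAG, bundling can be confirmed from the ESXi shell:

esxcli network vswitch dvs vmware lacp status get   # per-uplink LACP state; look for "State: Bundled"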

Help Needed for Network Load Sharing (vSwitch or DVS)


Hi everybody,

 

Hoping to get some advice about network load balancing using a vSwitch or Distributed Switch (DVS; not sure what the proper abbreviation for this is now). Some info about the environment first:

  • New deployment of vSphere 6.5 with Enterprise Plus license
  • Several servers (same specs) with multiple 10Gb ports
  • Stacked 2x Dell S4048-ON switches

 

Part of the requirements is to use active-active multi-NIC for all network traffic. We're not expecting link aggregation in the sense of 2x 10Gb links = one 20Gb link, but we want to see network traffic from multiple sources flow through different ports in and out of the ESXi hosts.

 

I configured several DVS with 2x 10Gb ports for some vSphere services (NFS, vMotion, Prod, etc.) but observed the following in esxtop:

 

  1. DVS + LACP (1 LAG with 2 uplinks; route based on ip, tcpip, virtual port) + Switch LACP = only 1 port gets network traffic (port failover works)
  2. DVS + LACP (same LAG as #1) + NO switch LACP/portchannel (plain switchport) = connection lost (cannot even ping)
  3. DVS + 2 Active Uplinks (route based on IP hash; not LACP LAG) + Switch LACP = only 1 port gets network traffic (port failover works)
  4. DVS + 2 Active Uplinks (route based on IP hash; not LACP LAG) + NO switch LACP/portchannel (plain switchport) = 2 ports used (port failover works)

 

I need some configuration advice on how to get active-active multi-NIC on the virtualization hosts.
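Two notes that may explain the list above: scenario 2 fails because a LAG hashes across, and sends LACPDUs on, links that plain switch ports do not treat as one logical link, and a single busy port in scenarios 1 and 3 is expected when only a few flows are present, since every hash-based policy pins each flow to exactly one uplink. With Enterprise Plus, "Route based on physical NIC load" on the DVS, with plain trunk ports and no port channel on the switch, is the usual way to get both uplinks used without LACP. Whichever policy is chosen, per-uplink counters can confirm whether both links carry traffic:

esxcli network nic stats get -n vmnic0   # packet/byte counters for the first 10Gb uplink
esxcli network nic stats get -n vmnic1   # compare with the second uplink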


New driver for Xeon D - "VMware ESXi 6.0 ixgbe 4.5.3 NIC Driver for Intel Ethernet Controllers 82599,x520,x540,x550,and x552"


The 4.5.3 driver download page surfaced today, with the driver VIB bundle ixgbe-4.5.3-2494585-6925533.zip, dated October 23, 2017.

Curious how testing is going for other folks. See my (unofficial) tinkering, detailed here.
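For anyone trying it, a minimal install sketch (the datastore path is a placeholder; put the host in maintenance mode first and reboot afterwards):

esxcli software vib install -d /vmfs/volumes/datastore1/ixgbe-4.5.3-2494585-6925533.zip   # placeholder path to the offline bundle
esxcli network nic get -n vmnic0   # after the reboot, confirm the driver version on a 10GbE port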

UDP performance of a connection restricted to under 1Gbps between ESXi 6.0 servers


Hi,

 

I have 2 ESXi 6.0 servers. There is one Debian Linux virtual machine on each of these servers. The Linux virtual machines are connected by 10-Gigabit X540-AT2 network adapters (back to back). Single-connection UDP throughput does not go beyond 850Mbps. However, multiple UDP connections can go beyond 850Mbps (with 3 connections it is 3 * 850 Mbps). The maximum throughput achieved is around 4Gbps (a single TCP connection, or multiple UDP connections). A standard switch is created on each ESXi server for the network adapter.

 

Why is single-connection UDP throughput limited to 850Mbps, whereas a single TCP connection can reach 4Gbps?
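One thing worth ruling out is the benchmark tool itself: iperf3, for instance, paces UDP to roughly 1 Mbps unless told otherwise, and a single UDP stream is driven by a single sender thread, so it cannot spread across vCPUs the way multiple streams can. A quick sketch, assuming iperf3 is installed in both guests (the IP is a placeholder):

iperf3 -s                        # in the receiving VM
iperf3 -c 10.0.0.2 -u -b 0       # single UDP stream with pacing disabled (-b 0)
iperf3 -c 10.0.0.2 -u -b 0 -P 4  # four parallel UDP streams for comparison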

LLDP not working with Intel X710 cards


Hi. Is anyone else having issues with LLDP not working on Intel X710 10Gb cards?

 

We just got a batch of Dell PowerEdge R730s with a dual-port Intel X710 card and a quad-port motherboard card, which is a dual-port Intel I350 and a dual-port Intel X710 jammed onto one card. LLDP works for the 1Gb ports on those, but those use a different driver, obviously. The 10Gb ports use the i40en driver.

 

The box has the latest firmware, BIOS, drivers, etc. from Dell and VMware. And since the HCL doesn't totally agree on the driver between VMware and Dell (when does it ever), I tried both.

 

Interestingly, CDP works on these cards at another one of our sites that uses Cisco.

 

I found this link which was interesting, but no one answered the ESXi question on it.

 

X710 dropping LLDP frames? | Intel Communities
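One thing worth checking: the X710's firmware has its own LLDP agent that is known to consume LLDP frames before the OS ever sees them, which is what the Intel thread above discusses. Whether the ESXi i40en driver exposes a knob for this depends on the build; listing the module parameters shows what is available. The LLDP parameter below is an assumption based on reports about newer i40en releases, not something I can confirm for every version:

esxcli system module parameters list -m i40en              # check whether this build exposes an LLDP parameter
esxcli system module parameters set -m i40en -p LLDP=0,0   # assumed parameter name/format; reportedly disables the firmware LLDP agent per port (reboot required)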

 

Thank you for any help you can provide. Thanks!

VDS -> DVUplinks -> Vendor configuration


I can't seem to find any documentation regarding the "Vendor configuration" Allowed/Disabled selection in the DVUplinks settings page.

What function does the "Vendor configuration" setting control? (re: VDS 6.5)

 

Thanks, Eric

MTU to 9216?


Hi,

Is there a way to change the MTU to 9216?

 

ESXi is not letting the guest OS go above an MTU of 9000, even though the hardware can physically support up to 9216, in an SR-IOV scenario.

 

The guest OS asks for MTU 9216:

2017-11-17T06:49:22.089Z cpu2:36539)<6>ixgbe 0000:09:00.0: vmnic8: Guest OS requesting MTU change to 9216

2017-11-17T06:49:22.089Z cpu5:36537)<6>ixgbe 0000:09:00.1: vmnic9: Guest OS requesting MTU change to 9216

2017-11-17T06:49:22.089Z cpu9:36536)<6>ixgbe 0000:01:00.1: vmnic7: Guest OS requesting MTU change to 9216
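As far as I know, 9000 bytes is the jumbo-frame ceiling ESXi supports on vSwitches and vmkernel ports regardless of what the NIC silicon allows, so the guest's 9216 request gets clamped. The host-side maximum can be confirmed with a sketch like this (the switch, vmk, and peer IP are assumed names):

esxcli network vswitch standard set -v vSwitch0 -m 9000   # 9000 is the largest value ESXi accepts
esxcli network ip interface set -i vmk1 -m 9000           # the same ceiling applies to vmkernel ports
vmkping -d -s 8972 -I vmk1 10.0.0.2                       # placeholder IP; 8972 = 9000 minus IP/ICMP headers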

 

Thanks,

Suma

How to disable DST MAC check on ESXi?


ESXi SR-IOV question - Is it possible to disable DST MAC check on ESXi?

I tried several solutions, but they don't seem to work:

pciPassthru13.noForgedSrcAddr false

pciPassthru13.downWhenAddrMismatch false

pciPassthru13.ignoreMACAddressConflict true

pciPassthru13.noPromisc false

pciPassthru13.checkMACAddress false

 

Thanks,

Suma

How Virtual Machine Traffic Routes


This question is well explained in the community doc "Understand How Virtual Machine Traffic Routes". However, it is missing the scenario of 2 VMs on 2 different hosts but the same vDS (same port group), with both hosts connected to different physical switches.

 

For VM1 (on host 1) to reach VM2 (on host 2), will the traffic go out from a physical vmnic on host 1 and across the physical switches?
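For what it's worth, this can be observed directly: capturing on host 1's uplink while VM1 pings VM2 should show the frames leaving the host, since the vDS has no host-to-host data path of its own; inter-host traffic always rides the physical network. A capture sketch, with the uplink name assumed:

pktcap-uw --uplink vmnic0 --dir 1 -c 20   # capture 20 outbound frames on host 1's uplink vmnic0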

 

Thanks in advance.


VM traffic same ESXi Host


Hi all,

The VMs are on the same host, in Portgroup_VMs, with 10.11.11.x IPs.

Why does the traffic go through the 10-gig vmnics and not through the 1-gig NICs?

 

vSwitch0

Portgroup_VMs = VLAN 0 (none)

vmnic0 (speed 1000 Full) = observed IP range 10.11.11.1 - 10.11.11.125

vmnic1 (speed 1000 Full) = observed IP range 10.11.11.1 - 10.11.11.125

 

vSwitch1

PortGroup_iSCSIMgmt = VLAN 55

VMkernelPort_vMotion = VLAN 20

VMkernelPort_NFS = VLAN 10

vmnic2 (speed 10000 Full) = observed IP range 172.25.25.1 - 172.25.25.225 (VLAN 10)

vmnic3 (speed 10000 Full) = observed IP range 172.25.25.1 - 172.25.25.225 (VLAN 10)
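To see which uplink each VM's port is actually pinned to, rather than inferring from the observed IP ranges, the mapping can be read from the ESXi shell (the world ID below is a placeholder from the first command). Note also that traffic between two VMs on the same host and same port group never leaves the host at all:

esxcli network vm list                  # world IDs for the running VMs
esxcli network vm port list -w 69632    # placeholder world ID; the "Team Uplink" field names the vmnic in use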

 

Thanks for help

Distributed switch configuration


I am commencing my first deployment of distributed switches. I will do this in my lab. Can someone please share a configuration guide covering the basics?

vSS to vDS migration


Hello,

I have a cluster of two hosts with a vSS configuration, and I would like to migrate the vSS to a vDS.

Let's say that I have one port group on ESXi01 that is connected to vmnic0, and the same port group on ESXi02 that is connected to vmnic1, and they are configured with VLANs.

After creating the vDS and creating the port group, should I link the port group to uplink1 and uplink2 in the vDS?

So the virtual machines on ESXi01 would have connectivity to vmnic0 and vmnic1 but will only use vmnic0, since vmnic1 of ESXi01 carries other VLANs?

And the virtual machines on ESXi02 would have connectivity to vmnic0 and vmnic1 but will only use vmnic1, since vmnic0 of ESXi02 carries other VLANs?

 

Is that correct? Any suggestions?
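One point that may simplify this: the vmnic-to-uplink mapping is chosen per host when each host is added to the vDS, so ESXi01's vmnic0 and ESXi02's vmnic1 can both be mapped to Uplink1, and the port group can then keep one consistent active uplink across both hosts. The per-host mapping can be verified afterwards with:

esxcli network vswitch dvs vmware list   # run on each host; shows which vmnic backs each uplink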

Thank you, experts.

Cisco 4500 VSS and VMware


Hello all,

This is the fifth day that I have been trying to work out how to correctly configure VMware and Cisco VSS with LACP. (Previously I tried to use EtherChannel, with the same effect.) Other LACP groups (not with VMware) work perfectly on these VSS switches.

I configured two channel-groups, one for each server connected to the Cisco, and when I enable the interfaces of these groups on the second (standby) switch, I lose connectivity to one of the virtual machines: I can connect to it from one server but not from another, so LACP is not working as expected.

What am I missing in the configuration? Any help would be appreciated. My configuration is below.

 

So I have Cisco 4500-X switches in a VSS configuration with VSL links (Active/Standby). I configured port-channels and aggregated two pairs of interfaces into the corresponding channel-groups for the hosts connected to them:

 

interface Port-channel5

switchport

switchport trunk allowed vlan 200

switchport mode trunk

mtu 9000

spanning-tree portfast trunk

spanning-tree bpduguard enable

!

interface Port-channel7

switchport

switchport trunk allowed vlan 200

switchport mode trunk

mtu 9000

spanning-tree portfast trunk

spanning-tree bpduguard enable

 

 

interface TenGigabitEthernet1/1/5

description dell1.eth1

switchport trunk allowed vlan 200

switchport mode trunk

mtu 9000

channel-protocol lacp

channel-group 5 mode active

spanning-tree portfast trunk

spanning-tree bpduguard enable

 

 

interface TenGigabitEthernet1/1/7

description dell2.eth1

switchport trunk allowed vlan 200

switchport mode trunk

mtu 9000

channel-protocol lacp

channel-group 7 mode active

spanning-tree portfast trunk

spanning-tree bpduguard enable

 

 

interface TenGigabitEthernet2/1/5

description dell1.eth2

switchport trunk allowed vlan 200

switchport mode trunk

mtu 9000

shutdown

channel-protocol lacp

channel-group 5 mode active

spanning-tree portfast trunk

spanning-tree bpduguard enable

 

 

interface TenGigabitEthernet2/1/7

description dell2.eth2

switchport trunk allowed vlan 200

switchport mode trunk

mtu 9000

shutdown

channel-protocol lacp

channel-group 7 mode active

spanning-tree portfast trunk

spanning-tree bpduguard enable

 

When the ports are enabled, I get the following status:

 

Te1/1/5   dell1.eth1         connected    trunk        full a-1000 1000BaseT

Te1/1/7   dell2.eth1         connected    trunk        full a-1000 1000BaseT

Te2/1/5   dell1.eth2         connected    trunk        full a-1000 1000BaseT

Te2/1/7   dell2.eth2         connected    trunk        full a-1000 1000BaseT

Po5                          connected    trunk      a-full a-1000

Po7                          connected    trunk      a-full a-1000

 

I also have two hosts running ESXi 6.5, where I configured a vDS with an LACP group.

Here is the LAG config; both hosts are connected to the upstream switch with the LAG. [Screenshots were attached showing the LAG config on host1 and host2, and the port group settings.]

 

And here is some LACP state output from ESXi (this looks like esxcli network vswitch dvs vmware lacp status get output):

Also a question: why are the Admin Key and Oper Key the same on the different hosts in the LACP group?

host1:

crc-DSwitch

   DVSwitch: crc-DSwitch

   Flags: S - Device is sending Slow LACPDUs, F - Device is sending fast LACPDUs, A - Device is in active mode, P - Device is in passive mode

   LAGID: 469272483

   Mode: Passive

   Nic List:

         Local Information:

         Admin Key: 9

         Flags: SP

         Oper Key: 9

         Port Number: 32769

         Port Priority: 255

         Port State: AGG,SYN,COL,DIST,

         Nic: vmnic1

         Partner Information:

         Age: 00:00:02

         Device ID: 02:00:00:00:00:0a

         Flags: SA

         Oper Key: 7

         Port Number: 23

         Port Priority: 32768

         Port State: ACT,AGG,SYN,COL,DIST,

         State: Bundled

 

         Local Information:

         Admin Key: 9

         Flags: SP

         Oper Key: 9

         Port Number: 32768

         Port Priority: 255

         Port State: AGG,SYN,COL,DIST,

         Nic: vmnic0

         Partner Information:

         Age: 00:00:04

         Device ID: 02:00:00:00:00:0a

         Flags: SA

         Oper Key: 7

         Port Number: 7

         Port Priority: 32768

         Port State: ACT,AGG,SYN,COL,DIST,

 

         State: Bundled

host2:

crc-DSwitch

   DVSwitch: crc-DSwitch

   Flags: S - Device is sending Slow LACPDUs, F - Device is sending fast LACPDUs, A - Device is in active mode, P - Device is in passive mode

   LAGID: 469272483

   Mode: Passive

   Nic List:

         Local Information:

         Admin Key: 9

         Flags: SP

         Oper Key: 9

         Port Number: 32768

         Port Priority: 255

         Port State: AGG,SYN,COL,DIST,

         Nic: vmnic0

         Partner Information:

         Age: 00:00:04

         Device ID: 02:00:00:00:00:0a

         Flags: SA

         Oper Key: 5

         Port Number: 21

         Port Priority: 32768

         Port State: ACT,AGG,SYN,COL,DIST,

         State: Bundled

 

         Local Information:

         Admin Key: 9

         Flags: SP

         Oper Key: 9

         Port Number: 32769

         Port Priority: 255

         Port State: AGG,SYN,COL,DIST,

         Nic: vmnic1

         Partner Information:

         Age: 00:00:01

         Device ID: 02:00:00:00:00:0a

         Flags: SA

         Oper Key: 5

         Port Number: 5

         Port Priority: 32768

         Port State: ACT,AGG,SYN,COL,DIST,

         State: Bundled
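Two tentative observations on the output above: the Admin Key and Oper Key identify the LAG locally on each device, so the same value (9) on both hosts simply means both hosts use the same LAG definition from the shared vDS, while the partner keys (7 and 5) are the switch's own keys, one per port-channel. Also, both hosts report "Mode: Passive" while the switch side is active; that combination does bundle, but flipping the vSphere LAG to Active removes any dependence on the switch initiating negotiation. State and counters can be rechecked after changes with:

esxcli network vswitch dvs vmware lacp status get   # per-uplink partner info and bundling state
esxcli network vswitch dvs vmware lacp stats get    # LACPDU counters per uplink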

Question on vSwitches and physical upstream switch failure


Hello,

 

I have a question about the behavior I would experience in this situation:

I have two NICs in the following configuration, using the default load-balancing policy, “Route based on originating virtual port”.

 

According to this article, when this policy is in use only one vmnic is active and the other one is passive:

https://kb.vmware.com/s/article/2006129

 

Let’s say that vmnic6 is the active NIC, and the upstream-facing port on physical switch S1 fails or its cable is pulled, so that switch S1 becomes completely isolated from the rest of the network.

Since Network Failure Detection is set to “Link Status Only”, my vmnic6 will not switch the traffic over to the other NIC, vmnic2, since the host is not aware that the upstream link is down: I’m guaranteed to lose all network connectivity from that host. Is that right?

 

I know I can use Link Status plus Beacon Probing, link-failure detection policies on the physical switch, etc., but I was interested in the behavior of that particular scenario above.
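For completeness, the current failure-detection setting and NIC order on the host side can be read with a one-liner (vSwitch name assumed):

esxcli network vswitch standard policy failover get -v vSwitch0   # shows the link criteria (link status vs. beacon) and active/standby NICs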

 

 

Thanks a lot for any clarifications

Fabio
