I kinda doubt it, but I'll post this anyway just in case. Maddening case at work, and I haven't touched VMware shit in-depth for a long time, so I'm pretty stumped.

==================================

We have a pre-existing FlexPod/UCS chassis with 20-ish blades, and everything runs beautifully. The general setup on the blades is a standard vSwitch0 handling the management port over two physical NICs, with every other VLAN (production, iSCSI storage, vMotion) running on a distributed vSwitch (with what appear to be 10-ish Cisco VIC Ethernet NICs serving as the "physical" adapters).

Now we need to expand this cluster with standalone pizza-box style servers, outside the FlexPod/UCS chassis. Our network team has trunked the production, iSCSI, vMotion, and management VLANs out to a Cisco Nexus switch, where we've plugged in the new servers.

If the new servers are set up strictly with standard vSwitches (management on vSwitch0, another standard vSwitch for everything else), they work perfectly, in terms of running production traffic over the prod VLAN and participating in iSCSI over the storage network. VMs that are shut down can be moved over with no issues, which proves the cabling (and, I believe, the networking setup) is correct.

HOWEVER: vMotion won't work over standard vSwitches by design, right? So we can't vMotion anything, and we NEED to use distributed switches. That's where the trouble is. If I add one of the new pizza-box hosts to the pre-existing distributed vSwitch, everything seems to look okay, but no traffic passes between the pizza-boxes and the Cisco Nexus switch. Logging into the Nexus, I can't even see the MACs of the vmkernel ports created on the new server, and I can't get iSCSI to work at all, despite proving that an identical setup works fine on identical hosts using a standard vSwitch.

Any idea where to start looking here? Does this even make sense? Thanks in advance to any distributed vSwitch gurus out there who could point me in the right direction.
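
In case it helps, here's a rough pyVmomi sketch I've been using to compare what each host reports for its dvSwitch uplinks and vmkernel ports (the vCenter address, credentials, and dvSwitch name below are placeholders, not our real values). The idea is just to see whether the new hosts actually have physical NICs bound as uplinks on their proxy switch, and to grab the vmkernel MACs I'm hunting for on the Nexus:

```python
# Rough sketch, assuming pyVmomi is installed. vCenter address, credentials,
# and the dvSwitch name are placeholders for this example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"          # placeholder
USER = "administrator@vsphere.local"       # placeholder
PASSWORD = "********"                      # placeholder
DVS_NAME = "dvSwitch-Prod"                 # placeholder dvSwitch name

ctx = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    net = host.config.network
    print(f"\n=== {host.name} ===")

    # Which physical NICs are actually bound as uplinks on this host's
    # proxy switch for the dvSwitch? First thing I'd compare between a
    # working blade and a new pizza-box host.
    for proxy in net.proxySwitch:
        if proxy.dvsName != DVS_NAME:
            continue
        uplinks = [p.split("-")[-1] for p in (proxy.pnic or [])]
        print(f"  proxy switch '{proxy.dvsName}': uplinks = {uplinks}")

    # vmkernel ports, their MACs, and whether each sits on a dvPort or a
    # standard portgroup. The MACs are what I'm looking for on the Nexus.
    for vnic in net.vnic:
        dvport = vnic.spec.distributedVirtualPort
        where = (f"dvPortgroup key {dvport.portgroupKey}" if dvport
                 else f"standard portgroup '{vnic.portgroup}'")
        print(f"  {vnic.device}: mac={vnic.spec.mac} "
              f"ip={vnic.spec.ip.ipAddress} on {where}")

view.Destroy()
Disconnect(si)
```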