
vmxnet3 trouble in OpenSolaris


After putting a new Intel X-540 10GbE network adapter into my lab system, I started to do some performance measurements.

The numbers I got for the various setups were a little confusing at first, but finally pointed me towards the vmxnet3 virtual network adapter, which was involved in one way or another in almost all of my setups.

 

Here we go.

 

Host: ESXi 5.1 build number 799733 patched with ESXi510-201212001 build number 914609

 

VMs:

Linux:           Ubuntu 12.10 64-bit

OpenSolaris: OmniOS 151.004-stable

all with the latest VMware Tools

 

Test tool: iperf 2.0.5

 

Network layout: VM1---vmxnet3---vSwitch---vmxnet3---VM2
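
The iperf runs themselves were nothing special. As a rough sketch of the kind of commands used (the IP address and the test duration are placeholders, not the exact values from my tests):

iperf -s                          (on the receiving VM)
iperf -c 192.168.100.2 -t 60      (on the sending VM)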

 

Setup 1:

VMs: 2 x Linux

 

MTU 1500: 29 Gb/s

MTU 9000: 26 Gb/s

 

 

Setup 2:

VMs: 2 x OpenSolaris

 

MTU 1500: 5 Gb/s

MTU 9000: First of all, the procedure from this KB article for enabling jumbo frames in a Solaris guest didn't work at all:

 

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2012445

 

I had to edit the "MTU=" line in /kernel/drv/vmxnet3s.conf and change the value from 1500 to 9000 at the appropriate position before any value above 1500 was accepted.
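
For anyone who wants to try the same edit, a sketch of what this looks like: the MTU line holds one comma-separated value per vmxnet3s instance, and you change the entry at the position of your interface (here the first one, for vmxnet3s0). The number of entries in the shipped file may differ from what is shown here, and the guest needs a reboot or a driver reload before the new value is picked up.

MTU=1500,1500,1500,1500,1500,1500,1500,1500,1500,1500;      (before)
MTU=9000,1500,1500,1500,1500,1500,1500,1500,1500,1500;      (after: vmxnet3s0 set to 9000)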

 

However, iperf hung and I couldn't get a throughput number.

 

My impression is that the vmxnet3 virtual network adapter for Solaris is not very well implemented.

It lacks the performance of its Linux counterpart and it is poorly documented.

 

I'm not sure whether the hang at MTU=9000 was caused by vmxnet3 or by iperf itself.
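
To rule out the guest configuration before blaming the driver, a quick sanity check on OmniOS could look something like this (interface name and address are only examples):

dladm show-link vmxnet3s0          (shows the MTU the link is actually using)
ping -s 192.168.100.2 8000         (sends 8000-byte ICMP payloads to the other VM)

This at least shows whether the new MTU is active on the link and whether large packets make it to the other VM at all.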

 

At least the simple tests above explain the performance numbers of all my other setups where the physical Intel X-540 adapter was involved.

With the X-540 passed through to the Solaris VMs I saw roughly 9 Gb/s between the VMs; with two Linux VMs it was even closer to 10 Gb/s.

In setups where one or both VMs were connected via a vmxnet3 adapter, the limiting factor was always the X-540 as long as Linux was the guest OS. For example, in a setup like this:

VM1---vmxnet3---vSwitch1---X540---X540---vSwitch2---vmxnet3---VM2

throughput was roughly 9 Gb/s.

 

With OpenSolaris the bottleneck was always the vmxnet3 adapter: throughput was 4-5 Gb/s at best, and sometimes well below that.

