
ESXi 5.1, IBM DS3200 SAS - high disk latency with 2 hosts running


Two IBM x3650 hosts running ESXi 5.1, each with 2x SAS HBA connections to an IBM DS3200 dual-controller SAN.

 

I have two arrays, both set to multi-host access for VMware.

 

With a single host powered on and running all the VMs, disk performance is normal.

 

As soon as I power on the second host, latency on host-1 soars (as high as 500 ms!), even though there are *no VMs* on host-2. If I power down host-2, latency on host-1 returns to acceptable levels. Again, there should be *no* disk activity caused by host-2, since it has no VMs and boots from internal storage.

 

I've vMotioned all VMs to host-2 and the symptoms are the same: disk latency is fine with all VMs on host-2 and host-1 powered off. If I power host-1 back on, the VMs running on host-2 grind to a near-halt due to latency.
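
(For reference, the latency can be watched live on either host, assuming local shell or SSH access is enabled - esxtop's disk device view breaks it into device vs. kernel latency:)

esxtop     <-- press 'u' for the disk device view
               watch DAVG/cmd (device) and KAVG/cmd (kernel) for the DS3200 LUNs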

 

The only oddity I've noticed is that the path for one of the LUNs is different from what I think it should be.

Host-1

hba1 - runtime name: hba1:c0:t0:L1

hba2 - runtime name: hba1:c0:t0:L1   <-- note, this shows the same runtime name as hba1

Manage Paths (active): hba1:c0:t0:L1

 

Host-2

hba1 - runtime name: hba1:c0:t0:L1

hba2 - runtime name: hba1:c0:t0:L1  <-- note, same

Manage Paths (active): hba2:c0:t0:L1  <-- notice this says hba2, not hba1 (hba1 is "standby")
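
(The same path details can be pulled from the command line on each host, again assuming shell or SSH access - these standard ESXi 5.x commands show each path's runtime name and state, and which SATP/PSP is claiming each LUN:)

esxcli storage core path list       <-- runtime name and state of every path
esxcli storage nmp device list      <-- SATP, path selection policy and working paths per LUN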

 

One additional note:

Before the second ESXi 5.1 host was added, a bare-metal Windows host was connected to the SAN: the ESXi host accessed one array and the Windows host accessed the other. In that configuration, disk latency was also normal. The Windows host has since been removed and disconnected, and I also removed its host group in Storage Manager.

 

Stumped...

