VMware Communities: All Content - VMware ESXi 5

/dev/sda, /dev/sdb disk to VM disk mapping


How do I find out which .vmdk disk is mapped to which /dev/sdXX device in a Linux guest?
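
What I have worked out so far, and would like confirmed, is that the virtual device node shown in the VM's settings (e.g. SCSI (0:1)) should match the SCSI target number the guest reports; roughly:

# inside the Linux guest: each /dev/sdX symlinks to its host:channel:target:lun ID
ls -ld /sys/block/sd*/device
# or, if installed:
lsscsi
# a disk reported as target 1 (e.g. [2:0:1:0]) should be the vmdk attached at virtual
# device node SCSI (0:1); the matching file name is in the .vmx (scsi0:1.fileName = ...)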


ESXi 5 has no ifconfig


I would like to test the ESXi 5 hypervisor. I wanted to try some of the commands I'm used to and immediately discovered that I could not use ifconfig. Where is ifconfig?
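
The closest equivalents I have found so far (please correct me if there is a better way):

# list VMkernel interfaces and their IPv4 configuration
esxcli network ip interface ipv4 get
esxcfg-vmknic -l
# list physical NICs with link state and speed
esxcli network nic list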

vSphere Client cannot connect to ESXi Server on Workstation 9


I am setting up a VM lab for testing at home. I installed Workstation 9 on my home PC running Windows XP Professional, installed ESXi inside Workstation 9, and then installed the vSphere Client on the same Windows XP Pro machine, but I cannot connect to the ESXi Server.

 

- The Windows XP machine can ping the ESXi Server

- The ESXi welcome page shows up in the browser when I type in the ESXi Server IP address

 

The error message I get when connecting to the ESXi Server is the following:

 

"vSphere Client could not connect to 192.168.159.129. An unknown connection error occurred. (The client did not receive a complete response from the server. (The underlying connection was closed: An unepected error occurred on a receive.))"

 

I have tried restarting the management agents on the ESXi host, rebooting the ESXi host, and disabling all firewall and anti-virus applications on the Windows XP Pro machine, but no luck.
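
In case I restarted the wrong thing, this is what I ran from the ESXi shell (commands as I understand them on 5.x):

# restart just hostd, the daemon the vSphere Client talks to
/etc/init.d/hostd restart
# or restart all management agents
services.sh restart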

 

Please HELP!!! Thanks in advance!!!!

ESX Host Max Network Throughput Ether-channel


Hi all,

I have a question about maximizing the throughput of my ESX servers for intensive applications like SCCM, when 1,000 clients try to get updates simultaneously.

 

I have been reading up on this and I want to know what type of load balancing I should implement.

Each server has 8 x 1 Gb NICs, and my switches have 20 Gb EtherChannel trunks between them.

 

From my understanding:

Load balancing based on Port ID limits any single VM or connection to the maximum throughput of one NIC (in my case 1 Gb).
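
If it helps the discussion: the alternative I am considering is "Route based on IP hash", which as far as I know requires a static EtherChannel on the physical switch ports facing the host; changing it from the shell would look roughly like this (vSwitch name is a placeholder, untested on my side):

# switch the vSwitch teaming policy to IP-hash load balancing (ESXi 5.x namespace)
esxcli network vswitch standard policy failover set --vswitch-name vSwitch0 --load-balancing iphash
# verify the resulting policy
esxcli network vswitch standard policy failover get --vswitch-name vSwitch0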

 

 

Can anyone shed some light on this?

Multiple .vmdk but No Snapshots


ESXi 5.1

4 VMs

 

 

One of my VMs has multiple .vmdk files, as follows:

 

sf2000-2.vmdk                    84GB

sf2000-2-000001.vmdk          84GB  eager zeroed thick

sf2000-2-000002.vmdk          84GB  eager zeroed thick

sf2000-2-000003.vmdk          84GB  eager zeroed thick

sf2000-2-000004.vmdk          84GB  eager zeroed thick

 

 

There are no snapshots and no provisioned space for any of the .vmdk files.  This one VM is using 401 GB of HD space.  Can someone tell me why I have these extra .vmdk files and is there a safe way to delete them?

 

Follow-Up:

 

I just noticed something in the properties section if it helps.  Hard Disk 1 shows as sf2000-2-000004.vmdk and the Virtual Device Node shows as IDE (0:0) Hard Disk 1.  All of my other VMs show the Virtual Device Node as SCSI (0:0) Hard Disk 1.

 

Why would the Virtual Hard Disk be showing as sf2000-2-000004.vmdk and not sf2000-2.vmdk?  Attachment of VM details below.

 

 

Follow-Up 2:

 

I experimented with moving sf2000-2.vmdk, sf2000-2-000001.vmdk, sf2000-2-000002.vmdk, and sf2000-2-000003.vmdk to another folder on the datastore.  After rebooting the VM it still worked.  Somehow, the files have gotten screwed up.  Is there a way to get the naming back to what it should be with sf2000-2.vmdk as Hard Disk 1?  That would entail having to tell ESXi that Hard Disk 1 is sf2000-2.vmdk and then renaming sf2000-2-000004.vmdk to sf2000-2.vmdk.  Any thoughts on how to do this?
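
One idea I am considering, and would like a sanity check on: clone the disk the VM is actually using to a cleanly named base disk with vmkfstools, repoint the VM at the clone, and only then delete the leftovers. Something like this (with the VM powered off; file names as in my datastore):

# from the ESXi shell, inside the VM's datastore folder
vmkfstools -i sf2000-2-000004.vmdk sf2000-2-clean.vmdk
# then edit the VM settings (or the .vmx) so Hard Disk 1 points at sf2000-2-clean.vmdk,
# power on and verify, and only afterwards delete the old sf2000-2*.vmdk files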

 

Thank You,

stmux

 


Software iSCSI HBA, NetApp, and NIC failover (*crash*)


Hello - I am a vSphere newbie and am reading through all the documentation.

 

I ran into a crash situation on an ESXi 5.1 host yesterday:

 

1. NetApp storage on backend direct-connect to SAN switch. NetApp filer heads setup to partner with each other; primary IP is 172.28.4.160/24 (no gateway). Primary IP always available even if filer head goes down (sounds similar to cluster IP).

 

2. Allocated NIC2 and NIC 6 from each ESXi host for iSCSI.

 

3. Direct connect from ESXi host NICs to SAN switch. Defined iSCSI VLAN 240 for the physical switch (switchport mode trunk).

 

4. Created dvSwitch and set MTU to 9000. Created 2 dvUplinks for the dvSwitch and assigned ESXi host's NIC2 to dvUplinkStorage and NIC6 to dvUplinkStorageR. Created 2 dvPortgroups: dvPgStorage (VLAN 240) and dvPgStorageR (redundant, VLAN 240). Ensured that teaming policy for each portgroup has only the appropriate active dvUplink and that all other uplinks are set to "Unused". dvPgStorage has dvUplinkStorage and dvPgStorageR has dvUplinkStorageR.

 

5. Created 2 VMkernels on the ESXi host. vmk1 has IP of 172.28.4.90/24, MTU 9000. vmk2 has IP of 172.28.4.100/24, MTU 9000. Verified jumbo frames enabled and working via "vmkping -4 -d -I vmk1 -s 8200 172.28.4.160".

 

6. Created iSCSI software HBA on ESXi host (vmhba34). Added VMkernel vmk1 to vmhba34 network adapters. Added iSCSI target 172.28.4.160:3260.

 

7. In NetApp System Manager, created three LUNs (ID 1, 2, and 3) and an iGroup with the ESXi host IQN added (from vmhba34 properties).

 

8. Returned to VIC and verified that all LUNs (1, 2, and 3) showed up for the ESXi host under Storage Adapters.

 

9. Under ESXi host Storage tab, added datastore for the iSCSI LUN #2.

 

10. Provisioned VM to that datastore and started the VM. No problems - provisioning was fast (at least 150MB throughput from informal measurement of 1GbE NIC using dvSwitch portgroup monitoring) and the VM start was as fast as from local disk.

 

11. To enable failover: Added *second* VMkernel vmk2 to vmhba network adapters.

 

That last step is where the crash occurred. Immediately I could not get to the running VM.

 

I was able to connect back to the failed ESX host immediately. I went to the Storage Adapter for the ESXi host and saw that LUN 2 was marked as "dead" (grayed out). Interestingly enough, the Storage Adapter still showed both VMkernels and everything else (including all other LUNs) with healthy, green icons.

 

I first removed the second VMkernel (vmk2, the one for dvPgStorageR) and then rebooted the failed ESX host.

On reboot the iSCSI storage adapter was fine - all three LUNs reported healthy, including the one that had shown up "dead" before. Unfortunately the iSCSI storage mount hadn't persisted, but I simply remounted LUN #2 and - tada - was able to restart the failed VM with no discernible ill effects.

So why did everything crash, especially since I had followed all of the rules?

I think I found the answer: the vSphere Storage docs on page 117 speak to NetApp storage systems:

<cut>
When you set up multipathing between two iSCSI HBAs and multiple ports on a NetApp
storage system, give each HBA a different iSCSI initiator name.
The NetApp storage system only permits one connection for each target and each initiator.
Attempts to make additional connections cause the first connection to drop. Therefore, a
single HBA should not attempt to connect to multiple IP addresses associated with the same
NetApp target.
</cut>

I suspect this was the problem: I can't simply add a second VMkernel to the software iSCSI HBA with a NetApp backend, because the ESXi host will try to establish sessions for *each VMkernel* for multipathing.

Since there is only one software iSCSI HBA, there is only one initiator name. To NetApp it looks like duplicate sessions from the same initiator - the first session is closed, which - BAM - resets the iSCSI session, which causes me to lose my LUN mount, and so on.

Am I correct? If so, how can I perform failover at the iSCSI level with NetApp using software iSCSI on the ESXi host? You can't create *two* software iSCSI HBAs - only one.
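
For reference, the shell equivalent of what I did in step 11 (as I understand the esxcli iSCSI namespace on 5.x) would be the port binding below; I am listing it in case I bound the ports incorrectly:

# bind each iSCSI VMkernel port to the software iSCSI adapter (vmhba34 in my case)
esxcli iscsi networkportal add --adapter vmhba34 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba34 --nic vmk2
# show the current bindings
esxcli iscsi networkportal list --adapter vmhba34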

 

Thanks for any advice. I've attached the /var/log/vmkernel.log for reference (failures up to the point where I performed a reboot).

 

 

 


ESXi 5.X drivers


I am trying to find out how to identify the default drivers included with ESXi 5.X and how they compare to third-party drivers such as the HP offline bundle. I understand the HP offline bundle has enhanced CIM providers and maybe some very specific driver features, but what does VMware include in ESXi by default? Shouldn't they have a driver built in for everything they have certified and support? What are some other advantages to using a third-party driver bundle, or even the HP version of the ESXi image? Any info or documentation is appreciated. I have done a fair amount of searching; while there is a reasonable amount of information about the HP offline bundle and how to slipstream it into your ISO, there seems to be a lack of information about the default drivers in ESXi.
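
In case it helps anyone answering: the closest I have got so far is dumping the installed VIBs and the drivers actually bound to my NICs from the shell, and comparing that against the vendor bundle contents:

# list every VIB (drivers, CIM providers, ...) in the running image, with vendor and version
esxcli software vib list
# show which driver module each physical NIC is using
esxcli network nic list
# details for a specific module (the module name here is just an example)
esxcli system module get -m igb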

HP 4300 G2 SAN and getting a 10 Gb NIC


Hey all. We are looking at getting more performance out of our SAN. Currently we have four HP DL360s running ESX 5.1 with around 40 VMs. On the network side, each ESX host has two NICs on the SAN VLAN. Our SAN is two HP 4300 G2s in a mirror, so really only one 1 Gb NIC from each is connected to the SAN VLAN. Would getting a 10 Gb NIC for each 4300 really help with performance to and from the SAN?
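
Before buying anything I was planning to check whether the existing 1 Gb links are actually saturated; as far as I know this can be watched live with esxtop on each host:

# run esxtop on the ESXi host and press "n" for the network view;
# watch the MbTX/s and MbRX/s columns on the vmnics carrying the SAN VLAN
esxtop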


VMware Tools event log 1000


I just upgraded from ESXi 5.0 to 5.1 to get rid of the constantly crashing VMware Tools on my 2008 R2 terminal server. Well, the errors are back, just in a different form.

 

At first I got this:

[ warning] [vmusr:vmtoolsd] Failed registration of app type 2 (Signals) from plugin unity.

 

By following the instructions in http://communities.vmware.com/message/2110430 I got rid of it. But then another error came up; it is raised whenever a user logs in to the terminal server:

 

<System>
<Provider Name="VMware Tools" />
<EventID Qualifiers="0">1000</EventID>
<Level>3</Level>
<Task>0</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2013-01-15T07:03:36.000000000Z" />
<EventRecordID>35299</EventRecordID>
<Channel>Application</Channel>
<Computer>pantera.----------</Computer>
<Security UserID="S-1-5-21-1238592200-306474511-1734353810-1820" />
</System>
<EventData>
<Data>[ warning] [vmusr:vmusr] Channel restart failed [1]</Data>
</EventData>
</Event>

From the terminal user's point of view nothing happens; everything works fine. But I hate having a server which fails to restart something 100 times per day.
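
The only workaround I have found so far is raising the logging threshold for the per-user vmusr process in tools.conf on the guest; I have not tested it yet, and the exact keys are my assumption from the VMware Tools logging documentation:

# C:\ProgramData\VMware\VMware Tools\tools.conf (path assumed for Windows Server 2008 R2)
[logging]
log = true
# only record errors and above from the per-user vmusr process
vmusr.level = error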

Will ESXi run on a server with more than 32 GB of memory?


I understand that ESXi 5.1 with the free license is limited to/capped at 32 GB of physical memory, but will it still run on a host that has more than 32 GB of memory?

ESX 5.1 and physical switch


Hello, I am new to this world.

 

I want to connect one ESX 5.1 host and one Windows 2008 R2 machine to an HP ProCurve 2800 series switch. What do I need to do to make them talk through the switch - do I need to configure the switch? I cannot see the ESX host from Windows when they are connected through the switch, but when they are connected point to point it works. What should I do in the switch configuration to make this setup work?
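
So you know where I stand, this is what I have already checked on the ESXi side (commands as I understand them on 5.1):

# confirm the physical NIC has link and the expected speed/duplex through the switch
esxcli network nic list
# confirm which uplinks, port groups and VLAN IDs the vSwitch is using
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list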

 

thanks in advance!!

Security Question


Hi - I hope you can answer this best practice question.

 

The company mantra here has always been never to have servers physically connected to separate security zones, thus bypassing the firewall - a good policy.

 

Now that we are looking at virtualisation, I am trying to plan the network connectivity. Obviously, the vMotion and iSCSI traffic will have its own separate, isolated networks, but I am running into a problem with management traffic.

 

I think that routing the management network through a separate vSwitch, with its own dedicated physical NIC, onto our internal office network is safer than routing it out onto the DMZ where the virtual machines communicate, as this removes the possibility of an attack vector being accessible from outside our corporate firewall. However, I am running into resistance, as policy has always been to manage servers on the DMZ through the DMZ, which for physical Windows servers with no facility for a separate management port was logical. In the virtual world this is different, with vSwitches to separate traffic, but here they are suspicious of relying on 'software' to secure such arrangements.

 

What is best practice, please?

 

Thanks

Unmapped and remapped LUN now showing as inactive


I unmapped a LUN and then remapped it to an ESX cluster, but now all the hosts are showing this LUN as inactive.

 

I tried rescanning all HBAs and refreshing, but it is still showing as inactive.
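
For reference, roughly what I ran from the shell (the device identifier below is a placeholder):

# rescan all HBAs
esxcli storage core adapter rescan --all
# check whether the device itself is attached or left in an off/detached state
esxcli storage core device list
# if it was left detached by the earlier unmap, re-attach it
esxcli storage core device set --device naa.60a98000xxxxxxxx --state=on
# then check whether the VMFS volume is mounted
esxcli storage filesystem list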

 

Any ideas?

ESXi and GTX 680


Hello.

I understand that there is no official support, but I want to try and I need help.

My test system

P8Z77-V

Core i7-3770

VGA-adapter GeForce GTX680 (display connected on DVI)

ESXi 5.1

 

I tested with Windows 7 x64 and Ubuntu 12.10.

 

With Windows 7 x64, the GeForce GTX 680 is recognized, the drivers install correctly, and Device Manager shows that the GTX 680 is working properly. However, the monitor connected to the GTX 680 goes dark and says "no signal" while the VM is running. The NVIDIA Control Center says that the monitor is not connected. Disabling the VMware VGA device in Windows Device Manager leads nowhere.

 

With Ubuntu 12.10, the whole installation is displayed on the monitor connected to the GTX 680 (!!!). But after the installation finishes, the same situation as with Windows 7 repeats.

 

What can I try to do? Can I somehow completely disable the VMware VGA device? Why does the Ubuntu installer output go through the GTX 680 and then get lost afterwards? Why does Windows 7 x64 see the GTX 680 as fully working, yet the output never switches to it?
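
One thing I have not tried yet is passing the whole card through to the VM with DirectPath I/O instead of leaving it as a host device; my (possibly wrong) understanding of how to check this from the shell:

# list the PCI devices the ESXi host sees, to find the GTX 680 and its vendor/device IDs
esxcli hardware pci list | more
# passthrough for a device is toggled in the vSphere Client under Configuration >
# Advanced Settings (DirectPath I/O); from what I have read, some cards also need an
# entry in /etc/vmware/passthru.map, which I have not experimented with yet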

 

Thank you for your answers and advice.

Sorry for my English.

Is ESXi/vSphere the right product for me?


Hi:

I am new here and don't have experience with VMware, so I respectfully ask for patience if my questions are simple or inappropriate for this particular sub-forum.

 

We have an internal test lab that uses the free version of Hyper-V.  The lab is used to test versions of our software product, though the sales and support teams also use it. Due to the lack of support for OpenGL 1.5 in Hyper-V, we need to move to another solution.

 

Our needs for the QA test lab include:

  1. Need to run isolated operating systems as VMs on a Dell R710 server with 128 GIG RAM, 2 XEON 5650 processors and 1.7TB disk place.  We may get a second machine someday but for now, there is one server for each office and there is no WAN connection or shared domain/forest between them. 
  2. We need to support about 32 VMs on the system above, give or take.  They all can be separate VMs, or some can be a differential disk sort of thing.  At any given point in time, maybe 10-12 are being used interactively.  Though some VMs run only product test automation, most are used by the engineering staff to test the product manually.  We have WinXP -> Win8, 32/64, English/Spanish/etc.
  3. **** Important **** The VMs need to support OpenGL 1.5 at a minimum.  It looks like vSphere can support 2.1.  The product under test now requires OpenGL and will not run without at least version 1.5; Hyper-V's lack of OpenGL support is the reason we are moving away from it.  Graphics performance is not important, but the ability to run the application is critical.
  4. We need the ability to restore Windows XP through Windows 8 VMs on a schedule, via some sort of script or API.  We do this now via PowerShell in Hyper-V (see the rough sketch after this list).
  5. The VMs need to be part of a domain.  We add them manually to the domain today.
  6. Users need to be able to remote desktop to these machines in a separate Window of the desktop, or similar.  The remote machine cannot completely take over their desktop.  They need to switch back and forth on demand.
  7. Need to pass a Safenet-Inc (Aladdin) HASP USB licensing dongle- currently accomplished with a DIGI AnywhereUSB device.
  8. The VMs need to run as independent machines, meaning we have several VMs today that we restore on schedule, put the daily software build on them, and run automation as a scheduled task, with no user interaction.  We need to still do this.
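
Regarding item 4 above: my rough understanding (untested, argument order to be confirmed against the built-in help) is that snapshots can be listed and reverted from the ESXi shell or over SSH with vim-cmd, for example:

# list registered VMs and their numeric IDs
vim-cmd vmsvc/getallvms
# show the snapshot tree for a VM (the id 12 is a placeholder)
vim-cmd vmsvc/snapshot.get 12
# revert to a snapshot; run "vim-cmd vmsvc/snapshot.revert" with no arguments first
# to confirm the expected parameters on your build
vim-cmd vmsvc/snapshot.revert 12 1 0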

 

 

Nice to Have

  1. A way to back-up critical machines.  Two VMs are critical as they run the test.  The rest would be considered test clients and though sad if we lose them, not the end of the world.
  2. Dynamic RAM for those operating systems supporting it.  Currently, with Win7/Win8 Enterprise, we can set RAM at 1GB and let it boost up to 32 or 64 GIG if needed.
  3. Multi-Core/VM.  We want to run more than 1 core on a VM.  We can run 4 under Server 2008 R2 SP1 Hyper-V
  4. VM creation on demand. Citrix has a Lab Manager. Is there something like this for VMware that is not overly expensive?  It seems like I need the Cloud Director Enterprise for 2x$11,000+.  That would have a hard time getting through the budget today.  I am not saying it could not be done, but it might be looked at as overkill.  Can the AutoDeploy from vSphere help?
  5. Ability to pass an USB licensing dongle via a user's client machine
  6. A nice differential disking system where we have one base OS that we can update later and not have to rebuild 15 vms.
  7. Web page where end users can request a VM
  8. Ability to use GPU pass through, ideally via the new nVidia virtual GPU coming out.

 

Current Pain Points:

  1. No OpenGL 1.5 support.
  2. Adding machines to the domain.  Time spent maintaining machines and getting Windows updates for each VM, then snapshotting again.
  3. Time required to bring new machines online.  We have to copy a VM, rename Windows name, add to domain, etc.  Currently, we have a base image which we copy, rename the Windows name, restart, add to domain, restart, get windows updates as there have been more since the original image, etc.
  4. Lab is experiencing large slow-downs since Windows 8.  The whole thing is slowing down and we have to restart the host almost weekly. 
  5. No VM creation on demand.  Many client users of the test lab are non-IT engineers and they don't understand VMs.  They will not have access to control them.  If we had a webpage where they could request a VM and have it built with an expiration, that would be great.  They currently ask the QA team for a VM, and the QA team needs to stop their tasks, locate one that is free, and tell the engineering staff which one to use.
  6. VMs that have not been restored recently grow in space and we need to restore them to dispose of the temporary avhd file.  I think this is thin provisioning in vmware.
  7. No user space quota for dynamic disk sizing.  Currently we use dynamic disks in Hyper-V and a few of the managers keep VMs around for a while and the disk size grows.

 

Thank you


vSwitch - Unicast - MS NLB


As reported in the KB article "http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1556",

unicast does not work unless some settings are changed.

Unfortunately, the settings do nothing to solve my issue.

Traffic remains on one node of the cluster. The cluster is a very simple setup with just two nodes, intended only as a failover concept and not load balanced.
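
For reference, the settings I applied per the KB (Notify Switches off and Forged Transmits accepted on the NLB VMs' port group) look roughly like this from the shell - the port group name is a placeholder and this assumes a standard vSwitch rather than a dvSwitch:

# stop the host from sending RARP/notify packets for the NLB port group
esxcli network vswitch standard portgroup policy failover set --portgroup-name NLB-PG --notify-switches false
# accept forged transmits so frames sent from the shared NLB MAC are not dropped
esxcli network vswitch standard portgroup policy security set --portgroup-name NLB-PG --allow-forged-transmits true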

 

Funnily enough, this is mentioned in

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1006778

which is what I just spent some days of my life on, and it does not even work.

 

Did anyone get it working on ESX 5.1?

thx guys

stef

HP r/t3000 and ESXi 5.0


Hello, I have a question about shutting down virtual machines on the free version of ESXi 5.0 with my HP R/T3000 UPS.

 

HP Power Manager is installed on Windows Server 2008 R2, and the UPS is connected via USB 2.0. I would love to get a script that first shuts down the virtual machines and then the ESXi host over the network. Is that even possible? And if it is, how?
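
To show the sort of thing I mean, here is a very rough sketch I pieced together from the documentation - completely untested, the VM id and timings are placeholders, and it assumes the UPS software can trigger something over SSH on the host:

#!/bin/sh
# run on the ESXi host; list the real VM ids first with: vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.shutdown 10      # ask the guest OS to shut down (needs VMware Tools)
sleep 120                            # give the guest time to finish
esxcli system maintenanceMode set --enable true
esxcli system shutdown poweroff --reason "UPS on battery"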

 

 

I'm also a noob at scripting, so please be patient with me.

SCP SSH not working between ESXi hosts


Hello,

 

I have two servers with ESXi 5. I can access both through SSH (PuTTY), but if I try to SSH from one ESXi 5 host to the other I can't connect. Ping works, and I tried to use SCP between the two, but it doesn't work either; see the error:

 

 

~ # ssh 192.168.0.251
ssh: connect to host 192.168.0.251 port 22: Connection timed out
~ # scp /tmp/testescp.txt root@192.168.0.251:/tmp
ssh: connect to host 192.168.0.251 port 22: Connection timed out
lost connection
PING 192.168.0.251 (192.168.0.251): 56 data bytes
64 bytes from 192.168.0.251: icmp_seq=0 ttl=64 time=0.157 ms
64 bytes from 192.168.0.251: icmp_seq=1 ttl=64 time=0.214 ms
64 bytes from 192.168.0.251: icmp_seq=2 ttl=64 time=0.187 ms
--- 192.168.0.251 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.157/0.186/0.214 ms

SSH is enabled.
I can connect using PuTTY or WinSCP from a Windows machine.
Does anyone have a suggestion?
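
One thing I have not tried yet: I read that outgoing SSH connections (the sshClient ruleset) are blocked by the ESXi firewall by default, which would explain the timeout. If that is right, something like this on the source host should open it - can anyone confirm?

# check the current state of the SSH-related firewall rulesets
esxcli network firewall ruleset list | grep -i ssh
# enable the outbound SSH client ruleset so ssh/scp to the other host can connect
esxcli network firewall ruleset set --ruleset-id sshClient --enabled true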

PSOD on Dell R515, BIOS 2.0.2 - ESXi 5.0 Updated


Hi,

 

Today I had a purple screen (PSOD) on a new Dell R515 with the latest BIOS (2.0.2).

We bought 2 of these machines in 2012 with BIOS 1.10.0 and had no problems.

 

Yesterday I installed two additional R515s with BIOS 2.0.2 (installed at the factory), and one of the two crashed last night.

 

Are there BIOS settings which I should disable?

 

Like DMA Virtualization ?

C1E ?

 

Power Settings to High Performance in BIOS instead of OS-Control?

 

 

On one of the new machines I will try to downgrade the BIOS and firmware to the versions of the first two machines and test it.

 

Regards Michael

Number of VMs per CPU and core


Hi all,

 

I'm currently working on ESX and ESXi via GUI administration, and I'm not that deep into ESXi virtualization. My question is how many guests I can create on a server with 2 CPUs of 4 cores each. Is there any calculation or limit on the number of guests per core or CPU? Please let me know how to plan it correctly; I'm currently running, somewhat blindly, around 10-12 guests on a 2-CPU server with 8 cores.
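
My own rough attempt at a calculation, which I would like someone to sanity check (I am treating a 4:1 vCPU-to-core ratio as an assumption that varies with workload, not a rule):

2 sockets x 4 cores = 8 physical cores
8 cores x ~4 vCPUs per core = roughly 32 single-vCPU guests before CPU is likely to become the bottleneck

So 10-12 guests on 8 cores sounds conservative, as long as CPU ready time stays low (the %RDY column in esxtop).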

 

Thank you very much.
