VMware Windows Cluster Shared Disk

The VM boot disk (and all non-shared VM disks) should be attached to a separate virtual SCSI controller with bus sharing set to none. Mixing clustered and non-shared disks on a single virtual SCSI controller is not supported, and the multi-writer flag must NOT be used. For comparison, the VHD Set guest-clustering virtual disk technology (on the Hyper-V side) has the following requirements: the virtual machines need to be powered off to add a VHD Set virtual disk; the VHD Set format is only supported when the guest operating system is Windows Server 2016; and the underlying storage must be compatible with VHD Set, i.e. Cluster Shared Volumes (CSV). The following script set contains the steps and an introduction for performing the iSCSI configuration in the guest OS. Follow the steps on each Windows cluster node to get the disks ready for cluster services such as a SQL Server Failover Cluster Instance or a Scale-Out File Server.

Guide: VMware ESX – Shared Windows Cluster Disks

Posted by wakamang on August 14, 2009


This post will guide you through the disk creation process that needs to be followed to let virtual machines use these disks in a Windows 2003 cluster environment. Clustering in Windows 2003 is possible with Windows 2003 Enterprise Edition and Windows 2003 Datacenter Edition.

For physical servers you need some kind of storage device to present one disk (LUN) to more than one server, but with VMware you can perform a little trick to present the disk to multiple virtual machines without the use of a storage device. This is ideal for test environments but not recommended or supported in production. In this article I will guide you through the creation of these shared cluster disks.


There is a big difference between presenting a disk to a virtual machine and letting it actually use that disk in a cluster environment. To cluster without a storage device you need to create an independent persistent disk for each disk you want to use in the cluster.

For example, if I want to create a SQL cluster I'll need 2 disks: one for the quorum and one for the data.

The disk setup for the nodes will, in my case, be the following:

Disk     SCSI ID   Size
System   0:0       15GB
Data     1:0       50GB
Quorum   2:0       2GB

Because of the per-controller SCSI ID limitation of 15 devices (0:0 through 0:14), you will need an extra controller for each cluster disk (the 1:0 and 2:0 IDs indicate the disk is on a different controller). ESX will add these controllers to your VM automatically.


Create the disks for the first cluster node VM

Capacity and location depend on your own needs, of course.

Warning: The SCSI IDs you use for the disks need to be the same on all nodes that will use the disk.

Check the disk settings after creation!

QUOROM

SQLDATA

To use the created disks on the cluster nodes (shared), you need to enable SCSI Bus Sharing
on the SCSI controller used by each disk: SCSI Controller 1 for 1:0 and SCSI Controller 2
for 2:0.

To start the virtual machine with the shared disks, another condition is that the disk is of the thick type
and is fully zeroed out; if it is not, starting the VM will generate an error.
You cannot change this with the UI; it needs to be done from the console, or remotely with something like PuTTY!

The command to change a disk would be the following:
vmkfstools -w [path to shared disk .vmdk]

So the command in my case would look like this:
vmkfstools -w /vmfs/volumes/4a79bdb4-35ab8aa9-73e2-0132107a3f1/CHARLIE/CHARLIE_3.vmdk


This will write zeros across the disk, showing its progress, and should end with a completion message.

Check your disk settings to make sure everything is OK.

If everything is OK follow the same procedure for each disk you want to use in a cluster.

After the configuration of the disks you'll need to modify your node's .vmx file to enable
disk sharing etc.


You can download the file using the Datastore Browser and, after modifying it, upload it to the same
location.

The part for the cluster disks should look like the following; modify the .vmx to match
the picture below (apart from the .vmdk parts, of course).

Start the first VM.

If the disks are still not configured the way they should be, the error will still occur:

When the VM starts normally, boot into Windows and open Disk Management; you will
see 2 unallocated disks.

Right-click each disk and select the Initialize Disk option that appears.

Click on OK

When finished, use the 2 disks to create 2 extended partitions.

Use all available space when creating the partitions

After creating the 2 extended partitions continue with creating 2 logical drives out of
the extended partitions.

You should assign drive letters not used by other shares / drives on the network

For example:

Q: QUOROM

S: SQLDATA
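The partitioning steps above can be sketched as a diskpart script (the disk numbers here are assumptions; check them with "list disk" first, and note that diskpart on Windows 2003 has no format command, so format the logical drives in Disk Management afterwards):

```
select disk 1
create partition extended
create partition logical
assign letter=Q
select disk 2
create partition extended
create partition logical
assign letter=S
```

Save this as a text file and run it with diskpart /s <file>, or enter the commands interactively.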


Node one should look like this after assigning the drive letters:

After this we can create the disks for the second node (or all nodes after that). Add the 2 existing disks created on Node 1, using the same IDs used on Node 1.


To make it easy (and this is recommended), you can copy the storage part out of the .vmx from Node 1 to the .vmx from Node 2.

After starting the VMs (nodes), "Manipulate file paths" events will appear, which indicates the configuration is working!


After starting Node 2, open Disk Management and you will see the exact same disks (drive letters may differ).


So now that the disks can be seen and used by both nodes, we can set up the cluster, which I will
do in a next post.


Keep in mind that Windows does not know how to manage or handle shared
disks without the Cluster Service, so don't expect to place a file on one disk from Node 1
and see it on Node 2.

A VM shared disk on Microsoft Cluster Service (MSCS) is running out of disk space. The VMs are on a single host (aka cluster in a box - CIB). I can think of two ways to expand the disk storage.

  • create a new big shared disk for the cluster, migrate the data, then change the new disk to the same drive letter as the original disk
  • extend the size of the existing shared disk

Obviously the latter seems simpler, but it requires special attention. The shared disk format in MSCS VMs must be eager zeroed thick. However, when extending an eagerzeroedthick VMDK, the extended chunk is in lazy zeroed thick format by default (see "Extending an EagerZeroedThick Disk"; in my test, vSphere 6 has the same behavior).

Here is how I extend the MSCS shared disk:

  • Power off both servers in the cluster
  • Increase the VMDK disk size. There are two ways:
    • GUI: edit the VM settings, increase the shared disk size
    • CLI: use vmkfstools -X <newsize> -d eagerzeroedthick <vmdkfile>
      • For more info about vmkfstools, see my vmkfstools Examples post
  • Using the GUI, the extended chunk will be in lazy zeroed thick format. The VM will fail to power on with the error "VMware ESX cannot open the virtual disk for clustering…"
  • There are two ways to convert the extended chunk to eagerzeroedthick format
    • Migrate the VM to another storage, and specify the eager zero thick format for the disk
    • Use vmkfstools -k <vmdkfile>
  • Once the entire shared disk is the eager zeroed thick format, the VM will be able to power on.
  • Extend the Windows partition as described in KB304736
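The last step can be sketched as a short diskpart session in the guest, following the KB article; the volume number below is an assumption, so check the "list volume" output first:

```
list volume
select volume 2
extend
```

diskpart's extend command grows the selected volume into the contiguous unallocated space that was added when the VMDK was enlarged.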