This guide provides step-by-step instructions for creating and configuring a typical single-quorum-device, multi-node server cluster that uses shared disks on servers running the Microsoft Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition operating system.
Introduction
A server cluster is a group of independent servers that work together and run Microsoft Cluster Service (MSCS). Server clusters provide high availability, failback, scalability, and manageability for resources and applications.

Server clusters allow clients to keep accessing applications and resources during failures and planned outages. If a server in the cluster becomes unavailable because of a failure or maintenance, resources and applications move to another available cluster node.

For Windows Clustering solutions, the term "high availability" is more appropriate than "fault tolerant." Fault-tolerant technology offers a higher level of resilience and recovery. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to recover almost instantly from any single hardware or software fault. These solutions cost significantly more than a Windows Clustering solution, because organizations must pay for redundant hardware that sits idle waiting for a fault.

Server clusters do not guarantee non-stop operation, but they do provide sufficient availability for most mission-critical applications. The cluster service can monitor applications and resources and automatically recognize and recover from many failure conditions. This provides flexibility in managing the workload within a cluster, and it improves the overall availability of the system.
The advantages of cluster services include:
- High availability: Ownership of resources such as disk drives and Internet Protocol (IP) addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
- Failback: The cluster service automatically reassigns the workload in a cluster when a failed server comes back online to its predetermined preferred owner. This feature is configurable, but it is disabled by default.
- Manageability: You can use the Cluster Administrator tool (CluAdmin.exe) to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications to other servers within the cluster, manually balance server workloads, and free servers for planned maintenance. You can also monitor the status of the cluster, all nodes, and resources from anywhere on the network.
- Scalability: Cluster services can grow to meet rising demands. When the overall load for a cluster-aware application exceeds the capabilities of the cluster, additional nodes can be added.
This document provides step-by-step instructions for creating and configuring server clusters on servers that are connected to a shared cluster storage device and that run Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition. It is intended to guide you through the steps of installing a typical cluster; it does not explain how to install clustered applications. Windows Clustering solutions that implement non-traditional quorum models, such as Majority Node Set (MNS) clusters and geographically dispersed clusters, are not discussed. For additional information about server cluster concepts as well as installation and configuration procedures, see the Windows Server 2003 Online Help.
Server cluster configuration list:
This checklist helps you prepare for installation. Step-by-step instructions begin after the checklist.
Software requirements
- All computers in the cluster run Microsoft Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition.
- A name resolution method, such as Domain Name System (DNS), DNS dynamic update protocol, Windows Internet Name Service (WINS), or HOSTS.
- An existing domain model. All nodes must be members of the same domain.
- A domain-level account that is a member of the local Administrators group on each node. A dedicated account is recommended.

Hardware requirements
- Cluster hardware must be on the cluster service Hardware Compatibility List (HCL). To find the latest cluster service HCL, go to the Windows Hardware Compatibility List at http://www.microsoft.com/whdc/hcl/default.mspx and search for cluster. The complete solution must be certified on the HCL, not just the individual components. For additional information, see the following article in the Microsoft Knowledge Base: 309395 The Microsoft support policy for server clusters and the Hardware Compatibility List. Note: If you are installing the cluster on a storage area network (SAN) and plan to have multiple devices and clusters share the SAN with the cluster, the solution must also be on the Cluster/Multi-Cluster Device Hardware Compatibility List. For additional information, see the following article in the Microsoft Knowledge Base: 304415 Support for multiple clusters attached to the same SAN device.
- Two mass storage device controllers: Small Computer System Interface (SCSI) or Fibre Channel.
- A local system disk on which to install the operating system (OS).
- A separate Peripheral Component Interconnect (PCI) storage controller for the shared disks.
- Two PCI network adapters in each node of the cluster.
- Storage cables to attach the shared storage device to all computers. Refer to the manufacturer's documentation for storage device configuration information. See the appendix later in this article for the specific configuration needed when using SCSI or Fibre Channel.
- All hardware should be identical for all nodes: slot for slot, card for card, with the same BIOS and firmware revisions, and so on. This makes configuration easier and eliminates compatibility problems.
Network requirements
- A unique NetBIOS name.
- Static IP addresses for all network interfaces on each node. Note: Server clusters do not support addresses assigned by a Dynamic Host Configuration Protocol (DHCP) server.
- Access to a domain controller. If the cluster service cannot authenticate the user account used to start the service, the cluster may fail. It is recommended that a domain controller be on the same local area network (LAN) as the cluster to ensure its availability.
- At least two network adapters in each node: one for the connection to the public network used by clients, and the other for the node-to-node private cluster network. A dedicated private network adapter is required for HCL certification.
- All nodes must have two physically independent LANs or virtual LANs, one for public and one for private communication.
- If you are using fault-tolerant network cards or network adapter teaming, verify that you are using the most recent firmware and drivers. Check with your network adapter manufacturer for cluster compatibility.
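As a quick check of the static-addressing requirement, you can review the TCP/IP configuration of both adapters on each node from a command prompt; in the output, DHCP Enabled should read No for every cluster interface. This is only a convenience check; the requirements above remain authoritative.

    rem Display the full TCP/IP configuration for every adapter on this node.
    ipconfig /all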
Shared disk requirements:
- An HCL-approved external disk storage unit connected to all computers in the cluster. This unit will be used as the clustered shared disk. Some type of hardware redundant array of independent disks (RAID) is recommended.
- All shared disks, including the quorum disk, must be physically attached to a shared bus. Note: This requirement does not apply to Majority Node Set (MNS) clusters, which are not covered in this guide.
- The shared disks must be on a controller different from the one used by the system disk.
- Creating multiple hardware-level logical drives in the RAID configuration is recommended, rather than using a single logical disk that is then divided into multiple operating system-level partitions. This differs from the configuration commonly used for stand-alone servers, but it lets you have multiple disk resources in the cluster and perform Active/Active configurations and manual load balancing across the nodes.
- A dedicated disk with a minimum size of 50 megabytes (MB) to use as the quorum device. A partition of at least 500 MB is recommended for optimal NTFS file system performance.
- Verify that the disks attached to the shared bus can be seen from all nodes. This can be checked at the host adapter setup level. Refer to the manufacturer's documentation for adapter-specific instructions.
- SCSI devices must be assigned unique SCSI identification numbers and properly terminated according to the manufacturer's instructions. See the appendix of this article for information about installing and terminating SCSI devices.
- All shared disks must be configured as basic disks. For additional information, see the following article in the Microsoft Knowledge Base: 237853 Dynamic disk configuration unavailable for server cluster disk resources.
- Software fault tolerance is not natively supported on cluster shared disks.
- On systems running the 64-bit versions of Windows Server 2003, all shared disks must be configured as master boot record (MBR) disks.
- All partitions on the cluster disks must be formatted as NTFS.
- Hardware fault-tolerant RAID configurations are recommended for all disks.
- A minimum of two logical shared drives is recommended.
Cluster installation
Installation overview
During installation, some nodes will be shut down while others are being installed. This helps ensure that data on disks attached to the shared bus is not lost or corrupted; that can happen when multiple nodes simultaneously try to write to a disk that is not yet protected by the cluster software. Unlike Microsoft Windows 2000, the default mounting behavior in Windows Server 2003 has changed: the system no longer automatically mounts, or assigns drive letters to, logical disks that are not on the same bus as the boot partition. This helps guarantee that the server does not mount drives that may belong to another server in a complex SAN environment. Although the server does not mount the drives automatically, it is still recommended that you follow the steps below to make sure the shared disks are not corrupted.
Use the table below to determine which nodes and storage devices must be turned on during each step.

The steps in this guide are for a two-node cluster. If you are installing a cluster with more than two nodes, the Node 2 column lists the required state of all other nodes.
Step: Set up networks. Node 1: On. Node 2: On. Storage: Off. Comments: Verify that all storage devices on the shared bus are turned off. Turn on all nodes.
Step: Set up shared disks. Node 1: On. Node 2: Off. Storage: On. Comments: Shut down all nodes. Turn on the shared storage, and then turn on the first node.
Step: Verify disk configuration. Node 1: Off. Node 2: On. Storage: On. Comments: Shut down the first node, and turn on the second node. Repeat for the third and fourth nodes if necessary.
Step: Configure the first node. Node 1: On. Node 2: Off. Storage: On. Comments: Shut down all nodes; turn on the first node.
Step: Configure the second node. Node 1: On. Node 2: On. Storage: On. Comments: Turn on the second node after the first node has been configured. Repeat for the third and fourth nodes if necessary.
Step: Post-installation. Node 1: On. Node 2: On. Storage: On. Comments: All nodes should be on at this point.
Several steps must be completed before you configure the cluster service software. These steps are:

1. Install the Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition operating system on each node.
2. Set up the networks.
3. Set up the disks.

Perform these steps on every cluster node before you start installing the cluster service on the first node.

To configure the cluster service, you must be logged on with an account that has administrative rights on all nodes. Each node must be a member of the same domain. If you choose to make one of the nodes a domain controller, have another domain controller available on the same subnet to eliminate a single point of failure and to allow maintenance of that node.
Install Windows Server 2003 operating system
Refer to the documentation you received with the Windows Server 2003 operating system package to install the operating system on each node in the cluster.

Before configuring the cluster service, you must be logged on locally with a domain account that is a member of the local Administrators group.

Please note: The installation will fail if you attempt to join a node to a cluster while using a blank password for the local administrator account. For security reasons, Windows Server 2003 prohibits blank administrator passwords.
Set up the networks
Each cluster node requires at least two network adapters connected to two or more independent networks, to avoid a single point of failure. One adapter connects to the public network; the other connects to a private network consisting only of the cluster nodes. Servers with multiple network adapters are referred to as "multihomed." Because multihomed servers can be problematic, it is essential that you follow the network configuration recommendations in this document as closely as possible.

Microsoft requires that each node have two Hardware Compatibility List (HCL)-certified network adapters supported by Microsoft Product Support Services. Configure one of the network adapters on your production network, and configure the other on a separate network, on its own subnet, for dedicated cluster communication.
Communication between server cluster nodes is critical for smooth cluster operation. Therefore, you must configure the networks used for cluster communication optimally and follow all requirements on the Hardware Compatibility List.

The private network adapter is used for node-to-node communication, cluster status information, and cluster management. Each node's public network adapter connects the cluster to the public network where clients reside, and it should be configured as a backup route for internal cluster communication. To do this, configure the roles of these networks in the cluster service as either "Internal cluster communications only" or "All communications".

In addition, each cluster network must fail independently of all other cluster networks, so that the failure of one network does not affect the others. This means that two cluster networks must not have a component in common that could cause both to fail simultaneously. For example, using a multiport network adapter to attach a node to two cluster networks would not satisfy this requirement in most cases, because the ports are not independent.

To eliminate possible communication issues, remove all unnecessary network traffic from the network adapter that is set to "Internal cluster communications only" (this adapter is also known as the heartbeat or private network adapter).

To verify that all network connections are correct, the private network adapters must be on a network that is logically separate from the public adapters. This can be accomplished by using a crossover cable in a two-node configuration, or by using a dedicated dumb hub in a configuration of more than two nodes. Do not use a switch, smart hub, or any other routing device for the heartbeat network.

Note: Cluster heartbeats cannot be forwarded through a routing device because their Time to Live (TTL) is set to 1. The public network adapters can connect only to the public network. If you have a virtual LAN, the latency between the nodes must be less than 500 milliseconds (ms). Also, in Windows Server 2003 the heartbeat in the server cluster has changed to multicast, so you may need to provide a MADCAP server to assign multicast addresses. For additional information, see the following article in the Microsoft Knowledge Base: 307962 Multicast support enabled for the cluster heartbeat. The following is a simple diagram of a four-node cluster configuration.
Figure 1: Connections of the four-node cluster.
General network configuration:
Please note: This guide assumes that you are running the default Start menu. If you use the classic Start menu, the steps may differ slightly. Also, which network adapter is dedicated to which network depends on your cabling. In this white paper, the first network adapter (Local Area Connection) is connected to the public network, and the second network adapter (Local Area Connection 2) is connected to the private cluster network. Your network may be different.
Rename the local area network icons

It is recommended that you rename the network connections so that they are clearly identified. For example, you might change the name of Local Area Connection 2 to Private. Renaming helps you identify each network and assign its role correctly.

1. Click Start, point to Control Panel, right-click Network Connections, and then click Open.
2. Right-click the Local Area Connection 2 icon.
3. Click Rename.
4. Type Private in the text box, and then press ENTER.
5. Repeat steps 1 through 3, and then rename the public network adapter as Public.

Figure 2: Renamed icons in the Network Connections window.

6. The renamed icons should look like those in Figure 2 above. Close the Network Connections window. The new connection names appear in Cluster Administrator and are automatically replicated to all other cluster nodes as they come online.
Configure the binding order of the networks on all nodes

1. Click Start, point to Control Panel, right-click Network Connections, and then click Open.
2. On the Advanced menu, click Advanced Settings.
3. In the Connections box, make sure that your bindings are in the following order, and then click OK:
   1. Public
   2. Private
   3. Remote Access Connections
Configure a private network adapter
1. Right-click the network connection for your heartbeat adapter, and then click Properties.
2. On the General tab, make sure that only the Internet Protocol (TCP/IP) check box is selected, as shown in Figure 3 below. Click to clear the check boxes for all other clients, services, and protocols.

Figure 3: In the Private Properties dialog box, only the Internet Protocol (TCP/IP) check box is selected.
3. If you have a network adapter that can transmit at multiple speeds, you should specify the speed and duplex mode manually. Do not use auto-select settings for speed, because some adapters may drop packets while negotiating the speed. The speed of the network adapters must be hard set (manually set) to the same value on all nodes, according to the card manufacturer's specification. If you are not sure of the speed supported by your card and connecting devices, Microsoft recommends that you set all devices on that path to 10 megabits per second (Mbps) and half duplex, as shown in Figure 4 below. The amount of information traveling across the heartbeat network is small, but latency is critical for communication, and this configuration provides enough bandwidth for reliable communication. All network adapters in the cluster that are attached to the same network must be configured identically to use the same duplex mode, link speed, flow control, and so on. Contact your adapter's manufacturer for the correct speed and duplex settings for your network adapters.
Figure 4: Setting speed and duplex for all adapters.
Please note: Microsoft recommends that you do not use any type of fault-tolerant adapter or teaming for the heartbeat. If you require redundancy for your heartbeat connection, use multiple network adapters set to "Internal cluster communications only" and define their network priority in the cluster configuration. Early multiport network adapters sometimes had problems with this technology, so verify that your firmware and drivers are at the most current revision if you use it. Contact your network adapter manufacturer for information about compatibility with server clusters. For additional information, see the following article in the Microsoft Knowledge Base: 254101 Network adapter teaming and server clustering.

4. Click Internet Protocol (TCP/IP), and then click Properties.

5. On the General tab, verify that you have selected a static IP address that is not on the same subnet or network as any of the public network adapters. It is recommended that you place the private network adapters in one of the following private network address ranges: 10.0.0.0 to 10.255.255.255 (Class A), 172.16.0.0 to 172.31.255.255 (Class B), or 192.168.0.0 to 192.168.255.255 (Class C). For example, the following addressing could be used for the private adapters: set the address of node 1 to 10.10.10.10 and the address of node 2 to 10.10.10.11, with a subnet mask of 255.0.0.0, as shown below. Make sure that this IP address scheme is completely different from the scheme used on the public network. Please note: For additional information about valid IP addressing for a private network, see the following article in the Microsoft Knowledge Base: 142863 Valid IP addressing for a private network.
Figure 5: Example of an IP address suitable for use in a dedicated adapter.
6. Verify that no values are defined in the Default Gateway box or under Use the following DNS server addresses.
7. Click the Advanced button.
8. On the DNS tab, verify that no values are defined. Make sure that the Register this connection's addresses in DNS and Use this connection's DNS suffix in DNS registration check boxes are cleared.
9. On the WINS tab, verify that no values are defined. Click Disable NetBIOS over TCP/IP, as shown in Figure 6 below.

Figure 6: Verify that no values are defined on the WINS tab.

10. When you close the dialog box, you may receive the following prompt: "This connection has an empty primary WINS address. Do you want to continue?" If you receive this prompt, click Yes.
11. Complete steps 1 through 10 on all other nodes in the cluster, using a different static IP address on each node.
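If you prefer the command line, the following netsh commands are a minimal sketch of the same private-adapter settings, assuming the connection was renamed Private and using the example addresses from this guide; the exact parameter forms can be confirmed with netsh interface ip set address /? before use.

    rem Assign the example static address for node 1's heartbeat adapter (no gateway).
    netsh interface ip set address name="Private" source=static addr=10.10.10.10 mask=255.0.0.0
    rem Configure DNS statically with no server addresses and no DNS registration.
    netsh interface ip set dns name="Private" source=static addr=none register=none

On node 2, repeat the commands with addr=10.10.10.11.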
Configure public network adapter
Note: If an IP address is obtained through DHCP and the DHCP server becomes inaccessible, the cluster nodes may become unreachable. For this reason, all interfaces on a server cluster require static IP addresses. Keep in mind that the cluster service recognizes only one network interface per subnet. If you need assistance with TCP/IP addressing in Windows Server 2003, see the Online Help.
Verify connectivity and name resolution

To verify that the private and public networks are communicating properly, ping all IP addresses from each node. You should be able to ping all IP addresses, locally and on the remote nodes.

To verify name resolution, ping each node by using the node's machine name instead of its IP address. It should return only the IP address of the public network. You may also want to try a reverse lookup of the IP addresses by using the ping -a command.
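For example, the following commands, run from node 1, exercise both checks. The address 10.10.10.11 is the example private address of node 2 from this guide, while node2 and 192.168.0.12 are hypothetical public names and addresses; substitute your own.

    rem Test private (heartbeat) connectivity to node 2.
    ping 10.10.10.11
    rem Test public connectivity to node 2 (substitute your public address).
    ping 192.168.0.12
    rem Test name resolution; only the public IP address should be returned.
    ping node2
    rem Reverse lookup of the public address to confirm it resolves to the expected name.
    ping -a 192.168.0.12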
Verify domain membership

All nodes in the cluster must be members of the same domain and must be able to access a domain controller and a DNS server. They can be configured as member servers or as domain controllers. For availability, you should have at least one domain controller on the same network segment as the cluster, and another domain controller should be available to eliminate a single point of failure. In this guide, all nodes are configured as member servers. In some cases, the nodes may be deployed in an environment where no Microsoft Windows NT 4.0 or Windows Server 2003 domain controllers already exist. In that case, at least one of the cluster nodes must be configured as a domain controller. However, in a two-node server cluster, if one node is a domain controller, the other node must also be a domain controller. In a four-node cluster implementation, it is not necessary to configure all four nodes as domain controllers; however, when following a best-practice model of having at least one backup domain controller, at least one of the remaining three nodes should also be configured as a domain controller. Before you configure the cluster service, you must promote a node to domain controller by using the DCPROMO tool.

If there is no other DNS server available that supports dynamic updates and/or SRV records, the DNS-related requirements of Windows Server 2003 mean that each domain controller node must also be a DNS server (an Active Directory-integrated zone is recommended).
You should consider the following issues when deploying cluster nodes as domain controllers:

- If one cluster node in a two-node cluster is a domain controller, the other node must also be a domain controller.
- There is overhead associated with running a domain controller. An idle domain controller typically uses 130 to 140 MB of RAM, which includes the memory needed to run the cluster service. Replication also adds network traffic, because these domain controllers must replicate with the other domain controllers in the domain.
- If the cluster nodes are the only domain controllers, each must also be a DNS server. They should point to each other for primary DNS resolution and to themselves for secondary resolution.
- The first domain controller in the forest/domain takes on all Operations Master Roles. You can redistribute these roles to any node; however, if a node fails, the Operations Master Roles assumed by that node become unavailable. Therefore, it is recommended that you not run Operations Master Roles on any cluster node. These include the Schema Master, Domain Naming Master, Relative ID Master, PDC Emulator, and Infrastructure Master. These functions cannot be clustered for high availability and failover.
- Because of resource constraints, clustering other applications such as Microsoft SQL Server or Microsoft Exchange Server in a scenario where the nodes are also domain controllers may not achieve optimal performance. This configuration must be thoroughly tested in a lab environment before deployment.

Because of these considerations, it is recommended that all nodes be member servers, as they are in this guide.
Set up a cluster user account

The cluster service requires a domain user account, under which the cluster service can run, that is a member of the local Administrators group on each node. Because setup requires a user name and password, this user account must be created before you configure the cluster service. This user account should be dedicated to running the cluster service and should not belong to an individual.

Note: The cluster service account does not need to be a member of the Domain Admins group. For security reasons, it is recommended that you do not grant domain administrator rights to the cluster service account.

The cluster service account requires the following rights to function properly on all nodes in the cluster. The Cluster Configuration Wizard grants the following rights automatically:

- Act as part of the operating system
- Adjust memory quotas for a process
- Back up files and directories
- Increase scheduling priority
- Log on as a service
- Restore files and directories
For additional information, see the following article in Microsoft Knowledge Base:
269229 How to manually recreate a cluster service account
To set up a cluster user account:

1. Click Start, point to All Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
2. If the domain is not already expanded, click the plus sign (+) to expand it.
3. Right-click Users, point to New, and then click User.
4. Type the cluster account name, as shown in Figure 7 below, and then click Next.

Figure 7: Typing the cluster account name.

5. Set the password settings to User cannot change password and Password never expires. Click Next, and then click Finish to create the user. Note: If your administrative security policy does not allow passwords that never expire, you must renew the password on each node and update the cluster service configuration before the password expires. For additional information, see the following article in the Microsoft Knowledge Base: 305813 How to change the cluster service account password.
6. In the left pane of the Active Directory Users and Computers snap-in, right-click Cluster, and then click Properties on the shortcut menu.
7. Click Add members to a group.
8. Click Administrators, and then click OK. This gives the new user account administrative privileges on this computer.
9. Close the Active Directory Users and Computers snap-in.
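As an alternative sketch of steps 3 through 8, the account can also be created and granted local administrative rights from a command prompt. MYDOMAIN is a placeholder for your domain name, and the password options from step 5 still need to be set in Active Directory Users and Computers.

    rem Create the domain account named Cluster (run against a domain controller); * prompts for the password.
    net user Cluster * /add /domain
    rem Add the account to the local Administrators group (run this on every cluster node).
    net localgroup Administrators MYDOMAIN\Cluster /add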
Set up shared disks

Warning: To avoid corrupting the cluster disks, make sure that Windows Server 2003 and the cluster service are installed and running on at least one node before you start an operating system on another node. It is critical that no more than one node be turned on until the cluster service is configured.

To proceed, turn off all nodes. Then turn on the shared storage devices and turn on node 1.
About the quorum disk

The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster and maintain consistency. The following quorum disk procedures are recommended:

- Create a logical drive with a minimum size of 50 MB to be used as the quorum disk; for optimal NTFS performance, 500 MB is recommended.
- Dedicate a separate disk as the quorum resource.

Important: Failure of the quorum disk can cause the entire cluster to fail; therefore, it is strongly recommended that you use a volume on a hardware RAID array. Do not use the quorum disk for anything other than cluster management.

The quorum resource plays a crucial role in the operation of the cluster. In every cluster, a single resource is designated as the quorum resource. A quorum resource can be any Physical Disk resource with the following functions:

- It replicates the cluster registry to all other nodes in the server cluster. By default, the cluster registry is stored at the following location on each node: %SystemRoot%\Cluster\Clusdb. The cluster registry is then replicated to the MSCS\Chkxxx.tmp file on the quorum drive. These files are exact copies of each other. The MSCS\Quolog.log file is a transaction log that keeps a record of all changes made to the checkpoint file, so nodes that were offline can have those changes appended when they rejoin the cluster.
- If communication is lost between cluster nodes, a challenge/response protocol is initiated to prevent a "split-brain" scenario. In this situation, the owner of the quorum disk resource becomes the only owner of the cluster and all its resources, and it makes the resources available to clients. When the node that owns the quorum disk stops functioning correctly, the surviving nodes arbitrate to take over ownership of the device. For additional information, see the following article in the Microsoft Knowledge Base: 309186 How the Cluster service takes ownership of a disk on the shared bus.

During the cluster service installation, you must provide a drive letter for the quorum disk. The letter Q is commonly used as a standard, and Q is used in this example as well.
To configure the shared disks:
1. Make sure that only one node is turned on.
2. Right-click My Computer, click Manage, and then expand Storage.
3. Double-click Disk Management.
4. If you connect a new drive, the Write Signature and Upgrade Disk Wizard starts automatically. If this happens, click Next to step through the wizard. Note: The wizard automatically sets the disk to dynamic. To reset the disk to basic, right-click Disk n (where n specifies the disk that you are working with), and then click Revert to Basic Disk.
5. Right-click unallocated disk space.
6. Click New Partition.
7. The New Partition Wizard begins. Click Next.
8. Select the Primary partition partition type. Click Next.
9. The default is set to the maximum partition size. Click Next. (Multiple logical disks are recommended over multiple partitions on one disk.)
10. Use the drop-down box to change the drive letter. Use a drive letter that is farther down the alphabet than the default letters. Commonly, the drive letter Q is used for the quorum disk, and then R, S, and so on for the data disks. For additional information, see the following article in the Microsoft Knowledge Base: 318534 Best practices for drive-letter assignments on a server cluster. Note: If you are planning to use volume mount points, do not assign a drive letter to the disk. For additional information, see the following article in the Microsoft Knowledge Base: 280297 How to configure volume mount points on a clustered server.
11. Format the partition using NTFS. In the Volume label box, type a name for the disk, for example, Drive Q, as shown in Figure 8 below. Assigning volume labels to shared disks is critical, because it can dramatically reduce troubleshooting time if a disk ever needs to be recovered.

Figure 8: Assigning a volume label to the shared disk.
If you are installing a 64-bit version of Windows Server 2003, verify that all disks are formatted as MBR. GUID partition table (GPT) disks are not supported as cluster disks. For additional information, see the following article in the Microsoft Knowledge Base:
284134 Server cluster does not support GPT shared disks
Verify that all shared disks are formatted as NTFS and are designated as MBR basic disks.
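For reference, the Disk Management steps above can also be scripted with diskpart and format. The sketch below assumes the quorum LUN appears as disk 1 on this node and should become drive Q:; adjust the disk number and drive letter to match your own configuration before using anything like this.

    rem Contents of a diskpart script file named quorum.txt:
    rem   select disk 1
    rem   create partition primary
    rem   assign letter=Q
    diskpart /s quorum.txt
    rem Format the new partition as NTFS with the volume label used in this guide.
    rem (format prompts for confirmation before it writes to the disk.)
    format Q: /FS:NTFS /V:DriveQ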
Verify disk access and functionality

1. Start Windows Explorer.
2. Right-click one of the shared disks (for example, drive Q:), click New, and then click Text Document.
3. Verify that you can write to the disk and that the file was created.
4. Select the file, and then press the DEL key to delete it from the cluster disk.
5. Repeat steps 1 through 4 for all clustered disks to verify that they can be accessed correctly from the first node.
6. Turn off the first node, turn on the second node, and repeat steps 1 through 4 to verify disk access and functionality. Assign drive letters that match the volume labels. Repeat these steps for any additional nodes. Verify that all nodes can read and write to the disks, turn off all nodes except the first one, and continue with this white paper.
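Steps 2 through 4 can also be performed from a command prompt, which is convenient when repeating the test on several disks (drive Q: is used as the example):

    rem Write a small test file to the shared disk, read it back, and then delete it.
    echo cluster disk access test> Q:\test.txt
    type Q:\test.txt
    del Q:\test.txt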
Configure cluster services
During the first phase of installation, all initial cluster configuration information must be supplied. This is accomplished with the Cluster Configuration Wizard.

The "create a new cluster" and "add nodes" paths through the wizard differ, but they share several of the same pages, namely the credentials, analysis, and re-analysis and start service pages. The Welcome, Select Computer, and Cluster Service Account pages differ slightly between the two paths. The next two parts of this section step through the wizard pages for each of these configuration paths. A third part then describes the analysis and re-analysis and start service pages and explains the meaning of the information they provide.

Note: During cluster service configuration on node 1, you must turn off all other nodes. All shared storage devices should be turned on.
Configure the first node
1. Click Start, point to All Programs, point to Administrative Tools, and then click Cluster Administrator.
2. When prompted by the Open Connection to Cluster dialog box, click Create new cluster in the Action drop-down list, as shown in Figure 9 below.
Figure 9: The Action drop-down list.
3. Verify that you have the prerequisites necessary to configure the cluster, as shown in Figure 10 below. Click Next.

Figure 10: The list of prerequisites is part of the Welcome to the New Server Cluster Wizard page.
4. Type a unique NetBIOS name for the cluster (up to 15 characters), and then click Next. In the example shown in Figure 11 below, the cluster is named MyCluster. Adherence to DNS naming rules is recommended. For additional information, see the following articles in the Microsoft Knowledge Base: 163409 NetBIOS suffixes (16th character of the NetBIOS name) and 254680 DNS namespace planning.

Figure 11: Adherence to DNS naming rules is recommended when naming the cluster.
5. If you are logged on locally with an account that does not have local administrative privileges, the wizard prompts you to specify an account. This is not the account under which the cluster service will run. Note: If you are logged on with the appropriate credentials, the prompt mentioned in step 5 and shown in Figure 12 may not appear.

Figure 12: The New Server Cluster Wizard prompts you to specify an account.
6. Because it is possible to configure clusters remotely, you must verify or type the name of the server that will be used as the first node of the cluster, as shown in Figure 13 below. Click Next.

Figure 13: Selecting the name of the computer that will be the first node in the cluster.

Note: The Setup wizard verifies that all nodes can see the shared disks. In a complex storage area network, the target identifiers (TIDs) for the disks may sometimes differ, and Setup may incorrectly detect that the disk configuration is not valid for setup. To work around this issue, you can click the Advanced button, and then click Advanced (minimum) configuration. For additional information, see the following article in the Microsoft Knowledge Base: 331801 The cluster setup wizard may not work when you add nodes.
7. Figure 14 below illustrates that the Setup process analyzes the node for possible hardware or software problems that could cause issues with the installation. Review any warnings or error messages. You can also click the Details button to get detailed information about each one.

Figure 14: The Setup process analyzes the node for possible hardware or software problems.

8. Type the unique cluster IP address (in this example, 172.26.204.10), and then click Next. As shown in Figure 15 below, the New Server Cluster Wizard automatically associates the cluster IP address with one of the public networks by using the subnet mask to select the correct network. The cluster IP address should be used for administrative purposes only, not for client connections.

Figure 15: The New Server Cluster Wizard automatically associates the cluster IP address with one of the public networks.
9. Type the user name and password of the cluster service account that was created during pre-installation. (In the example shown in Figure 16, the user name is Cluster.) Select the domain name in the Domain drop-down list, and then click Next. At this point, the Cluster Configuration Wizard validates the user account and password.

Figure 16: The wizard prompts you for the account that was created during pre-installation.

10. Review the Summary page, shown in Figure 17 below, to verify that all the information that is about to be used to create the cluster is correct. If necessary, you can use the Quorum button to change the quorum disk designation from the default automatically selected disk to a different one. The summary information displayed on this screen can be used to reconfigure the cluster in the event of a disaster recovery situation. It is recommended that you save and print a hard copy to keep with the change management log at the server. Note: The Quorum button can also be used to specify a Majority Node Set (MNS) quorum model. This is one of the major configuration differences when you create an MNS cluster.

Figure 17: The Proposed Cluster Configuration page.
11. Review any warnings or errors encountered during cluster creation. To do this, click the plus signs to see more information, and then click Next. Warnings and errors appear on the Creating the Cluster page, as shown in Figure 18.

Figure 18: Warnings and errors on the Creating the Cluster page.

12. Click Finish to complete the installation. Figure 19 below illustrates the final step.

Figure 19: The final step in setting up a new server cluster.

Note: To view a detailed summary, click the View Log button, or view the text file stored at %SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.log.
Verify the cluster installation
Use Cluster Administrator (CluAdmin.exe) to validate the cluster service installation on node 1.

To verify the cluster installation:

1. Click Start, point to All Programs, point to Administrative Tools, and then click Cluster Administrator.
2. Verify that all resources came online successfully, as shown in Figure 20 below.

Figure 20: Cluster Administrator confirms that all resources came online successfully.

Note: As a general rule, do not put anything in the cluster group, do not take anything out of the cluster group, and do not use anything in the cluster group for anything other than cluster administration.
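The cluster.exe command-line tool, which is installed along with the cluster service, can perform the same spot checks. The commands below are a sketch that uses the example cluster name MyCluster; the exact option names can be confirmed with cluster /?.

    rem List the state of the cluster nodes, groups, and resources.
    cluster MyCluster node /status
    cluster MyCluster group /status
    cluster MyCluster resource /status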
Configure the second node
Installing the cluster service on the other nodes requires less time than installing it on the first node. Setup configures the cluster service network settings on the second node based on the configuration of the first node. You can also add multiple nodes to the cluster at the same time, and remotely.

Note: For this section, leave node 1 and all shared disks turned on, and then turn on all other nodes. At this point the cluster service controls access to the shared disks, which eliminates any chance of corrupting the volumes.

1. Open Cluster Administrator on node 1.
2. Click File, click New, and then click Node.
3. The Add Cluster Computers Wizard starts. Click Next.
4. If you are not logged on with appropriate credentials, you will be asked to specify a domain account that has administrative rights over all the nodes in the cluster.
5. Enter the machine name for the node you want to add to the cluster, and then click Add. Repeat this step, as shown in Figure 21 below, for each node you want to add. When you have added all the nodes, click Next.

Figure 21: Adding nodes to the cluster.

6. The Setup wizard performs an analysis of all the nodes to verify that they are configured properly.
7. Type the password for the account used to start the cluster service.
8. Review the summary information that is displayed for accuracy. The summary information will be used to configure the other nodes when they join the cluster.
9. Review any warnings or errors encountered during cluster creation, and then click Next.
10. Click Finish to complete the installation.
Post-installation configuration
Heartbeat configuration
Now that the networks on each node are configured and the cluster service has been configured, you need to configure the network roles to define their functionality within the cluster. Here is a list of the network configuration options in Cluster Administrator:

- Enable this network for cluster use: If this check box is selected, the cluster service uses this network. This check box is selected by default for all networks.
- Client access only (public network): Select this option if you want the cluster service to use this network adapter only for external communication with clients. No node-to-node communication takes place on this network adapter.
- Internal cluster communications only (private network): Select this option if you want the cluster service to use this network only for node-to-node communication.
- All communications (mixed network): Select this option if you want the cluster service to use the network adapter for node-to-node communication and for communication with external clients. This option is selected by default for all networks.

This white paper assumes that only two networks are in use. It explains how to configure these networks as one mixed network and one private network; this is the most common configuration. If you have resources available, two dedicated redundant networks for internal-only cluster communication are recommended.
Configure heartbeat
1. Start "Cluster Manager". 2. In the left pane, click the Cluster Configuration, click Network, right-click dedicated, and then click Properties. 3. Click on for internal cluster communication (dedicated network), as shown in Figure 22 below.
Figure 22: Configure heartbeat using the Cluster Manager.
4. Click OK. 5. Right-click public and click Properties (as shown in Figure 23 below). 6. Click Selected as Cluster App to Enable this Network check box. 7. Click All Communications (Mixing Networks), then click OK.
Figure 23: The Public Properties dialog.
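The same role assignments can be made with cluster.exe. This is a hedged sketch that assumes the network names Private and Public from this guide and the usual Role property values (1 = internal cluster communications only, 2 = client access only, 3 = all communications); confirm the values for your build with cluster network /? before relying on them.

    rem Set the heartbeat network to internal cluster communications only.
    cluster MyCluster network "Private" /prop Role=1
    rem Set the public network to all communications (mixed network).
    cluster MyCluster network "Public" /prop Role=3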
Heartbeat adapter prioritization
After configuring how the cluster service will use the network adapters, the next step is to prioritize the order in which they are used for internal cluster communication. This applies only if two or more networks were configured for node-to-node communication. The priority arrows on the right side of the screen specify the order in which the cluster service will use the networks for communication between nodes. The cluster service always attempts to use the first network adapter listed for remote procedure call (RPC) communication between the nodes. It uses the next network adapter in the list only if it cannot communicate by using the first one.
1. Start "Cluster Manager". 2. In the left pane, right-click the cluster name (located in the upper left corner), then click Properties. 3. Click the Network Priority tab as shown in Figure 24 below.
Figure 24: Network Priority tab in Cluster Manager.
4. Confirm that the dedicated network is listed at the top. Use the upright or down button to change the priority. 5. Click OK.
Configure cluster disks
Start Cluster Administrator, right-click any disk that you want to remove from the cluster, and then click Delete.

Note: By default, all disks that are not on the same bus as the system disk have Physical Disk resources created for them and are clustered. Therefore, if a node has multiple buses, some disks may be listed that will not be used as shared storage, for example, an internal SCSI drive. Such disks should be removed from the cluster configuration. If you plan to implement volume mount points for some disks, you may want to delete the current disk resources for those disks, delete the drive letters, and then create a new disk resource without a drive letter assignment.
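Deleting an unneeded disk resource can also be done with cluster.exe. In this hedged example, "Disk E:" is a hypothetical Physical Disk resource that was created for an internal drive and is not part of the shared storage.

    rem Take the resource offline and then remove it from the cluster configuration.
    cluster MyCluster resource "Disk E:" /offline
    cluster MyCluster resource "Disk E:" /delete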
Quorum disk configuration

The Cluster Configuration Wizard automatically selects the drive to be used as the quorum device. It will use the smallest partition that is larger than 50 MB. You may want to change the automatically selected disk to a dedicated quorum disk.
To configure the quorum disk:

1. Start Cluster Administrator (CluAdmin.exe).
2. Right-click the cluster name in the upper-left corner, and then click Properties.
3. Click the Quorum tab.
4. In the Quorum resource list box, select a different disk resource. In Figure 25 below, Disk Q is selected in the Quorum resource list box.

Figure 25: The Quorum resource list box.

5. If the disk has more than one partition, click the partition where you want the cluster-specific data to be kept, and then click OK.
For additional information, see the following article in the Microsoft Knowledge Base:
280353 How to change quorum disk designation
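The quorum designation can also be viewed or changed with cluster.exe, as sketched below. "Disk Q:" is the example Physical Disk resource name used in this guide, and the /quorumresource option name should be confirmed with cluster /? for your build.

    rem Display the current quorum resource, path, and log size.
    cluster MyCluster /quorumresource
    rem Designate the resource named "Disk Q:" as the quorum resource.
    cluster MyCluster /quorumresource:"Disk Q:"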
Creating a boot delay

In a situation where all the cluster nodes are booted at the same time and try to attach to the quorum resource, the cluster service may fail to start. For example, this can happen when power is restored to all nodes at the same time after a power failure. To avoid such a situation, you can increase or decrease the Time to display list of operating systems setting. To find this setting, click Start, right-click My Computer, and then click Properties. Click the Advanced tab, and then click Settings under Startup and Recovery.
Test installation
After the "Install" program is over, there are several ways to verify the cluster service installation. These include:
Cluster Manager: If only the installation of node 1 is completed, start "Cluster Manager" and then try to connect to the cluster. If the second node 2 has been installed, "Cluster Manager" is launched on any of the nodes, and then confirm that the second cluster is displayed on the list. "Services" applet: Use the service management unit to confirm that the cluster service has been displayed on the list and has started. Event Log: Use the Event Viewer to check the ClusSvc entry in the system log. You will see an entry for confirming that the cluster service has successfully formed or joined a cluster. Cluster Service Registry NE: Confirm the Cluster Service Setup has written the correct item to the registry. You can find a lot of registry settings in HKEY_LOCAL_MACHINE / Cluster, click Run, and type the name of the Virtual Server. Confirm that you can connect and see resources.
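Several of these checks can also be run from a command prompt; this is a convenience sketch rather than a replacement for the checks above.

    rem Confirm that the cluster service (ClusSvc) is installed and running.
    sc query clussvc
    net start | find /i "cluster"
    rem Confirm that the cluster configuration hive exists in the registry.
    reg query HKLM\Cluster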
Test failover
To verify that resources will fail over:

1. Click Start, point to Programs, point to Administrative Tools, and then click Cluster Administrator, as shown in Figure 26 below.

Figure 26: The Cluster Administrator window.

2. Right-click the Disk Group 1 group, and then click Move Group. The group and all its resources will be moved to another node. After a short period of time, the disks in the group will be brought online on the second node. Watch this process in the window. Quit Cluster Administrator.
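The same move can be initiated from the command line with cluster.exe; "Disk Group 1" and node2 are the example group and node names used here, so substitute your own.

    rem Move the group to node 2 and then confirm that its resources are online there.
    cluster MyCluster group "Disk Group 1" /moveto:node2
    cluster MyCluster group "Disk Group 1" /status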
Congratulations! You have completed configuring the cluster service on all nodes. The server cluster is fully operational. You are now ready to install cluster resources such as file shares, printer spoolers, cluster-aware services like Distributed Transaction Coordinator, DHCP, and WINS, or cluster-aware applications such as Exchange Server or SQL Server.
Appendix

Advanced testing

Now that you have configured your cluster and verified basic functionality and failover, you may want to conduct a series of failure scenario tests that demonstrate expected results and ensure that the cluster responds correctly when a failure occurs. This level of testing is not necessary for every implementation, but it can provide insight into cluster administration if you are just getting started with clustering technology and are unfamiliar with how the cluster responds, or if you are implementing a new hardware platform in your environment. The expected results listed are for a clean cluster configuration with default settings; no custom failover logic is taken into account. This is not a complete list of all possible tests, and successful completion of these tests does not constitute certification or readiness for production. It is simply a sample list of some tests that can be performed. For additional information, see the following article in the Microsoft Knowledge Base:
197047 Failover/failback policies on Microsoft Cluster Server
Test: Start "Cluster Manager", right click on a resource, and then click Start Fault. This resource will enter the fault state, then restart and return to the online status on the same node.
Expected results: Resources will return online status on the same node
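Initiate Failure can also be triggered from the command line, which makes it easy to repeat the test; "Disk Q:" is used here only as an example resource name.

    rem Fail the resource once and then check that it was restarted on the same node.
    cluster MyCluster resource "Disk Q:" /fail
    cluster MyCluster resource "Disk Q:" /status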
Test: On the same resource, perform the Initiate Failure test above three more times. On the fourth failure, the resource group fails over to another node in the cluster.

Expected result: The group fails over to another node in the cluster.

Test: Move all resources to one node. Start Computer Management, and then click Services under Services and Applications. Stop the cluster service. Start Cluster Administrator on the other node, and then verify that all resources fail over and come online correctly on that node.
Expected result: All resources fail over to another node in the cluster.

Test: Move all resources to one node. On that node, click Start, and then click Shut Down. This turns off that node. Start Cluster Administrator on the other node, and then verify that all resources fail over and come online correctly on that node.

Expected result: All resources fail over to another node in the cluster.

Test: Move all resources to one node, and then press the power button on the front of that server to turn it off. If you have an ACPI (Advanced Configuration and Power Interface)-compliant server, the server will perform an emergency shutdown and turn off. Start Cluster Administrator on the other node, and then verify that all resources fail over and come online correctly on that node. For additional information about emergency shutdown, see the following articles in the Microsoft Knowledge Base:
325343 How to: Perform an emergency shutdown in Windows Server 2003

297150 The power button on an ACPI computer may force an emergency shutdown
Expected result: All resources fail over to another node in the cluster.

Warning: Performing the emergency shutdown test may cause data corruption and data loss. Do not conduct this test on a production server.

Test: Move all resources to one node, and then unplug the power cord from that server to simulate a hard failure. Start Cluster Administrator on the other node, and then verify that all resources fail over and come online correctly on that node.

Expected result: All resources fail over to another node in the cluster.

Warning: Performing the hard failure test may cause data corruption and data loss. This is an extreme test. Make sure that you have a backup of all critical data, and then conduct the test at your own risk. Do not conduct this test on a production server.

Test: Move all resources to one node, and then unplug the public network cable from that node. The IP Address resource will fail, and the group will fail over to another node in the cluster. For additional information, see the following article in the Microsoft Knowledge Base:
286342 Network failure detection and recovery in a Windows Server 2003 cluster
Expected result: The group fails over to another node in the cluster.

Test: Unplug the network cable for the private heartbeat network. The heartbeat traffic will fail over to the public network, and no other failover should occur. If failover of any group occurs, see the "Configure a private network adapter" section earlier in this document.

Expected result: There is no failover of groups or resources.
SCSI drive installation
This appendix provides general instructions for installing SCSI drives. If the instructions from your SCSI hard disk vendor conflict with the instructions here, always follow the instructions supplied by the vendor.

The SCSI bus listed in the hardware requirements must be configured before you install the cluster service. Configuration applies to:

- The SCSI devices.
- The SCSI controllers and hard disks, so that they work properly on a shared SCSI bus.
- Properly terminating the bus. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.

In addition to the information below, refer to the documentation from your SCSI device manufacturer, or to the SCSI specifications, which can be ordered from the American National Standards Institute (ANSI). The ANSI Web site includes a catalog that you can search for the SCSI specifications.
Configure SCSI devices
Each device on the shared SCSI bus must have a unique SCSI identification number (ID). Because most SCSI controllers default to SCSI ID 7, configuring the shared SCSI bus includes changing the SCSI ID on one controller to a different number, such as SCSI ID 6. If there is more than one disk on the shared SCSI bus, each disk must also have a unique SCSI ID.

Terminating the shared SCSI bus

There are several methods for terminating the shared SCSI bus, including:

- SCSI controllers: SCSI controllers have internal soft termination that can be used to terminate the bus; however, this method is not recommended with the cluster service. If a node is turned off with this configuration, the SCSI bus is terminated incorrectly and will not operate properly.
- Storage enclosures: Storage enclosures also have internal termination, which can be used to terminate the SCSI bus if the enclosure is at the end of the SCSI bus. This internal termination should otherwise be turned off.
- Y cables: Y cables can be connected to a device if the device is at the end of the SCSI bus. An external active terminator can then be attached to one branch of the Y cable to terminate the SCSI bus. This method of termination requires either disabling or removing any internal terminators that the device may have.
Figure 27 illustrates the correct physical connection of the SCSI cluster.
Figure 27: Schematic diagram of SCSI cluster hardware configuration.
Note: Any device that is not at the end of the shared bus must have its internal termination disabled. Y cables and active terminator connectors are the recommended termination methods, because they provide termination even when a node is not connected or turned on.
Storage area network considerations
There are two supported methods of Fibre Channel-based storage in a Windows Server 2003 server cluster: arbitrated loops and switched fabric.
Important: When evaluating both types of Fibre Channel implementation, read the vendor's documentation and make sure that you understand the specific features and restrictions of each.
Although the term Fibre Channel implies the use of fiber-optic technology, copper coaxial cable is also allowed for interconnects.
Arbitrated loops (FC-AL)
A Fibre Channel arbitrated loop (FC-AL) is a set of nodes and devices connected into a single loop. FC-AL provides a cost-effective way to connect up to 126 devices into a single network. As with SCSI, a hub-based FC-AL server cluster supports a maximum of two nodes. Figure 28 is an illustration of FC-AL.
Figure 28: FC-AL connection
FC-AL is a solution for two nodes and a small number of devices in relatively static configurations. All devices on the loop share the media, and any packet traveling from one device to another must pass through all intervening devices.
If a two-node server cluster meets your availability needs, an FC-AL deployment has several advantages:
Cost is relatively low.
Loops can be expanded to add storage (although nodes cannot be added).
Loops are easy for Fibre Channel vendors to develop.
The disadvantage is that loops can be difficult to deploy in an organization. Because every device on the loop shares the media, overall bandwidth in the cluster is lowered. Some organizations might also be hampered by the 126-device limit.
Switched fabric (FC-SW)
For any cluster larger than two nodes, a switched Fibre Channel fabric (FC-SW) is the only supported storage technology. In FC-SW, devices are connected in a many-to-many topology by using Fibre Channel switches (illustrated in Figure 29).
Figure 29: FC-SW connection
When a node or device communicates with another node or device in FC-SW, the source and the target set up a point-to-point connection (similar to a virtual circuit) and communicate directly with each other. The fabric itself routes data from the source to the target. In FC-SW, the media is not shared: any device can communicate with any other device, and communication occurs at full bus speed. This is a fully scalable enterprise solution and, as such, is highly recommended for deployment with server clusters. FC-SW is the primary technology employed in SANs. Other advantages of FC-SW include ease of deployment, the ability to support millions of devices, and switches that provide fault isolation and rerouting. Also, because the media is not shared (unlike FC-AL), communication is faster. Be aware, however, that FC-SW is difficult for vendors to develop, that switches can be expensive, and that vendors must account for interoperability issues between components from different vendors or manufacturers.
Using SANs with server clusters
For any large-scale cluster deployment, it is recommended that you use a storage area network (SAN) for data storage. Smaller SCSI and stand-alone Fibre Channel storage devices work with server clusters, but SANs provide superior fault tolerance.
A SAN is a set of interconnected devices (such as disks and tapes) and servers that are connected to a common communication and data transfer infrastructure (FC-SW, in the case of Windows Server 2003 clusters). A SAN allows multiple servers to access a pool of storage in which any server can potentially access any storage unit.
The information in this section provides an overview to help you use SAN technology with your Windows Server 2003 cluster. For additional information about deploying server clusters on SANs, see the "Windows Clustering: Storage Area Networks" link on the Web Resources page at http://www.microsoft.com/windows/reskits/webresources/.
Note: Vendors that provide SAN fabric components and software management tools have a wide range of tools for setting up, configuring, monitoring, and managing the SAN fabric. Contact your SAN vendor for details about your particular SAN solution.
SCSI reset
Earlier versions of Windows server clusters presumed that all communication to the shared disks should be treated as if it were on an isolated SCSI bus. This behavior is somewhat disruptive, and it does not take advantage of the advanced features of Fibre Channel to improve arbitration performance and reduce disruption.
A key improvement in Windows Server 2003 is that when the Cluster service issues a command to break a reservation, the Storport driver can perform a targeted or device reset for disks on a Fibre Channel topology. In Windows 2000 server clusters, a SCSI reset was issued for the entire bus. This caused every device on the bus to be disconnected, and much of the time was spent resetting devices that might not need to be reset, such as disks that the challenging node might already own.
Resets in Windows Server 2003 are performed in the following hierarchical order:
1. Target logical unit number (LUN)
2. Target SCSI ID
3. Entire bus-wide SCSI reset
Note: Targeted resets require support in the host bus adapter (HBA) driver. The driver must be written for Storport, not SCSIport. Drivers written for SCSIport get the same challenge/defense behavior as in Windows 2000. Contact the HBA manufacturer to determine whether it supports Storport.
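As a rough illustration of this hierarchy (not the actual Cluster service or Storport code), the following Python sketch models the idea of trying the narrowest reset first and widening the scope only when a narrower reset is unavailable. The functions and the "capabilities" flag are hypothetical stand-ins.

```python
# Conceptual model of the Windows Server 2003 reset hierarchy (sketch only).
# The reset functions are hypothetical stand-ins that simulate whether a
# targeted reset is possible; they are not real driver interfaces.

def reset_lun(device, capabilities):
    """Narrowest scope: reset only the target logical unit number (LUN)."""
    return "storport" in capabilities      # targeted resets need a Storport driver

def reset_target(device, capabilities):
    """Wider scope: reset every LUN behind one SCSI target ID."""
    return "storport" in capabilities

def reset_bus(device):
    """Widest scope: bus-wide SCSI reset, which disrupts every device on the bus."""
    return True

def break_reservation(device, capabilities):
    """Escalate from LUN reset to target reset to a full bus-wide reset."""
    if reset_lun(device, capabilities):
        return "LUN reset"
    if reset_target(device, capabilities):
        return "target reset"
    reset_bus(device)                      # Windows 2000-style behavior
    return "bus-wide reset"

print(break_reservation("Disk Q:", capabilities={"storport"}))   # -> LUN reset
print(break_reservation("Disk Q:", capabilities={"scsiport"}))   # -> bus-wide reset
```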
SCSI commands
The Cluster service uses the following SCSI commands:
SCSI RESERVE: This command is issued by a host bus adapter or controller to maintain ownership of a SCSI device. A reserved device refuses all commands from all other host bus adapters except the one that originally reserved it (the initiator). If a bus-wide SCSI reset command is issued, the reservation is lost.
SCSI RELEASE: This command is issued by the owning host bus adapter; it frees the SCSI device so that another host bus adapter can reserve it.
SCSI RESET: This command breaks the reservation on a target device. It is sometimes referred to generically as a "bus reset." The same control codes are also used for Fibre Channel. These commands are described in the following articles in the Microsoft Knowledge Base:
309186 How the Cluster service takes ownership of a disk on the shared bus
317162 Supported Fibre Channel configurations
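These commands are the building blocks of the challenge/defense arbitration described in the articles above: the owning node periodically renews its reservation, and a challenger breaks the reservation, waits, and then tries to reserve the disk itself. The Python sketch below models only that idea; the Disk class, the 3-second defense interval, and the challenge wait time are illustrative assumptions rather than documented Cluster service values.

```python
# Conceptual sketch of how SCSI RESERVE / RELEASE / RESET support disk
# arbitration between two cluster nodes (challenge/defense). Timings and
# the Disk class are illustrative assumptions, not Cluster service code.
import time

DEFENSE_INTERVAL = 3                        # assumed owner refresh interval, seconds
CHALLENGE_WAIT = 2 * DEFENSE_INTERVAL + 1   # challenger outwaits two refreshes

class Disk:
    """Minimal model of a shared disk that honors SCSI reservations."""
    def __init__(self):
        self.reserved_by = None

    def reserve(self, node):
        """SCSI RESERVE: succeeds only if the disk is free or already ours."""
        if self.reserved_by in (None, node):
            self.reserved_by = node
            return True
        return False

    def release(self, node):
        """SCSI RELEASE: the owner frees the disk for another initiator."""
        if self.reserved_by == node:
            self.reserved_by = None

    def reset(self):
        """SCSI RESET: break any existing reservation."""
        self.reserved_by = None

def challenge(disk, challenger, defender_alive):
    """A challenger breaks the reservation, waits, then tries to reserve."""
    disk.reset()
    time.sleep(CHALLENGE_WAIT)          # give a live defender time to re-reserve
    if defender_alive:
        disk.reserve("defender")        # a healthy owner renews within the interval
    return disk.reserve(challenger)     # succeeds only if the defender is gone

disk = Disk()
disk.reserve("defender")
print(challenge(disk, "challenger", defender_alive=True))    # False: owner keeps the disk
print(challenge(disk, "challenger", defender_alive=False))   # True: challenger takes ownership
```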
The following sections outline some SAN concepts that directly affect server cluster deployment.
HBA
Host bus adapters (HBAs) are the interface cards that connect a cluster node to the SAN, similar to the way a network adapter connects a server to a typical Ethernet network. HBAs, however, are more difficult to configure (unless the HBAs are preconfigured by the SAN vendor). The HBAs in all nodes should be identical and should have the same driver and firmware version.
Zoning and LUN masking
Zoning and LUN masking are fundamental to SAN deployments, particularly as they relate to a Windows Server 2003 cluster deployment.
Zoning
Many devices and nodes can be attached to a SAN. With data stored in a single cloud, or storage entity, it is important to control which hosts can access specific devices. Zoning allows administrators to partition the devices into logical volumes and thereby reserve the devices in a volume for the server cluster. This means that all interactions between cluster nodes and the devices in the logical storage volume are isolated within the boundaries of the zone; other, noncluster members of the SAN are not affected by cluster activity.
Figure 30 is a logical depiction of two SAN zones (Zone A and Zone B), each of which contains a storage controller (S1 and S2, respectively).
Figure 30: Zoning
In this implementation, Node A and Node B can access data from storage controller S1, but Node C cannot. Node C can access data from storage controller S2.
Zoning must be implemented at the hardware level (by the controller or switch), not through software. The primary reason is that zoning is also a security mechanism for a SAN-based cluster, because unauthorized servers cannot access devices inside the zone (access control is enforced by the switches in the fabric, so a host adapter cannot access a device for which it has not been configured). With software zoning, the cluster would be left unsecured if the software component failed.
In addition to providing cluster security, zoning also limits traffic flow within a given SAN environment. Traffic between ports is routed only to segments of the fabric that are in the same zone.
LUN masking
A LUN is a logical disk defined within the SAN. A server cluster sees a LUN and treats it as a physical disk. LUN masking, performed at the controller level, allows you to define relationships between LUNs and cluster nodes. Storage controllers usually provide a means of creating LUN-level access controls that allow a given LUN to be accessed by one or more hosts. By providing this access control at the storage controller, the controller itself can enforce access policies for the devices.
LUN masking provides more granular security than zoning, because LUNs provide a means of zoning at the port level. For example, many SAN switches allow overlapping zones, which enable a storage controller to reside in multiple zones. Multiple clusters in multiple zones can then share the data on those controllers. Figure 31 illustrates such a scenario.
Figure 31: Storage controller in multiple zones
The LUNs used by Cluster A can be masked, or hidden, from Cluster B so that only authorized users can access data on the shared storage controller.
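To make the two access-control layers concrete, the following Python sketch models a fabric whose switches enforce zone membership and a storage controller that enforces a LUN masking table. The zone names, node names, and LUN numbers are invented for illustration; real zoning and masking are configured in the switch and controller management tools, not in script.

```python
# Illustrative model of zoning (enforced by the fabric) and LUN masking
# (enforced by the storage controller). All names and numbers are hypothetical.

# Zone membership, as configured on the switches: hosts and controllers per zone.
zones = {
    "ZoneA": {"NodeA", "NodeB", "S1"},
    "ZoneB": {"NodeC", "S2"},
}

# LUN masking table on controller S1: which hosts may see which LUNs.
lun_masks_s1 = {
    0: {"NodeA", "NodeB"},    # quorum disk for Cluster A
    1: {"NodeA", "NodeB"},    # data disk for Cluster A
    2: {"NodeC"},             # hidden from Cluster A even if zones overlapped
}

def same_zone(host, controller):
    """Fabric-level check: traffic is routed only within a shared zone."""
    return any(host in members and controller in members for members in zones.values())

def can_access_lun(host, controller, lun, masks):
    """A host sees a LUN only if zoning and the controller's mask both allow it."""
    return same_zone(host, controller) and host in masks.get(lun, set())

print(can_access_lun("NodeA", "S1", 0, lun_masks_s1))   # True: zoned and unmasked
print(can_access_lun("NodeC", "S1", 0, lun_masks_s1))   # False: different zone
print(can_access_lun("NodeA", "S1", 2, lun_masks_s1))   # False: masked at the controller
```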
Requirements for deploying SANs with Windows Server 2003 server clusters
The following list highlights some of the deployment requirements that you must observe when deploying server clusters together with SAN storage solutions. For more complete information about using server clusters with SANs, see the white paper available through the "Windows Clustering: Storage Area Networks" link on the Web Resources page at http://www.microsoft.com/windows/reskits/webresources/.
Each cluster on a SAN must be deployed in its own zone. The mechanism the cluster uses to protect access to the disks can have an adverse effect on other clusters in the same zone. By using zoning to isolate the cluster traffic from other cluster or noncluster traffic, there is no chance of interference.
All HBAs in a single cluster must be the same type and have the same firmware version. Many storage and switch vendors require that all HBAs in the same zone (and, in some cases, in the same fabric) share these characteristics.
All storage device drivers and HBA device drivers in a cluster must have the same software version.
Never allow multiple nodes to access the same storage devices unless they are in the same cluster.
Never put tape devices in the same zone as cluster disk storage devices. A tape device could misinterpret a bus reset and rewind at inappropriate times, such as during a large backup.
Guidelines for deploying SANs with Windows Server 2003 server clusters
In addition to the SAN requirements discussed above, the following practices are highly recommended for server cluster deployments:
In a highly available storage fabric, you need to deploy clustered servers with multiple HBAs. In these cases, always load the multipath driver software. If the I/O subsystem sees two HBAs, it assumes that they are different buses and enumerates all of the devices as though they were different devices on each bus, while the host is actually seeing multiple paths to the same disks. Failure to load the multipath driver will disable the second device, because the operating system sees what it thinks are two independent disks with the same signature.
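The symptom described above (the same disk enumerated once per HBA) shows up as a duplicated disk signature. The Python sketch below merely illustrates that grouping; the enumeration data is invented, and real multipath handling belongs to the vendor's multipath driver, not to a script.

```python
# Illustrative only: why two HBAs without a multipath driver look like
# duplicate disks. The enumerated data below is invented for the example.
from collections import defaultdict

enumerated_disks = [
    {"bus": 0, "target": 1, "signature": 0x1A2B3C4D},   # seen through HBA 1
    {"bus": 1, "target": 1, "signature": 0x1A2B3C4D},   # same disk through HBA 2
    {"bus": 0, "target": 2, "signature": 0x99887766},
]

by_signature = defaultdict(list)
for disk in enumerated_disks:
    by_signature[disk["signature"]].append((disk["bus"], disk["target"]))

for signature, paths in by_signature.items():
    if len(paths) > 1:
        # Without multipath software, the OS treats these as two separate
        # disks with the same signature and disables the second one.
        print(f"Signature {signature:#010x} appears on multiple paths: {paths}")
```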
Do not expose a hardware snapshot of a clustered disk back to a node in the same cluster. Hardware snapshots must go to a server outside the server cluster. Many controllers provide snapshots at the controller level that can be exposed to the cluster as a completely separate LUN. Cluster performance is degraded when multiple devices have the same signature. If the snapshot is exposed back to the node while the original disk is online, the I/O subsystem attempts to rewrite the signature. If the snapshot is instead exposed to another node in the cluster, the Cluster service does not recognize it as a different disk, and the result could be data corruption. Although this is not specifically a SAN issue, the controllers that provide this functionality are typically deployed in a SAN environment.
For additional information, see the following articles in the Microsoft Knowledge Base:
301647 Cluster service improvements for storage area networks
304415 Support for multiple clusters attached to the same SAN device
280743 Windows clustering and geographically separate sites
Related Links
For more information, see the following resources:
Microsoft Cluster Service installation resources, at http://support.microsoft.com/?id=259267
Quorum drive configuration information, at http://support.microsoft.com/?id=280345
Recommended private "heartbeat" configuration on a cluster server, at http://support.microsoft.com/?id=258750
Network failure detection and recovery in a server cluster, at http://support.microsoft.com/?id=242600
How to change the quorum disk designation, at http://support.microsoft.com/?id=280353
Microsoft Windows Clustering: Storage Area Networks, at http://www.microsoft.com/windows.netserver/techinfo/overview/san.mspx
Geographically dispersed clusters in Windows Server 2003, at http://www.microsoft.com/windows.netserver/techinfo/overview/clustergeo.mspx
Server cluster network requirements and best practices, at http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/clustering/clstntbp.mspx
For the latest information about Windows Server 2003, see the Windows Server 2003 Web site at http://www.microsoft.com/windowsserver2003/default.mspx