Sun StorageTek Data Snapshot Software

The Array supports Fibre Channel-only connections to the data host. The information in this section applies only to data hosts with Fibre Channel connections. HBAs must be ordered separately, from Sun or from their respective manufacturers. You must install data host multipathing software on each data host that communicates with the Sun StorageTek Array. Download operating system updates from the operating system vendor's web site. Sun Cluster versions SC 3.

The Array supports SAS-only connections to data hosts. The information in this section applies only to data hosts with SAS connections. See Table 7.

To install these patches, complete the following steps to upgrade the firmware on the Sun StorageTek Arrays:

1. Download the CAM patch from the Sun download center.
2. Stop all I/O from all of the connected data hosts.
3. Unmount any file systems associated with the volumes on the array. Use your operating system's administration commands to unmount the volumes (see the sketch after this procedure).
4. Download or copy the patch to the software installation directory.
5. Go to the Storage System Summary page and select the arrays to be upgraded.
6. When the management software indicates that the firmware upgrade is complete, restart each array controller one at a time: turn on the power switch on the controller.
7. When the controllers are back online, use the management software to verify that the volumes are assigned to the active controller. The Volume Details page allows you to select the owning controller.
8. Correct all zoning to match these new WWPNs.
9. Remount any file systems associated with the volumes on the array.

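For example, on Solaris the unmount and remount in the procedure above might look like the following sketch; the mount point and device path are hypothetical, so substitute the values from your own configuration:

    # Before the upgrade, unmount the file system on the array volume.
    # Mount point is hypothetical.
    umount /mnt/array-vol

    # After the upgrade completes and the controllers are back online,
    # remount the file system. Device path is hypothetical.
    mount /dev/dsk/c2t0d0s6 /mnt/array-vol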
The following sections provide information about known issues and bugs filed against this product release. If a recommended workaround is available for a bug, it follows the bug description.

Bug - For Solaris, the volume path did not come back online after a controller failover. In this case, 32 volumes were created and mapped to the host. The volume path did not come back online after the controller returned to an optimal state.

When the alternate controller went offline, both paths were taken offline. Workaround - A Solaris patch is in development; this issue does not occur with the pending release of the Solaris 10 multipath driver. Also, do not use the array as a boot device. Bug - Removing a SAS controller results in outdated information on the Controller Details page in the management software. The status correctly reports the controller as removed.

Replacing the controller corrects the state. Bug - For SAS, using the CLI to create a new volume on an array with high data input and output results in a timeout and an error code of 4. This section describes general issues related to the Sun StorageTek Series Array hardware and firmware. Doing so will result in serious problems in array operations, including having to delete the array data. This is working as designed. In a direct-connect environment, rebooting the connected data host will cause an FC link-down alarm.

As soon as the link is back up, the alarm should clear and the LED should turn off. In a switch environment, this will not occur unless a cable is unplugged from the switch, the switch is rebooted, or the switch is having errors. Rebooting the host will not cause the link to go down, because the link from the controller SFP to the switch will remain up.

Bug - Returning a pulled cable to the wrong HBA port can cause a panic. The cause is known and a fix is being worked on. Workaround - If your system is running, plug the cable back into the port it was originally in. If you need to move the cable to a different port, do so when the system is not online.

Bug - A firmware upgrade can lock volumes for longer than the upgrade process indicates. The array can report that the upgrade completed and show an optimal state while the process still has the volumes locked. The upgrade completion timing in the management software will be evaluated.

Sun StorageTek Data Snapshot software conserves costly disk space with real-time protection of critical volumes. Sun StorageTek Volume Copy software quickly creates independent copies of production volumes.

The protection group of the replicated volume is avspg. The device group avsset is created by using Solaris Volume Manager software, but any type of device group supported by the Oracle Solaris Cluster Geographic Edition software can be used with fallback snapshots.
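As an illustration, a diskset such as avsset could be created as follows. This is a sketch assuming Solaris Volume Manager shared disksets; the node names and DID device are hypothetical:

    # Create the shared diskset avsset and add both cluster nodes.
    # Node names are hypothetical.
    metaset -s avsset -a -h phys-newyork-1 phys-newyork-2

    # Add a disk to the set. The DID device name is hypothetical.
    metaset -s avsset -a /dev/did/rdsk/d5

On Sun Cluster, a Solaris Volume Manager diskset created this way is registered automatically as a device group of the same name.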

Perform Steps 1 and 2 of the following procedure on one node of either cluster. Perform Step 3 on one node of both clusters. Perform Step 4 on one node of the cluster that is currently secondary for the device group.

Verify which cluster is the current primary and which is the current secondary for the device group containing the volume for which you are enabling a fallback snapshot.
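One way to perform this check, assuming the Geographic Edition CLI is available, is to run geoadm status on one node of each cluster and note the local role reported for the protection group avspg:

    # Run on one node of each cluster. The output reports the local
    # cluster's role (primary or secondary) for each protection group.
    geoadm status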

Identify the resource group used for the replication of the device group avsset. It will have a name of the form protectiongroupname-rep-rg and will contain a resource named devicegroupname-rep-rs, as described in Sun StorageTek Availability Suite Replication Resource Groups.

In this example the replication resource group is called avspg-rep-rg, and the replication resource is called avsset-rep-rs.
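To confirm these names, you can list the resources in the replication resource group, assuming the Oracle Solaris Cluster clresource CLI:

    # List the resources in the replication resource group.
    # The output should include avsset-rep-rs.
    clresource list -g avspg-rep-rg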

To enable the fallback snapshot, perform this step on one node of the cluster that is currently secondary for the device group. Attach the snapshot volume to the secondary replicated volume. In this command you again specify the master volume, the shadow volume, and the bitmap shadow volume, separated by spaces:
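A minimal sketch of such a command, assuming the Availability Suite point-in-time copy interface (iiadm) with an independent shadow; all three volume paths are hypothetical:

    # Enable an independent point-in-time snapshot set, specifying the
    # master, shadow, and bitmap volumes separated by spaces.
    # Volume paths are hypothetical.
    iiadm -e ind /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 /dev/md/avsset/rdsk/d102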

You can locate the specific entry for the fallback snapshot you want to disable by using the clresource show command on the devicegroupname-rep-rs resource. This fallback snapshot was enabled in the preceding example.

Perform Steps 3 and 4 on one node of both clusters. Perform Step 5 on one node of the cluster that is currently secondary for the device group. Verify which cluster is the current primary and which is the current secondary for the device group containing the volume for which you are disabling a fallback snapshot. Detach the snapshot volume from the replicated data volume.
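Hedged sketches of both operations, assuming the clresource CLI and that detaching maps to disabling the Availability Suite point-in-time set with iiadm; the shadow volume path is hypothetical:

    # Inspect the replication resource to locate the snapshot entry.
    clresource show -v avsset-rep-rs

    # Disable the point-in-time set, identified by its shadow volume.
    # The volume path is hypothetical.
    iiadm -d /dev/md/avsset/rdsk/d101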

For more information about this command, see the cldevicegroup 1CL man page. Note - You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Oracle Solaris Cluster software and the Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster.
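Assuming the file in question is /etc/vfstab, an entry might look like the following sketch; the device paths and mount point are hypothetical, and the sixth (mount at boot) field is set to no:

    /dev/md/avsset/dsk/d100  /dev/md/avsset/rdsk/d100  /global/avs  ufs  2  no  logging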

You must not mount data on the secondary cluster because data on the primary will not be replicated to the secondary cluster. Adding this resource ensures that the necessary file systems are remounted before the application is started.

This example configures a highly available cluster global file system for Solaris Volume Manager volumes. The example assumes that the resource group apprg1 already exists.
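A minimal sketch of such a configuration, assuming the HAStoragePlus resource type; the mount point and resource name are hypothetical:

    # Register the HAStoragePlus resource type (once per cluster).
    clresourcetype register SUNW.HAStoragePlus

    # Create an HAStoragePlus resource in apprg1 that manages the
    # file system. Mount point and resource name are hypothetical.
    clresource create -g apprg1 -t SUNW.HAStoragePlus \
        -p FilesystemMountPoints=/global/avs hasp-rs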

When a device group that is controlled by the Sun StorageTek Availability Suite software is added to a protection group, the Geographic Edition software creates a special replication resource for that device group in the replication resource group. By monitoring these replication resource groups, the Geographic Edition software monitors the overall status of replication. One replication resource group with one replication resource is created for each protection group. The replication resource in the replication resource group monitors the replication status of the device group on the local cluster, which is reported by the Sun StorageTek Availability Suite remote mirror software. Note - Do not directly update a replication resource group or its resources, and do not add them directly to a protection group.
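Viewing the replication resource is safe, because it does not modify it. For example, assuming the clresource CLI and the resource name from this example:

    # Display the current state of the replication resource without
    # modifying it.
    clresource status avsset-rep-rs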

During an outage, when a secondary replicated volume is unavailable, the Sun StorageTek Availability Suite software logs changes made to the primary volume. Once replication is restarted, the secondary volume is resynchronized with the primary volume.
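In a Geographic Edition configuration the framework manages resynchronization; purely as an illustration of the underlying remote mirror CLI, a hedged sketch of resuming an update resynchronization manually:

    # Resume update resynchronization for the configured remote mirror
    # sets; -n suppresses the confirmation prompt.
    sndradm -n -u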

A failure during the resynchronization might leave the secondary volume in an inconsistent state, which can result in file system corruption of that volume.


