1. When using RM6 with the maximum number of A3x00s allowed, you may start a
LUN creation on one A3x00, then select another A3x00 and start another LUN
creation, and so on. Depending on the number of A3x00s, you might see an
hourglass icon sit there for some time. When using RM6 to format LUNs on
multiple Sonomas, this is normal behavior. The RM6 window will come back
and be updated as it should be. This is not specific to Sun Cluster 2.1.
Also, if you try to kill the RM6 Configuration window that is formatting
LUN(s), a window will pop up telling you that you have formats in progress.
This is not specific to Sun Cluster 2.1.

2. When formatting a LUN, if the host gets an unexpected SCSI reset, the LUN
formatting will restart from the beginning. This is not specific to Sun
Cluster 2.1.


3. If you reboot the other node while balancing LUNs on an A3x00, the LUN
balancing between controllers may appear to hang momentarily in the GUI
before the balancing finishes.


4. When using an A3x00 with Sun Cluster 2.1 and its HA components, note that
VM 2.5 is installed and includes DMP, even though DMP is not supported with
SC 2.1. You must turn off DMP because its own dual-pathing algorithms can
cause issues for RDAC. To turn off DMP, follow the steps below. This is not
specific to Sun Cluster 2.1.

Note: Be sure to do these steps first:
   1. umount all file systems created on Volume Manager volumes
   2. stop the Volume Manager (use vxdctl stop)
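
For example (a sketch only; /vol01 is a hypothetical mount point for a file
system built on a Volume Manager volume):

	umount /vol01
	vxdctl stop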

1. Remove the "vxdmp" driver from the "/kernel/drv" directory (or rename it):

	rm /kernel/drv/vxdmp

2. Edit /etc/system and remove the line:

	forceload: drv/vxdmp

3. Remove the Volume Manager DMP files:

 	rm -rf /dev/vx/dmp /dev/vx/rdmp

4. Symbolically link /dev/vx/dmp to /dev/dsk:

	ln -s /dev/dsk /dev/vx/dmp

5. Symbolically link /dev/vx/rdmp to /dev/rdsk:

	ln -s /dev/rdsk /dev/vx/rdmp


6. Reboot the host.
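
After the reboot, a quick way to confirm that DMP is out of the picture (a
sketch, assuming the steps above were followed) is to verify that the vxdmp
driver and forceload entry are gone and that the dmp directories are now
symbolic links:

	ls /kernel/drv/vxdmp            (should report: No such file or directory)
	grep vxdmp /etc/system          (should return nothing)
	ls -l /dev/vx/dmp /dev/vx/rdmp  (should show links to /dev/dsk and /dev/rdsk)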


5. If you are mirroring between A3x00s (double or triple mirroring), you
should throttle down the RDAC retry count. This enables RDAC to retry on its
alternate path (depending on the number of I/Os queued up) and then report the
failure to sd, so the plex can be detached and I/O can continue on the other
A3x00. There is a small script (5 lines) to set this variable, and the variable
will be included in the next spin of RM6 as a parameter in the rmparams file.
See item 13 for an update.

6. If your host panics and reboots while you are parity checking a RAID 1, 3, or 5 LUN
filesystem, the only consequence is that the parity check is aborted. If you are running
a parity check and you lose all power to your A3x00, a window will pop up within RM6
telling you that the parity check has been terminated and giving a couple of possible
reasons (LUN degraded, etc.).

7. When you place a controller offline, the LUN(s) owned by that controller are
moved over to the other active controller. This can be verified with a Module Profile.
It does not disturb any Volume Manager configuration within your cluster. The only
issue is with LUN(s) that were moved by placing the controller offline and that are
mounted through /etc/vfstab: they will not be mounted on a reconfiguration reboot, and
you will have to make the needed changes to the /etc/vfstab file (see the example entry
below). If you do a regular reboot, I/O will still go over the old path; RDAC takes care
of this, and it is transparent to the user. This is not specific to Sun Cluster 2.1.
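
As a sketch, an /etc/vfstab entry for such a LUN looks like the line below.
The c1t5d0 device and /export/data mount point are hypothetical; the cXtYdZ
path must correspond to the controller that currently owns the LUN.

	/dev/dsk/c1t5d0s6  /dev/rdsk/c1t5d0s6  /export/data  ufs  2  yes  -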

8. If you are performing the operation "failing a drive", you MUST make sure that
doing so does not cause problems with your cluster configuration. This is not specific
to Sun Cluster 2.1.

9. Do not perform any of the following operations without going through the Recovery
Guru first: Manual Recovery of Drives, Controllers, and Logical Units. Doing a Manual
Recovery before running the Recovery Guru can cause problems with your A3x00. This is
documented in the RM6 documentation and is not specific to Sun Cluster 2.1.

10. When performing firmware upgrades of your HW RAID controllers, DO NOT reboot the
other node; the SCSI bus reset that happens because of the reboot of the other node can
cause problems with your firmware upgrade.

11. By default, Volume Manager does Hot Relocation onto the free space within your
A3x00/A1000/A5000 or any other disk subsystem under control of Volume Manager. As an
example of an extreme case, if you were triple mirroring A3x00s and you LOST an A3x00,
it would Hot Relocate the ENTIRE A3x00. The only ways to recover from this operation
are to restore from backup, or to go to the following URL:
http://spider.aus/utils/utils-vxvm.html, where there are two utilities you can use to
recover from a Hot Relocation operation: vxreconstruct and vxunrelocate. Or you can
keep Hot Relocation from using a disk by marking it reserved with the following command:

vxedit set reserve=on Volume_Manager_Disk

Note: Hot Relocation works on failed subdisks.
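
For example (a sketch; rootdg and disk01 are hypothetical disk group and disk
media names, substitute your own):

	vxedit -g rootdg set reserve=on disk01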

12. The only time Volume Manager's Hot Sparing kicks in is when you lose a whole
VM disk, and there is a tremendous amount of I/O activity during a Hot Spare operation.
If you need Hot Spare disk(s), it is recommended that you configure them within the
hardware RAID controllers; one reason is to cut down on the overhead between the host
and the RAID subsystem. It is strongly recommended that you make your Hot Spare disk(s)
at least as large as the largest disk in your RAID subsystem. As an example, if you had
a 4+1 RAID 5 using 9 GB drives, your Hot Spare disk should be at least 9 GB in size. If
you use a smaller disk, there will be no Hot Sparing for that LUN and you'll be running
in degraded mode.

Items 11 and 12 are not specific to Sun Cluster 2.1.


13. It is recommended that you install patch 106513-01 and then upgrade the
firmware on all your controllers to 2.5.2.14. Then install patch 106707-01,
which helps deal with RDAC retry counts on failed I/Os as well as controller
issues.
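
As a quick check (a sketch; the patch IDs are the ones named above), you can
confirm the patches are installed with showrev:

	showrev -p | egrep '106513|106707'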