Thursday, September 29, 2011

Linux LVM with Veritas DMP as multipathing

"dmp_native_support" parameter is required to turn on in order for the DMP to provide Linux LVM VG multipathing.

# vxdmpadm settune dmp_native_support=on
# vxdmpadm gettune  |grep support
dmp_native_support                       on              off
# lvmdiskscan |grep emc |grep LVM
  /dev/vx/dmp/emc_clariion0_1000   [       15.00 GB] LVM physical volume
  /dev/vx/dmp/emc0_05ff            [        1.01 GB] LVM physical volume
  /dev/vx/dmp/emc0_05fe            [        1.01 GB] LVM physical volume
  /dev/vx/dmp/emc0_05fd            [        1.01 GB] LVM physical volume

Wednesday, September 28, 2011

Linux Storage Foundation Oracle RAC diskgroup and volume failure

When all paths are disconnected from and then reconnected to an SF Oracle RAC cluster node, some strange behaviour can occur. The vmax_sfracdg diskgroup appeared to be stale, and some disks such as emc0_0604 showed two entries in the diskgroup. The diskgroup failed to deport and the volumes refused to stop, because the volumes were still open in the kernel, as explained on the Symantec support site. Rebooting the system is the only solution.


# vxdisk list -o alldgs |grep vmax_sfrac
emc0_0604    auto:cdsdisk    emcpoweral   vmax_sfracdg online shared
emc0_0604    auto:cdsdisk    -            (vmax_sfracdg) online shared
emc0_0605    auto:cdsdisk    emcpowerak   vmax_sfracdg online shared
emc0_0606    auto:cdsdisk    emcpoweraj   vmax_sfracdg online shared
emc0_0606    auto:cdsdisk    -            (vmax_sfracdg) online shared
emc0_0608    auto:cdsdisk    emcpowerah   vmax_sfracdg online shared

# vxdg deport vmax_sfracdg
VxVM vxdg ERROR V-5-1-584 Disk group vmax_sfracdg: Some volumes in the disk group are in use

# vxvol -g vmax_sfracdg stopall
VxVM vxvol ERROR V-5-1-1220 Volume sfracvol1 is currently open or mounted
VxVM vxvol ERROR V-5-1-1220 Volume sfracvol2 is currently open or mounted
VxVM vxvol ERROR V-5-1-1220 Volume sfracvol3 is currently open or mounted
VxVM vxvol ERROR V-5-1-1220 Volume sfracvol4 is currently open or mounted
VxVM vxvol ERROR V-5-1-1220 Volume sfracvol5 is currently open or mounted

 #vxprint -g vmax_sfracdg -m sfracvol1 |grep devopen
        devopen=on

VCS NOTICE V-16-20011-1021 CFSMount error in Storage Foundation Oracle RAC

 If CFSMount is unable to mount the resources in the service group, the previous mount may still be in a stale state from an earlier failure. The solution is to remove the stale mounts and bring the resource online again.


# df -k |grep error
df: `/vmax_sfracfs1': Input/output error
df: `/vmax_sfracfs3': Input/output error
df: `/vmax_sfracfs5': Input/output error


/var/VRTSvcs/log/CFSMount_A.log
sgelxha196:/var/VRTSvcs/log # less CFSMount_A.log
2011/09/28 16:40:30 VCS NOTICE V-16-20011-1021 CFSMount:vmax_sfrac3:monitor:File System Disabled: MountPoint: /vmax_sfracfs3
2011/09/28 16:40:30 VCS NOTICE V-16-20011-1021 CFSMount:vmax_sfrac5:monitor:File System Disabled: MountPoint: /vmax_sfracfs5
2011/09/28 16:40:30 VCS NOTICE V-16-20011-1021 CFSMount:vmax_sfrac1:monitor:File System Disabled: MountPoint: /vmax_sfracfs1

Solution:
# umount -f /vmax_sfracfs1
# umount -f /vmax_sfracfs3
# umount -f /vmax_sfracfs5

# hagrp -online vmax_sfrac -sys host1

Tuesday, September 27, 2011

Verify key registration for Veritas disks

Sometimes there are doubts about the SCSI3-PR compliance of disks, or the vxfentsthdw script fails during I/O fencing setup. Execute the commands below to verify the keys on all disks.

Commands to check the keys and verify SCSI-3 PR compliance of disks for VCS
vxdisk list |grep emc0 |awk '{print "/dev/vx/rdmp/"$1}' > /tmp/disks
vxdisk list |grep emc0 |awk '{print "/dev/vx/dmp/"$1}' > /tmp/disks - (try the dmp path only if rdmp fails)
vxfenadm -a -k 1 -f /tmp/disks - register key 1 on all the disks
vxfenadm -g all -f /tmp/disks - list the registered keys
vxfenadm -s all -f /tmp/disks - read and display the keys on the disks

Eg. Command to query registration keys in the coordinator diskgroup

host85-root>vxdisk -o alldgs list |grep vxfencoorddg |awk '{print "/dev/vx/rdmp/"$1}' > /tmp/disks
host85-root>vxfenadm -s all -f /tmp/disks
Device Name: /dev/vx/rdmp/pp_emc0_4
Total Number Of Keys: 3
key[0]:
[Numeric Format]: 86,70,68,67,55,65,48,48
[Character Format]: VFDC7A00
* [Node Format]: Cluster ID: 56442 Node ID: 0 Node Name: host85
key[1]:
[Numeric Format]: 86,70,68,67,55,65,48,49
[Character Format]: VFDC7A01
* [Node Format]: Cluster ID: 56442 Node ID: 1 Node Name: host83
key[2]:
[Numeric Format]: 86,70,68,67,55,65,48,50
[Character Format]: VFDC7A02
* [Node Format]: Cluster ID: 56442 Node ID: 2 Node Name: host81
Device Name: /dev/vx/rdmp/pp_emc0_6
Total Number Of Keys: 3
key[0]:
[Numeric Format]: 86,70,68,67,55,65,48,48
[Character Format]: VFDC7A00
* [Node Format]: Cluster ID: 56442 Node ID: 0 Node Name: host85
key[1]:
[Numeric Format]: 86,70,68,67,55,65,48,49
[Character Format]: VFDC7A01
* [Node Format]: Cluster ID: 56442 Node ID: 1 Node Name: host83
key[2]:
[Numeric Format]: 86,70,68,67,55,65,48,50
[Character Format]: VFDC7A02
* [Node Format]: Cluster ID: 56442 Node ID: 2 Node Name: host81
Device Name: /dev/vx/rdmp/pp_emc0_5
Total Number Of Keys: 3
key[0]:
[Numeric Format]: 86,70,68,67,55,65,48,48
[Character Format]: VFDC7A00
* [Node Format]: Cluster ID: 56442 Node ID: 0 Node Name: host85
key[1]:
[Numeric Format]: 86,70,68,67,55,65,48,49
[Character Format]: VFDC7A01
* [Node Format]: Cluster ID: 56442 Node ID: 1 Node Name: host83
key[2]:
[Numeric Format]: 86,70,68,67,55,65,48,50
[Character Format]: VFDC7A02
* [Node Format]: Cluster ID: 56442 Node ID: 2 Node Name: host81

Thursday, September 22, 2011

VCS ASM/Oracle agent for a single Oracle database instance with ASM

Steps to allow VCS Oracle Agent to manage Single Oracle Database Instance with ASM:


2. Oracle ASM and Database single instance setup

 This configuration consists of 3 VCS cluster nodes. The VCS ASM and Oracle agents manage the ASM and Oracle resources across the 3 nodes, with only one node hosting the single database instance at any point in time.

2.1 Select and install Grid Infrastructure software only, then execute the command below for a standalone server. (1st and subsequent nodes)

/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

2.2  Configure Oracle ASM with “asmca”. (1st node only)

2.3 Select and Install Oracle Database Software only  (1st and subsequent nodes)

2.4 Configure listener with “netca” (1st and subsequent nodes)

2.5 Configure Oracle DB with “dbca” (1st node only)

2.6 Create ASM and add ASM on remote host (subsequent nodes)


1st node (host85)
host85-oracle>export ORACLE_SID=+ASM
host85-oracle>export ORACLE_HOME=/u01/app/11.2.0/grid        
host85-oracle>asmcmd
ASMCMD> ls
REGISTRY.253.762436131
ASMCMD> spget +DATA/asm/ASMPARAMETERFILE/REGISTRY.253.762436131
+DATA/asm/asmparameterfile/registry.253.762436131
ASMCMD> spcopy +DATA/asm/ASMPARAMETERFILE/REGISTRY.253.762436131 /u01/app/11.2.0/grid/dbs/spfileASM.ora

2nd node (host83) and subsequent nodes (grid owner)
host85-oracle>crsctl stop resource ora.asm -f
host85-oracle>scp -p spfileASM.ora 10.10.10.83:/u01/app/11.2.0/grid/dbs
host83-oracle>srvctl add asm -l LISTENER -p /u01/app/11.2.0/grid/dbs/spfileASM.ora -d '/dev/rdisk/*'
host83-oracle>crsctl start resource ora.cssd
host83-oracle>crsctl start resource ora.asm
host83-oracle>export ORACLE_SID=+ASM
host83-oracle>export ORACLE_HOME=/u01/app/11.2.0/grid        
host83-oracle>sqlplus / as sysasm
SQL> alter diskgroup DATA mount;

host83-oracle>crsctl status resource -t  (the output shows the newly registered ASM diskgroup resource ora.DATA.dg online)
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       host83                                    
ora.LISTENER.lsnr
               ONLINE  ONLINE       host83                                    
ora.asm
               ONLINE  ONLINE       host83                 Started             
ora.ons
               OFFLINE OFFLINE      host83                                    
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       host83                                    
ora.diskmon
      1        ONLINE  ONLINE       host83                                    
ora.evmd
      1        ONLINE  ONLINE       host83        

2.7 Enable Oracle Cluster Synchronization Service daemon to start automatically (1st and subsequent nodes)

host85-oracle>srvctl disable asm
host85-oracle>crsctl modify resource ora.asm -attr ENABLED=0
host85-oracle>crsctl modify resource ora.cssd -attr AUTO_START=always
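
As a quick sanity check (illustrative only, not part of the documented steps), the new start behaviour can be read back from the resource profiles:

host85-oracle>crsctl status resource ora.cssd -p | grep AUTO_START
host85-oracle>crsctl status resource ora.asm -p | grep ENABLED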

2.8 Setting the MLOCK privilege for the dba group (1st and subsequent nodes)

host85-root>setprivgrp dba MLOCK
host85-root>echo "dba MLOCK" > /etc/privgroup
host85-root>/usr/bin/getprivgrp dba

2.9 Copy the $ORACLE_BASE/admin/SID (from 1st node to subsequent nodes)

host85-root>pwd
/u01/app/oracle/admin
host85-root>tar cvf - HPUX | ssh 10.10.10.83 "cd /u01/app/oracle/admin; tar xvf -"

2.10 Copy $ORACLE_HOME/dbs (from 1st node to subsequent nodes)

host85-root>pwd
/u01/app/oracle/product/11.2.0/dbhome_1
host85-root>tar cvf - dbs | ssh 10.10.10.81 "cd /u01/app/oracle/product/11.2.0/dbhome_1; tar xvf -"

3 VCS ASM and VCS Service Group configuration


This diagram shows all VCS service groups and their resource types as well as their interlink relationship.

3.1 Set up the ASM instance service group and add the ASMInst resource type to the group.

hagrp -add  asminstgrp
hagrp -modify asminstgrp SystemList  host85 0 host83 1 host81 2
hagrp -modify asminstgrp AutoStartList host85 host83 host81
hagrp -modify asminstgrp Parallel 1

hares -add  ASM_asminst  ASMInst  asminstgrp
hares -modify ASM_asminst Critical 1
hares -modify ASM_asminst MonitorOption  0
hares -modify ASM_asminst Sid  +ASM
hares -modify ASM_asminst Owner  oracle
hares -modify ASM_asminst Home  /u01/app/11.2.0/grid
hares -modify ASM_asminst DBAUser  
hares -modify ASM_asminst DBAPword  
hares -modify ASM_asminst Pfile  
hares -modify ASM_asminst StartUpOpt  
hares -modify ASM_asminst ShutDownOpt  
hares -modify ASM_asminst EnvFile  
hares -modify ASM_asminst Encoding  
hares -modify ASM_asminst Enabled 1

3.2 Set up the oraasm_grp service group for the single Oracle DB and ASM instance. The ASMDG, IP, NIC and Netlsnr resource types will be added to this service group. The Oracle resource type is also required if an Oracle DB is configured.


hagrp -add  oraasm_grp
hagrp -modify oraasm_grp SystemList  host85 0 host83 1 host81 2
hagrp -modify oraasm_grp AutoStartList host85
hagrp -modify oraasm_grp Parallel 0

hares -add  ASM_asmdg  ASMDG  oraasm_grp
hares -modify ASM_asmdg Critical 1
hares -modify ASM_asmdg Sid  +ASM
hares -modify ASM_asmdg Owner  oracle
hares -modify ASM_asmdg Home  /u01/app/11.2.0/grid
hares -modify ASM_asmdg DBAUser  
hares -modify ASM_asmdg DBAPword  
hares -modify ASM_asmdg DiskGroups  DATA
hares -modify ASM_asmdg EnvFile  
hares -modify ASM_asmdg Encoding  
hares -modify ASM_asmdg Enabled 1

hares -add  IP_oraprod  IP  oraasm_grp
hares -modify IP_oraprod Critical 1
hares -modify IP_oraprod ArpDelay  1
hares -modify IP_oraprod IfconfigTwice  0
hares -modify IP_oraprod PrefixLen  0
hares -modify IP_oraprod Device  lan2
hares -modify IP_oraprod Address  10.10.10.146
hares -modify IP_oraprod NetMask  255.255.252.0
hares -modify IP_oraprod Options  
hares -modify IP_oraprod RouteOptions  
hares -modify IP_oraprod Enabled 1

hares -add  NIC_oraprod  NIC  oraasm_grp
hares -modify NIC_oraprod Critical 1
hares -modify NIC_oraprod NetworkType  ether
hares -modify NIC_oraprod PingOptimize  1
hares -modify NIC_oraprod Protocol  IPv4
hares -modify NIC_oraprod Device  lan2
hares -modify NIC_oraprod NetworkHosts host85  host83  host81
hares -modify NIC_oraprod Enabled 1

hares -add  LSNR_oraprod_lsnr  Netlsnr  oraasm_grp
hares -modify LSNR_oraprod_lsnr Critical 1
hares -modify LSNR_oraprod_lsnr Listener  LISTENER
hares -modify LSNR_oraprod_lsnr MonScript  ./bin/Netlsnr/LsnrTest.pl
hares -modify LSNR_oraprod_lsnr Owner  oracle
hares -modify LSNR_oraprod_lsnr Home  /u01/app/11.2.0/grid
hares -modify LSNR_oraprod_lsnr TnsAdmin  /u01/app/11.2.0/grid/admin
hares -modify LSNR_oraprod_lsnr EnvFile  
hares -modify LSNR_oraprod_lsnr LsnrPwd  
hares -modify LSNR_oraprod_lsnr Encoding  
hares -modify LSNR_oraprod_lsnr Enabled 1

hares -add  ORA_oraprod  Oracle  oraasm_grp
hares -modify ORA_oraprod Critical 1
hares -modify ORA_oraprod StartUpOpt  STARTUP_FORCE
hares -modify ORA_oraprod ShutDownOpt  IMMEDIATE
hares -modify ORA_oraprod AutoEndBkup  1
hares -modify ORA_oraprod MonScript  ./bin/Oracle/SqlTest.pl
hares -modify ORA_oraprod MonitorOption  0
hares -modify ORA_oraprod ManagedBy  ADMIN
hares -modify ORA_oraprod Sid  HPUX
hares -modify ORA_oraprod Owner  oracle
hares -modify ORA_oraprod Home  /u01/app/oracle/product/11.2.0/dbhome_1
hares -modify ORA_oraprod Pfile  
hares -modify ORA_oraprod DBAUser  system
hares -modify ORA_oraprod DBAPword  ameMbmOmhMfoCmpOfoD
hares -modify ORA_oraprod EnvFile  
hares -modify ORA_oraprod User  
hares -modify ORA_oraprod Pword  
hares -modify ORA_oraprod Table   
hares -modify ORA_oraprod Encoding  
hares -modify ORA_oraprod DBName  
hares -modify ORA_oraprod Enabled 1

Add the lines below to main.cf to create the link relationships:
requires group asminstgrp online local firm
IP_oraprod requires NIC_oraprod
LSNR_oraprod_lsnr requires IP_oraprod
LSNR_oraprod_lsnr requires ORA_oraprod
ORA_oraprod requires ASM_asmdg


Sample main.cf
host81-root>cat /etc/VRTSvcs/conf/config/main.cf
include "OracleASMTypes.cf"
include "types.cf"
include "CFSTypes.cf"
include "CRSResource.cf"
include "CVMTypes.cf"
include "Db2udbTypes.cf"
include "MultiPrivNIC.cf"
include "OracleTypes.cf"
include "PrivNIC.cf"
include "SybaseTypes.cf"

cluster hp818385 (
        UserNames = { admin = hqrJqlQnrMrrPzrLqo }
        Administrators = { admin }
        UseFence = SCSI3
        HacliUserLevel = COMMANDROOT
        )

system host81 (
        )
system host83 (
        )
system host85 (
        )

group asminstgrp (
        SystemList = { host85 = 0, host83 = 1, host81 = 2 }
        Parallel = 1
        AutoStartList = { host85, host83, host81 }
        )

        ASMInst ASM_asminst (
                Sid = "+ASM"
                Owner = oracle
                Home = "/u01/app/11.2.0/grid"
                )
        // resource dependency tree
        //
        //      group asminstgrp
        //      {
        //      ASMInst ASM_asminst
        //      }

group oraasm_grp (
        SystemList = { host85 = 0, host83 = 1, host81 = 2 }
        AutoStartList = { host85 }
        )

        ASMDG ASM_asmdg (
                Sid = "+ASM"
                Owner = oracle
                Home = "/u01/app/11.2.0/grid"
                DiskGroups = { DATA }
                )

        IP IP_oraprod (
                Device = lan2
                Address = "10.10.10.146"
                NetMask = "255.255.252.0"
                )

        NIC NIC_oraprod (
                Device = lan2
              NetworkHosts = { "10.10.10.85", "10.10.10.83", "10.10.10.81" }
                )

        Netlsnr LSNR_oraprod_lsnr (
                Owner = oracle
                Home = "/u01/app/11.2.0/grid"
                TnsAdmin = "/u01/app/11.2.0/grid/admin"
                )

        Oracle ORA_oraprod (
                Sid = HPUX
                Owner = oracle
                Home = "/u01/app/oracle/product/11.2.0/dbhome_1"
                DBAUser = "system"
                DBAPword = ameMbmOmhMfoCmpOfoD
                )

        requires group asminstgrp online local firm
        IP_oraprod requires NIC_oraprod
        LSNR_oraprod_lsnr requires IP_oraprod
        LSNR_oraprod_lsnr requires ORA_oraprod
        ORA_oraprod requires ASM_asmdg


        // resource dependency tree
        //
        //      group oraasm_grp
        //      {
        //      Netlsnr LSNR_oraprod_lsnr
        //          {
        //          IP IP_oraprod
        //              {
        //              NIC NIC_oraprod
        //              }
        //          Oracle ORA_oraprod
        //              {
        //              ASMDG ASM_asmdg
        //              }
        //          }
        //      }


Oracle SGA and PGA fine tuning

If system memory becomes insufficient because of Oracle memory allocation, the first link below explains how PGA and SGA memory allocation is calculated, and the second link shows how to set the memory parameters.

http://www.dba-oracle.com/art_dbazine_ram.htm
http://orafaq.com/wiki/SGA_target

Click here to view the full explanation on Memory Architecture from Oracle for 11gR2.
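
As a quick illustration (not taken from the links above; the 2G/1G values are placeholders and assume sga_max_size is large enough), the targets can be checked and adjusted from sqlplus:

SQL> show parameter sga_target
SQL> show parameter pga_aggregate_target
SQL> alter system set sga_target=2G scope=both;
SQL> alter system set pga_aggregate_target=1G scope=both;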

Wednesday, September 21, 2011

LVM SG transition fails when all paths are disabled in Linux

If all the paths to the disks are disabled, the LVM2 VG commands stop responding and wait until at least one path to the disks is restored. According to the Release Notes, this is a known issue.

Because the LVMVolumeGroup agent uses LVM2 commands, this behaviour causes the online and offline entry points of the LVMVolumeGroup agent to time out, and the clean entry point stops responding for an indefinite time. Because of this, the service group cannot fail over to another node.

Workaround: You need to restore at least one path.
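
To confirm that at least one path has come back before retrying, the path state can be checked from whichever multipathing layer is in use; treat these as hedged examples rather than the documented procedure (the DMP node name is a placeholder):

# multipath -ll
# vxdmpadm getsubpaths dmpnodename=emc0_1234
# pvs   (the LVM2 commands respond again once a path is restored)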

Tuesday, September 20, 2011

Remove clone_disk flag on VxVM

When VxVM devices are shown with the clone_disk flag, the command below allows the flag to be cleared.
Refer to http://www.symantec.com/business/support/index?page=content&id=TECH70049 for more detail.

 # vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
emcpoweraa   auto:cdsdisk    emcpowerq    vmax_vcsdg   online clone_disk
emcpowerab   auto:cdsdisk    emcpowerp    vmax_vcsdg   online clone_disk

# for i in `vxdisk list |grep clone_disk | awk '{print $1}'`; do echo $i; vxdisk set $i clone=off; done

# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
emcpoweraa   auto:cdsdisk    emcpowerq    vmax_vcsdg   online
emcpowerab   auto:cdsdisk    emcpowerp    vmax_vcsdg   online

Update VxVM and ASL/APM packages on Linux

Command to update VxVM ASL/APM.

 # ls -l
total 19968
-rw-r--r-- 1 root root   158990 Sep 21 10:50 VRTSaslapm-5.0.000.000-internal_SLES11.x86_64.rpm
-rw-r--r-- 1 root root 20258631 Sep 21 10:50 VRTSvxvm-5.0.000.000-internal_SLES11.x86_64.rpm

 # rpm -Uvh --force *.rpm --nodeps
Preparing...                ########################################### [100%]
stopping vxconfigd
   1:VRTSvxvm               ########################################### [ 50%]
Installing file /etc/init.d/vxvm-boot
vxvm-boot                 0:off  1:off  2:on   3:on   4:on   5:on   6:off
creating VxVM device nodes under /dev
Installing keys for APMs
   2:VRTSaslapm             ########################################### [100%]
Installing keys for APMs
#shutdown -r now

Unzip file failure on HP-UX

If there is a problem unzipping a file on HP-UX, e.g. the unzip_hpx32 utility provided by Oracle fails to unzip the Oracle product zip files, try using the Java "jar" tool to extract the file instead; it works.
Credit to http://www.unix.com/hp-ux/35001-how-unzip-zip-file-unix-without-using-unzip-cmd.html

abc1234-root>./unzip_hpx32 p10098816_112020_HPUX-IA64_3of7.zip        
Archive:  p10098816_112020_HPUX-IA64_3of7.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of p10098816_112020_HPUX-IA64_3of7.zip or
        p10098816_112020_HPUX-IA64_3of7.zip.zip, and cannot find p10098816_112020_HPUX-IA64_3of7.zip.ZIP, period.



abc1234-root>/opt/java6/bin/jar xvf p10098816_112020_HPUX-IA64_3of7.zip
.....
.....
inflated: database/stage/productlanguages.properties
 inflated: database/readme.html
abc1234-root>/

Monday, September 12, 2011

Remove PowerPath failure with VxVM on AIX

If PowerPath devices are in use by LVM/VxVM/VCS on AIX, uninstalling or upgrading PowerPath will fail. Follow this workaround http://www.symantec.com/business/support/index?page=content&id=TECH68947 to resolve the problem.
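
The idea is the same as on Linux below: release the devices before touching the package. A minimal sketch only, assuming no LVM volume groups remain active on the hdiskpower devices and that the PowerPath fileset is named EMCpower (check with lslpp -L on your system, and follow the Symantec article above for the supported procedure):

# for i in `vxdisk list | grep hdiskpower | awk '{print $1}'`; do vxdisk rm $i; done   (remove the devices from VxVM control)
# powermt remove dev=all   (remove the PowerPath pseudo devices)
# installp -u EMCpower   (fileset name may differ by PowerPath version)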

Remove PowerPath failure with VxVM on Linux

PowerPath removal fails when the underlying PowerPath devices are in use by Veritas Volume Manager. Therefore, it is necessary to remove the PowerPath devices from VxVM before removing the PowerPath package.

Steps to remove PowerPath with VxVM on Linux
1. for i in `vxdisk list | grep emcpower |awk '{print $1}'`;do vxdisk rm $i;done
2. /etc/init.d/PowerPath stop
3. rpm -e EMCpower.LINUX-5.1.0-194

Refer to http://www.symantec.com/business/support/index?page=content&id=TECH70180 for more detail.

Tuesday, September 6, 2011

Create EMC Symmetrix thin devices

This post shows the steps to create a thin pool, add and remove components from the pool, and delete the thin pool. TDAT and TDEV devices must be removed from the pool before it can be deleted. Thin reclamation at the storage level is also shown here.

Create Symmetrix thin devices
symcfg -sid 1234 list -datadev : list all TDAT devices
symcfg -sid 1234 list -tdev       : list all TDEV devices
symdev list -datadev -nonpooled -sid 1234 : Check which TDAT devices not in any thin pool
symconfigure -sid 1234 -cmd "create pool test type=thin;" commit -nop : create a thin pool
symconfigure -sid 1234 -cmd "add dev f3d::f43 to pool test type=thin, member_state=ENABLE;" commit  -nop : add TDAT to a thin pool
symcfg -sid 1234 list -pools -thin -mb : list all the thin pools
symcfg -sid 1234 show -pool test -thin -mb -detail -all  : show detail of "test" thin pool
symconfigure -sid 1234 -cmd "bind tdev f25:f28 to pool test;" commit -nop : bind TDEV to "test" thin pool
symmask -sid 1234 -dir 4a -p 0 -wwn $a add devs f25:f28 : present TDEV to DMX masking
symaccess -sid 1234 -name testing_sg -type storage add devs f25:f28 : present TDEV to VMAX storage group

Remove TDAT device in pool
symconfigure -sid 1234 -cmd "disable dev f3d:f3e in pool test, type=thin;" commit : disable TDAT
symconfigure -sid 1234 -cmd "remove dev f3d:f3e from pool test, type=thin;" commit -nop : remove TDAT

Unbind TDEV device in pool
symdev -sid 1234 not_ready -range f25:f28 : set TDEV device not ready
symconfigure -sid 1234 -cmd "unbind tdev f25:f28 from pool test;" commit -nop : remove TDEV from "test" pool

Rename thin pool
symconfigure -sid 1234 -cmd "rename pool test to test123 type = thin;" commit -nop; : rename thin pool

Delete thin pool
symconfigure -sid 1234 -cmd "delete pool test type=thin;" commit -nop : delete "test" thin pool

Thin Reclamation on Array level
symconfigure -sid 1234 -cmd "start free on tdev 68B:68E start_cyl=0 end_cyl=last_cyl type=reclaim;" commit -nop (to start reclaim)
symconfigure -sid 1234 -cmd "stop free on tdev 68B:68E type=reclaim;" commit -nop (to stop reclaim)

Monday, September 5, 2011

EMC Symmetrix SYMCLI Cheat sheet

These are some basic EMC Solutions Enabler commands to perform tasks on Symmetrix arrays. Take note that symmask is used for DMX whereas symaccess is used for VMAX for device masking. Credit to http://www.sanduel.com/SAN-Storage-Commands/EMC-Symmetrix-SYMCLI-Cheat-sheet.html for this post.


symcfg
symcfg list : A brief description of the all connected Symmetrix boxes.
symcfg -sid 1234 list -lockn all : Lists all the external locks held in Symmetrix array 1234.
symcfg -sid 1234 -lockn 15 release -force : Release the lock 15 held on array 1234 .
symcfg -sid 1234 list -v : Displays detailed information about the Symmetrix Array 1234.
symcfg -sid 1234 -dir 4a -p 0 list -addr -avail : Lists the LUN information / availability of lun ids on port 4a:0 in array 1234 .
symcfg -sid 1234 list -rdfg all : List details about all the rdf groups in array.
symcfg -sid 1234 list -rdfg 3 : List details about rdf group 3 .
symcfg -sid 1234 list -rdfg all -dynamic : List details about all the dynamic rdf groups in array .
symcfg -sid 1234 list -rdfg all -static : List details about all the static rdf groups in array .
symcfg -sid 1234 list -ra all : List all RA ports with details like rdfg number , remote array sid and online status.
symcfg -sid 1234 list -connection : List all connections
symcfg -sid 1234 -application  list : List all application
symcfg discover : Scans all the devices in hosts looking for new symmetrix devices and rebuilds the symmetrix configuration database.

symconfigure
symconfigure -sid 1234 -f /tmp/map.txt commit : map LUNs to port 4a:0 in array 1234
symconfigure -sid 1234 -cmd "set dev 68B:68E attribute=SCSI3_persist_reserv;" commit -nop : enable SCSI-key
symconfigure -sid 1234 -cmd "set dev 68B:68E attribute=NO SCSI3_persist_reserv;" commit -nop : disable SCSI3-key
symconfigure -sid 1234 -cmd "set port 4a:0 SPC2_Protocol_Version=DISABLE;" commit -nop : disable SPC2 attribute

eg. /tmp/map.txt (map or unmap)
AIX/Linux/Solaris
============
map 041F:042E  to dir 4a:0 lun=128;
unmap 041F:042E  from dir 4a:0 lun=128;

HPUX
=====
map dev 041F:042E  to dir 4a:0 vbus=A, target=01, lun=001;
unmap dev 041F:042E from dir 4a:0;

symdev
symdev -sid 1234 list : Lists all devices in symmetrix 1234.
symdev -sid 1234 list -noport : Lists the devices which are not mapped to any ports.
symdev -sid 1234 list -noport -meta : Lists all unmapped meta devices .
symdev -sid 1234 list -dynamic : Lists all devices whose dyn_rdf attribute set .
symdev -sid 1234 list -hotspare : Checks whether hotspare invoked in the array .
symdev -sid 1234 show ABC : Show detailed information about device ABC.
symdev -sid 1234 write_disable ABC -SA all : Write-disable the device ABC through all directors.
symdev -sid 1234 not_ready ABC -SA all : Set the device ABC not ready through all directors.

symmaskdb
symmaskdb -sid 1234 -dev ABC list assign : List the masking details of the dev ABC .
symmaskdb -sid 1234 -wwn xxxxxxx list devs : List the devices masked to the given wwn number .

symmask
symmask list hba : List the HBA details of the host.
symmask -sid 1234 -dir 4a -p 0 list logins : List out wwn s logged through port 4a:0 .
symmask -sid 1234 refresh : Refresh the VCM Data Base after a masking and unmasking operation.
symmask -sid 1234 -wwn xxxx -dir 4a -p 0 add devs ABC,ABD : Mask the devices ABC and ABD to the given wwn in array 1234.
symmask -sid 1234 -wwn xxxx -dir 4a -p 0 remove devs ABC,ABD : Unmask the devices ABC and ABD from the given wwn in array 1234.

symdg
symdg -sid 1234 list : List the device groups which include the devices from array 1234.
symdg create mydg -type rdf1 : Create device group mydg of rdf1 type .
symdg show mydg : Shows the members/details of mydg.
symdg rename mydg yourdg : Renames the mydg to yourdg.
symdg delete mydg -force : Delete device group mydg.

symld
symld -g mydg -sid 1234 add dev ABC DEV006 : Add the RDF device ABC to device group mydg as DEV006
symld -g mydg remove DEV006 : Remove DEV006 form device group mydg.

symrdf
symrdf -sid 1234 -rdfg 3 -type rdf1 -file rdf.txt -g mydg createpair -establish : Establish the SRDF relation between the devices given in the file rdf.txt from array 1234(R1) and remote box according to the rdf group . This command start sync between R1 and R2, and also add these devices after creating the device group mydg.
symrdf -sid 1234 -rdfg 3 -file rdf.txt query : Query the Devices by using the device pair file.
symrdf -g mydg set mode acp_disk : Set syncing mode to Adaptive Copy.
symrdf -g mydg query : Query the device group.
symrdf -g mydg split : Split the dynamic srdf pair.
symrdf -sid 1234 -rdfg 3 -file rdf.txt deletepair -force : Delete the srdf pairing between R1/R2 and return the devices to normal.

symdisk
symdisk -sid 1234 list -hotspare : Lists the hotspares configured in the array.
symdisk -sid 1234 list -by_diskgroup : Displays all the disks in the array grouped by disk group.
symdisk -sid 1234 list -disk_group 1 : Displays all the disks in disk group 1.

symaccess
#### Creation / Deletion of SG
symaccess -sid 1234 -name test_sg -type storage create
symaccess -sid 1234 -name test_sg -type storage add devs XX,YY,AA:BB
symaccess -sid 1234 -name test_sg -type storage create dev XX,YY,AA:BB (create and add)
symaccess -sid 1234 -name test_sg -type storage delete -force

#### Creation / Deletion of PG
symaccess -sid 1234 -name test_pg -type port create
symaccess -sid 1234 -name test_pg -type port -dirport 8E:0,9E:0 add
symaccess -sid 1234 -name test_pg -type port create -dirport 8E:0,9E:0 (create and add)
symaccess -sid 1234 -name test_pg -type port delete -force

#### Creation / Deletion of IG
symaccess -sid 1234 -name test_ig -type initiator create
symaccess -sid 1234 -name test_ig -type initiator -wwn XXX add
symaccess -sid 1234 -name test_ig -type initiator -wwn YYY add
symaccess -sid 1234 -name test_ig -type initiator create -file IG_hpux_wwn.txt (create and add)
symaccess -sid 1234 -name test_ig -type initiator delete -force
symaccess -sid 1234 -name test_ig -type initiator set ig_flags on V -enable (set V bit for HP-UX)
eg. IG_hpux_wwn.txt
wwn:2101001b32300000

#### Creation / Deletion of Masking View
symaccess -sid 1234 create view -name test_view -storgrp test_sg -portgrp test_pg -initgrp test_ig
symaccess -sid 1234 delete view -name test_view

#### list devices/wwn in storage/initiator group
symaccess -sid 1234 list dev C8B -type storage
symaccess -sid 1234 list dev C8B:C9B -type storage
symaccess  -sid 1234 list -wwn 2101001b323f528d -type initiator

AIX VCS LVMVG SG Creation

This example shows the steps to use DMP as the multipathing software for a VCS LVMVG service group on AIX.

Step 1: Ensure Enclosure Based Naming (EBN) is used and dmp_native_support is on
vxddladm get namingscheme
vxddladm set namingscheme=ebn [persistence={yes|no}] [use_avid=yes|no] [lowercase=yes|no]
vxdmpadm gettune |grep dmp_native_support
vxdmpadm settune dmp_native_support=on

Step 2: Remove the device from VxVM and create the LVM VG on node1
vxdisk rm emc0_25
chpv -C emc0_25 (clear PVID if any)
mkvg -f -y vmaxlvmvg emc0_25
mklv -t jfs2log vmaxlvmvg 1
mklv -t jfs2 -y vmaxlvmlv vmaxlvmvg 1G
mkfs -o log=/dev/loglv01 -V jfs2 /dev/vmaxlvmlv
crfs -v jfs2 -m <mount point> -d <logical volume>
lspv |grep vmaxlvmvg (identify PVID)

Step 3: Set the physical volume identifier (PVID) on the corresponding device on node2
chdev -l emc1_25 -a pv=yes
lsattr -El emc1_25

Step 4: Export/Import the VG on both cluster nodes
varyoffvg vmaxlvmvg (node1)
exportvg vmaxlvmvg
lvlstmajor (find a free major number available on both nodes)
importvg -V major_number -y vmaxlvmvg emc0_25
varyoffvg vmaxlvmvg

lvlstmajor (node2)
importvg -V major_number -y vmaxlvmvg emc1_25
varyoffvg vmaxlvmvg



If there is any problem bringing the resource online under VCS, refer to http://www.symantec.com/business/support/index?page=content&id=TECH70949 for troubleshooting.

Note: If a third-party driver (TPD mode), e.g. PowerPath, is used for multipathing, Step 1 can be skipped and the PowerPath pseudo name is normally used for VG creation, i.e. mkvg -y vmaxlvmvg hdiskpower25. Refer to http://sfdoccentral.symantec.com/sf/5.1SP1PR1/aix/pdf/dmp_admin_51sp1pr1_aix.pdf page 154 for more details.
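
Once the VG imports cleanly on every node, the VCS side is just an LVMVG resource plus a Mount resource. Below is a minimal sketch only; the group, resource, mount point and major number values are placeholders, and the full attribute list should be checked against the VCS Bundled Agents Reference for AIX:

haconf -makerw
hagrp -add lvmvg_grp
hagrp -modify lvmvg_grp SystemList node1 0 node2 1
hares -add vmaxlvm_vg LVMVG lvmvg_grp
hares -modify vmaxlvm_vg VolumeGroup vmaxlvmvg
hares -modify vmaxlvm_vg MajorNumber 55   (the major number chosen with lvlstmajor above)
hares -add vmaxlvm_mnt Mount lvmvg_grp
hares -modify vmaxlvm_mnt MountPoint /vmaxlvmfs
hares -modify vmaxlvm_mnt BlockDevice /dev/vmaxlvmlv
hares -modify vmaxlvm_mnt FSType jfs2
hares -modify vmaxlvm_mnt FsckOpt %-y
hares -link vmaxlvm_mnt vmaxlvm_vg
haconf -dump -makero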

VCS SG Linux LVM failure due to all paths disabled

There is a limitation with the VCS LVMVolumeGroup agent and Linux LVM that prevents the LVM service group from failing over to another node when all paths are disabled on the active node. It is a known issue which is documented.

Refer to VCS release notes https://sort.symantec.com/public/documents/sfha/5.1sp1pr2/linux/productguides/pdf/vcs_notes_51sp1pr2_lin.pdf page 43 for more details.

Thursday, September 1, 2011

Oracle RAC 11gR2 Issues

This post lists some of the Oracle articles I found in the Oracle Support knowledge base during my Oracle RAC 11gR2 installation. You might find them useful if you encounter similar issues. Take note that a login account is required to access the website.

Article ID
CLUVFY Fails with TCP Check PRVF-7617 Due to Case of Node Names [ID 1286394.1]
Troubleshooting 11.2 Grid Infrastructure Installation Root.sh Issues [ID 1053970.1]
CRS Does not Start after Node Reboot in 11gR2 Grid Infrastructure [ID 1215893.1]
Grid Infrastructure Installation: root.sh Stalls Due to Improper localhost Setting [ID 1155903.1]
Recommendation for the Real Application Cluster Interconnect and Jumbo Frames [ID 341788.1]

Oracle Cluster Verification Utility

Oracle provides cluster node verification prior to installation. It is very helpful for Oracle RAC pre-installation to check that the nodes meet the minimum system requirements, that the network interfaces are available, that the required patches are applied, and so on.

Oracle Grid pre-installation
To verify whether your system meets all of the criteria for an Oracle Clusterware installation:
Windows:
cluvfy stage -pre crsinst -n node1,node2,node3
Unix:
runcluvfy.sh stage -pre crsinst -n node1,node2,node3

After you have completed phase one, verify that Oracle Clusterware is functioning properly before proceeding with phase two of your Oracle RAC installation:
Windows:
cluvfy stage -post crsinst -n node1,node2,node3 -v
Unix:
runcluvfy.sh stage -post crsinst -n node1,node2,node3 -v

To verify whether your system meets all of the criteria for an Oracle RAC installation:
Windows:
cluvfy stage -pre dbinst -n node1,node2,node3 -v
Unix:
runcluvfy.sh stage -pre dbinst -n node1,node2,node3 -v

Oracle Database pre-installation
To verify whether your system meets all of the criteria for creating a database or for making a database configuration change:
Windows:
cluvfy stage -pre dbcfg -n node1,node2,node3 -d oracle_home -v
Unix:
runcluvfy.sh stage -pre dbcfg -n node1,node2,node3 -d oracle_home -v

Refer to the Cluster Verification Utility Reference for a more detailed explanation.

Veritas DMP HP-UX native LVM configuration

HP-UX native LVM is supported with Veritas DMP for multipathing. The dmp_native_support parameter must be turned on and the Enclosure Based Naming (EBN) scheme must be used in order for DMP to manage LVM.

#vxddladm get namingscheme
#vxddladm set namingscheme=ebn persistence=yes

#vxdmpadm gettune
#vxdmpadm settune dmp_native_support=on

On HP-UX 11.31 1003IC, the pvcreate utility fails to create physical volumes on DMP devices. As a workaround, use the corresponding native multipathing (nMP metanode/OS path) device to create the physical volume with pvcreate.

Instead of "pvcreate -f /dev/vx/dmp/emc_clariion0_1", "pvcreate -f /dev/rdisk/disk110" should be used.

Wednesday, August 31, 2011

Veritas Thin Reclamation on EMC Storage

Here are some simple steps to perform thin reclamation with Veritas Volume Manager, provided all the necessary prerequisites on the host, storage and VxVM are in place.

Refer to VxVM version and Array firmware requirements.

Step 1: Ensure the device TYPE is thinrclm
#vxdisk -o thin list
DEVICE          SIZE(mb)     PHYS_ALLOC(mb)  GROUP           TYPE     
emc_clariion0_63 102400       N/A             CX_thin     thinrclm 
emc0_1766       1031         N/A              VMAX_thin thinrclm
emc0_1767       1031         N/A              VMAX_thin thinrclm
emc0_1768       1031         N/A              VMAX_thin thinrclm
emc0_1760       1031         N/A              VMAX_thin thinrclm

Step 2: Enable DMP logging to syslog if you want to view the reclaim requests (entries such as "write_same" with offset and length)
#vxdmpadm settune dmp_log_level=3

Step 3: Perform Thin Reclamation via:
#vxdisk reclaim CX_thin
Reclaiming thin storage on:
Disk emc_clariion0_63 : Done.

#vxdisk reclaim VMAX_thin
Reclaiming thin storage on:
Disk emc0_1768 : Done.
Disk emc0_1769 : Done.
Disk emc0_1766 : Done.
Disk emc0_1767 : Done.

VCS file system mount failure due to timeout

If there are many mount points under VCS control, increasing the OnlineTimeout value for the Mount resource type will prevent Mount resource failures caused by the time constraint during switchover or failover. Refer to Modifying mount resource attributes for more detail.

The commands are:
/opt/VRTS/bin/haconf -makerw
/opt/VRTS/bin/hatype -modify Mount OnlineTimeout 600
/opt/VRTS/bin/haconf -dump
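
To confirm the new value has taken effect, a quick check (not part of the original steps):

/opt/VRTS/bin/hatype -display Mount | grep OnlineTimeout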


Error grabbed from engine_A.log
2010/08/10 15:21:24 VCS INFO V-16-2-13078 (node2) Resource(vcsfs1) - clean completed successfully after 1 failed attempts.
2010/08/10 15:21:24 VCS INFO V-16-2-13071 (node2) Resource(vcsfs1): reached OnlineRetryLimit(0).
2010/08/10 15:21:24 VCS ERROR V-16-1-10303 Resource vcsfs1 (Owner: Unspecified, Group: vcs) is FAULTED (timed out) on sys node2

SF Oracle RAC (SFRAC) service groups go into partial state after path failures

When path failures on SFRAC cluster node1 cause the service groups on node2 and/or node3 to go into a PARTIAL state, this is due to the default disk detach policy being set to global in the shared diskgroup. "vxdg" can be used to view the setting and "vxedit" can be used to change it. Refer to What is the disk detach policy for shared disk groups and how can it be changed? for more detail.

#vxdg list sfracdg
#vxedit -g sfracdg set diskdetpolicy=local sfracdg

# hastatus -sum
-- SYSTEM STATE
-- System               State                Frozen
A  node1           RUNNING              0
A  node2           RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State
B  cvm             node1           Y          N   ONLINE
B  cvm             node2           Y          N   ONLINE
B  sfrac           node1           Y          N   PARTIAL
B  sfrac           node2           Y          N   PARTIAL

VCS fails to go to the running state on HP-UX 11.31 Fusion

VCS fails to go to the running state on HP-UX 11.31 with March 2011 release

Due to a regression caused by the patch PHKL_41700 (QXCR1001078659) that went into the HP-UX 11.31 March 2011 release, the select() call takes a long time to return from 'timeout sleep'. Because of this, _had misses heartbeats with GAB, resulting in a SIGABRT from GAB. [2287383]

Workaround: You must tune 'hires_timeout_enable' kernel parameter to 1 before starting the cluster. Run the following command to set this variable to 1:

# kctune hires_timeout_enable=1

Refer to VCS fails to go to the running state on HP-UX 11.31 with March 2011 release for detail

VCS Linux LVM Resource using PowerPath as third party driver multipathing

If PowerPath is being used as the multipathing software for a VCS LVMVolumeGroup resource, the PowerPath pseudo name must be used for PV creation and the native OS path names must be filtered out of LVM, as shown in the lvm.conf filter below. Otherwise, LVM will not fail over properly on device path failure, causing the VCS LVM service group to fail as well.

# more /etc/lvm/lvm.conf |grep filter |grep -v "#"
    filter = [ "r|^/dev/(sda)[0-9]*$|", "r|^/dev/(sda)[a-z]*$|", "r|^/dev/(sdc)[a-z]*$|", "r|/dev/vx/dmp/.*|", "r|/dev/block/.*|", "r|/dev/VxDMP.*|", "r|/dev/vx/dmpconfig|", "r|/dev/vx/rdmp/.*|", "r|/dev/dm-[0-9]*|", "r|/dev/mpath/mpath[0-9]*|", "r|/dev/mapper/mpath[0-9]*|", "r|/dev/disk/.*|","r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "a/.*/" ]

# powermt display dev=emcpowerar |egrep "sda|sdc"
   6 qla2xxx                  sdaf      FA  8fB   active  alive       0      0
   5 qla2xxx                  sdcb      FA  7fB   active  alive       0      0
# powermt display dev=emcpoweras |egrep "sda|sdc"
   6 qla2xxx                  sdae      FA  8fB   active  alive       0      0
   5 qla2xxx                  sdca      FA  7fB   active  alive       0      0
# powermt display dev=emcpoweraq |egrep "sda|sdc"
   6 qla2xxx                  sdag      FA  8fB   active  alive       0      0
   5 qla2xxx                  sdcc      FA  7fB   active  alive       0      0

#vgdisplay -v vmax

--- Volume group ---
  VG Name               vmax
  System ID            
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  58
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               3.01 GB
  PE Size               4.00 MB
  Total PE              771
  Alloc PE / Size       700 / 2.73 GB
  Free  PE / Size       71 / 284.00 MB
  VG UUID               buYhkI-iIHB-ppPr-sgar-DlI8-6KeK-bweUd0
  
  --- Logical volume ---
  LV Name                /dev/vmax/vmaxvol
  VG Name                vmax
  LV UUID                ip0WHa-nCgP-wNbe-61yT-3T7d-LHoq-aVTLod
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                2.73 GB
  Current LE             700
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0
  
  --- Physical volumes ---
  PV Name               /dev/emcpoweraq    
  PV UUID               xFoNsX-47n2-mBeL-UkxM-iZ5p-lIcw-xJndJV
  PV Status             allocatable
  Total PE / Free PE    257 / 0
  
  PV Name               /dev/sdcb     (improper LVM filtering)
  PV UUID               UV5ip7-T73S-Hbnp-w6bM-CrIs-7tYj-ti5g9C
  PV Status             allocatable
  Total PE / Free PE    257 / 0
  
  PV Name               /dev/sdca      (improper LVM filtering)
  PV UUID               lsYJmT-Etak-Gz27-9yG1-Hzxv-EP7T-G48ZyJ
  PV Status             allocatable
  Total PE / Free PE    257 / 71

# vgscan
  Reading all physical volumes.  This may take a while...
  Found duplicate PV 7WcanQm06zYfK0TdUPo7ABQy0cR4AHRN: using /dev/emcpowerar not /dev/vx/dmp/pp_emc0_9
  Found duplicate PV P3gU7FTqwVpOzGmep2cf2yILcqxFlvHX: using /dev/emcpoweras not /dev/vx/dmp/pp_emc0_8
  Found duplicate PV BRnJgGZrMwliZFSmnCDOgH54HCIaIXKX: using /dev/vx/dmp/pp_emc0_10 not /dev/emcpoweraq  (improper LVM filtering)
  Found volume group "vmax" using metadata type lvm2
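
The duplicate PV messages and the "(improper LVM filtering)" entries above are what a bad filter looks like. As a hedged example only (the exact regular expressions depend on your device layout, e.g. the boot disks may still need to be accepted), a stricter filter that accepts only the emcpower pseudo devices removes the duplicates:

    filter = [ "a|^/dev/emcpower.*|", "r|.*|" ]

After correcting /etc/lvm/lvm.conf, re-run vgscan and vgdisplay -v to confirm that only /dev/emcpower devices are reported as physical volumes.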