Archive for March, 2012

restore or recover backup files/catalog from symantec netbackup

March 31st, 2012 No comments

1.How to restore files/catalog from symantec netbackup

On the client (here we take testserver_bak as an example), check the configured servers:
cat /usr/openv/netbackup/bp.conf
SERVER = testmedia1
SERVER = testmedia2
SERVER = testmedia3
CLIENT_NAME = testserver_bak

Then log in to the media server testmedia1 and check the backups.
bplist -C testserver_bak -b -l /export/home/zones/testserver_bak/root/export/home/oracle/
-rwxr--r-- oracle dba 1213 Feb 13 18:08 /export/home/zones/testserver_bak/root/export/home/oracle/
-rwxr--r-- oracle dba 1213 Feb 06 18:40 /export/home/zones/testserver_bak/root/export/home/oracle/

To restore file/folder from netbackup server side:
bprestore -D testserver_bak -C testserver_bak -L /tmp/restore_log /export/home/zones/testserver_bak/root/export/home/oracle/

To restore file/folder from client side:
/usr/openv/netbackup/bin/bprestore -L /tmp/restor_log /export/home/zones/testserver_bak/root/export/home/oracle/

You can even specify exact dates for the restore (to restore data from a specific date): -s is the start date and -e the end date:
/usr/openv/netbackup/bin/bprestore -s 03/13/2012 -e 03/13/2012 -L /tmp/restor_log /apps/kua/
A successful start of the restore procedure can be verified in /tmp/restor_log:
Restore started 03/27/2012 12:41:33
12:41:40 ( Restore job id 3082842 will require 1 image.
12:41:40 ( Media id A06957 is needed for the restore.
12:42:46 (3082842.001) Restoring from image created Tue Mar 13 16:29:32 2012
12:45:13 (3082842.001) TAR STARTED
12:45:13 (3082842.001) INF - If Media id A06957 is not in a robotic library administrative interaction may be required to satisfy this mount request.
12:47:07 (3082842.001) INF - Waiting for mount of media id A06957 on server testmedia1 for reading.
12:48:44 (3082842.001) INF - Waiting for positioning of media id A06957 on server testmedia1 for reading.
12:50:12 (3082842.001) INF - Beginning restore from server testmedia1 to client testserver_bak.
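Rather than eyeballing the log, the check can be scripted. A small sketch, seeding a temp file with two of the sample log lines above (on a real restore you'd point `log` at /tmp/restor_log instead):

```shell
# sketch: confirm a bprestore log shows the restore actually began
log=$(mktemp)                       # stand-in for /tmp/restor_log
# two of the sample lines from the log above
printf '%s\n' 'Restore started 03/27/2012 12:41:33' \
              '12:45:13 (3082842.001) TAR STARTED' > "$log"
if grep -q 'Restore started' "$log" && grep -q 'TAR STARTED' "$log"; then
    status="restore kicked off"
else
    status="restore did not start"
fi
echo "$status"
rm -f "$log"
```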

2.Some other useful symantec netbackup commands

Find the correct backup images

bpimagelist -U -client <CLIENT> -d <STARTDATE> -e <ENDDATE>

Find the media used for those images

bpimagelist -U -client <CLIENT> -d <STARTDATE> -e <ENDDATE> -media

testserver:root root # bpimagelist -U -client testserver_bak -d 03/01/2012 -e 03/23/2012
Backed Up Expires Files KB C Sched Type Policy
---------------- ---------- -------- -------- - ------------ ------------
03/22/2012 16:18 04/05/2012 146 116419 N Differential Linux_FS_TUE
03/21/2012 16:01 04/04/2012 150 115169 N Differential Linux_FS_TUE
03/20/2012 16:23 05/21/2012 146441 3834626 N Full Backup Linux_FS_TUE
03/19/2012 16:11 04/02/2012 127 111931 N Differential Linux_FS_TUE
03/18/2012 16:08 04/01/2012 143 113053 N Differential Linux_FS_TUE
03/17/2012 16:08 03/31/2012 128 113245 N Differential Linux_FS_TUE
03/16/2012 16:16 03/30/2012 1593 239091 N Differential Linux_FS_TUE
03/15/2012 16:16 03/29/2012 340 150502 N Differential Linux_FS_TUE
03/14/2012 16:13 03/28/2012 135 114239 N Differential Linux_FS_TUE
03/13/2012 16:29 05/14/2012 238178 13223523 N Full Backup Linux_FS_TUE
03/12/2012 16:08 03/26/2012 119 112315 N Differential Linux_FS_TUE
03/11/2012 16:08 03/25/2012 130 111193 N Differential Linux_FS_TUE
03/10/2012 16:13 03/24/2012 121 112892 N Differential Linux_FS_TUE
03/09/2012 16:20 03/23/2012 125 111418 N Differential Linux_FS_TUE
03/06/2012 16:23 05/14/2013 238164 13208390 N Full Backup Linux_FS_TUE

testserver:root root # bpimagelist -U -client testserver_bak -d 03/13/2012 -e 03/13/2012 -media
Media ID Last Written Server
-------- ---------------- ----------
A06957 03/13/2012 16:29 testmedia1
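If you save that `-media` output, the Media ID column is easy to pull out with awk (skipping the two header lines); a sketch using the sample output above:

```shell
# extract the Media ID column from saved `bpimagelist ... -media` output
out='Media ID Last Written Server
-------- ---------------- ----------
A06957   03/13/2012 16:29 testmedia1'
media=$(printf '%s\n' "$out" | awk 'NR>2 {print $1}')
echo "$media"
```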
Checking Drive Usage

You can verify tape drive usage and job status on the NetBackup server with the commands below.

Check the Netbackup jobs.
bpdbjobs -report | grep -i testserver_bak

Check job status
bpdbjobs -report -jobid 6813593
bpdbjobs -most_columns > /tmp/bpdbjobs.out

Check active jobs for node
bpdbjobs | grep testserver_bak | grep -i active

Check backup policy
bppllist -byclient testserver_bak

How to check whether a needed tape is in the library
When the tape is in the library:
bash-3.00$ /usr/openv/volmgr/bin/vmquery -m A06957
media ID: A06957
media type: 1/2" cartridge tape 3 (24)
barcode: A06957L3
media description: --
volume pool: Onsite (4)
robot type: TLD - Tape Library DLT
robot number: 0
robot slot: 374
robot control host: testserver
volume group: 000_00000_TLD
vault name: Vault
vault sent date: Wed Mar 14 11:08:26 2012
vault return date: ---
vault slot: 5562
vault session id: 1508
vault container id: -
created: Fri Apr 30 10:56:25 2010
assigned: Sat Mar 10 23:00:13 2012
last mounted: Tue Mar 13 11:55:07 2012
first mount: Sat May 01 02:01:57 2010
expiration date: ---
number of mounts: 135
max mounts allowed: ---
status: 0x0

When the tape is not in the library:
bash-3.00$ /usr/openv/volmgr/bin/vmquery -m A00006
media ID: A00006
media type: 1/2" cartridge tape 3 (24)
barcode: A00006L3
media description: --
volume pool: Onsite (4)
robot type: NONE - Not Robotic
volume group: Vaulted_Offsite
vault name: Vault
vault sent date: Mon Feb 06 11:36:59 2012
vault return date: ---
vault slot: 149
vault session id: 1480
vault container id: -
created: Wed Feb 27 13:54:54 2008
assigned: Thu Feb 02 18:18:14 2012
last mounted: Thu Feb 02 21:31:14 2012
first mount: Mon Mar 03 22:23:07 2008
expiration date: ---
number of mounts: 971
max mounts allowed: ---
status: 0x0


Categories: Hardware, Storage

resolved linux check qlogic/emulex hba firmware version and model name type howto

March 28th, 2012 1 comment

Here's a script to check a Linux HBA's model name and firmware version:

for SCSI in /sys/class/scsi_host/host*; do
[ -e ${SCSI}/modelname ] && echo -n 'Model Name ' && cat ${SCSI}/modelname
[ -e ${SCSI}/model_name ] && echo -n 'Model Name ' && cat ${SCSI}/model_name
[ -e ${SCSI}/fwrev ] && echo -n 'Firmware Version ' && cat ${SCSI}/fwrev
[ -e ${SCSI}/fw_version ] && echo -n 'Firmware Version ' && cat ${SCSI}/fw_version
done
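To see the output shape without an HBA present, the same loop can be exercised against a throwaway fake sysfs tree (QLE2460 and 4.04.09 are made-up sample values, not from the post):

```shell
# build a fake /sys/class/scsi_host-style tree and run the loop against it
base=$(mktemp -d)
mkdir -p "$base/host0"
echo "QLE2460" > "$base/host0/model_name"
echo "4.04.09" > "$base/host0/fw_version"
info=$(for SCSI in "$base"/host*; do
    [ -e "$SCSI/model_name" ] && echo -n 'Model Name ' && cat "$SCSI/model_name"
    [ -e "$SCSI/fw_version" ] && echo -n 'Firmware Version ' && cat "$SCSI/fw_version"
done)
echo "$info"
rm -rf "$base"
```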

Also, here's a script for you if you're checking batches of servers:

for i in `cat /home/doxer/servers_list_linux`; do
echo "">/root/.ssh/known_hosts
expect <<EOF
spawn ssh -l${USERNAME} -p22 $i "echo -n '====';hostname;cat /sys/class/scsi_host/host*/{modelname,model_name,fwrev,fw_version}"
set timeout 20
expect "*yes*"
send "yes\r"
expect "assword:"
send "${PASSWORD}\r"
expect eof
EOF
done

For solaris(solaris10), please refer to the following:

for i in `cat servers_list_solaris`; do
echo "">/root/.ssh/known_hosts
expect <<EOF
spawn ssh -l${USERNAME} -p22 $i "hostname;echo 'result from prtdiag -v';/usr/sbin/prtdiag -v|grep PCI;echo 'result from cfgadm -la|grep fabric';/usr/sbin/cfgadm -la|grep fabric;echo 'result from fcinfo hba-port';/usr/sbin/fcinfo hba-port -l|egrep 'OS Device Name|HBA Port WWN|Manufacturer|Firmware';echo 'result from prtpicl -v -c scsi-fcp';/usr/sbin/prtpicl -v -c scsi-fcp|egrep 'version|name'"
set timeout 20
expect "*yes*"
send "yes\r"
expect "assword:"
send "${PASSWORD}\r"
expect eof
EOF
done

For solaris 9:

for i in `cat servers_list_solaris`; do
echo "">/root/.ssh/known_hosts
expect <<EOF
spawn ssh -l${USERNAME} -p22 $i "hostname;echo 'result from prtdiag -v';/usr/sbin/prtdiag -v|grep PCI;echo 'result from cfgadm -la|grep fabric';/usr/sbin/cfgadm -la|grep fabric;echo 'result from prtpicl -v -c scsi-fcp';/usr/sbin/prtpicl -v -c scsi-fcp|egrep 'version|name'"
set timeout 20
expect "*yes*"
send "yes\r"
expect "assword:"
send "${PASSWORD}\r"
expect eof
EOF
done

And if you find the model is something like 375-3102-xx, you can just search for it in Google; you'll find this one is a Qlogic X6767A. If the result is like "driver-name   qlc", "version       ISP2312 Host Adapter fcode version 1.16 11/15/06", then you'll know it's a QLogic ISP2312 and the firmware (fcode) version is 1.16.

do not run ifconfig -a IPADDR on solaris or loopback ip address will become weird

March 28th, 2012 No comments

If you run ifconfig -a IPADDR (i.e. -a followed by an IP address) on a Solaris box, you'll find it gives this error message:

ifconfig: SIOCSLIFADDR: lo0: Cannot assign requested address

But actually, this command will set the loopback ip address to the one you designated with your command, which means the loopback address will become that IP address instead of 127.0.0.1. This is very bad, and many weird problems will occur later on; for example, your ssh connection will drop soon after that.


I've also tested it on a Linux box; Linux will not set the loopback ip address to the one designated with the command, but will give this error:

error fetching interface information: Device not found

Anyway, do not run ifconfig -a IPADDR on either Linux or Solaris, or you'll find your OS in trouble.

Categories: IT Architecture, Linux, Systems

resolved Disk group has no valid configuration copies

March 27th, 2012 No comments

We met this error message when trying to import testDG:

VxVM vxdg ERROR V-5-1-10978 Disk group testDG: import failed:
Disk group has no valid configuration copies

Here are the disks belonging to testDG:
root@doxer# vxdisk -eo alldgs list|grep testDG
emc0_0ccb auto:cdsdisk emc0_0ecc testDG online clone_disk sdlw srdf-r2
emc0_0c9d auto:cdsdisk emc00 testDG online clone_disk sdlr srdf-r2
emc0_2abf auto:cdsdisk emc03 testDG online clone_disk sdlo srdf-r2
emc0_2ab7 auto:cdsdisk emc02 testDG online clone_disk sdlj srdf-r2
emc0_2ac7 auto:cdsdisk emc01 testDG online clone_disk sdlp srdf-r2
emc0_2a07 auto:cdsdisk emc04 testDG online clone_disk sdky srdf-r2
emc0_2a17 auto:cdsdisk emc05 testDG online clone_disk sdlc srdf-r2

And after checking the disks' configuration, we found that some config copies were in disabled status:
root@doxer# vxdg list testDG
Group: testDG
dgid: 1324379725.160.doxer
import-id: 1024.132
flags: cds
version: 140
alignment: 8192 (bytes)
ssb: on
autotagging: on
detach-policy: global
dg-fail-policy: dgdisable
copies: nconfig=default nlog=default
config: seqno=0.1266 permlen=51360 free=51333 templen=13 loglen=4096
config disk emc0_0ccb copy 1 len=51360 disabled
config disk emc0_0c9d copy 1 len=51360 state=clean online
config disk emc0_2abf copy 1 len=51360 state=clean online
config disk emc0_2ab7 copy 1 len=51360 state=clean online
config disk emc0_2ac7 copy 1 len=51360 state=clean online
config disk emc0_2a07 copy 1 len=51360 state=clean online
config disk emc0_2a17 copy 1 len=51360 disabled
log disk emc0_0ccb copy 1 len=4096 disabled
log disk emc0_0c9d copy 1 len=4096
log disk emc0_2abf copy 1 len=4096
log disk emc0_2ab7 copy 1 len=4096
log disk emc0_2ac7 copy 1 len=4096
log disk emc0_2a07 copy 1 len=4096
log disk emc0_2a17 copy 1 len=4096 disabled

We could have re-initialised the config copies on the two failing disks from a known good state, but that was quite risky. Actually, this is a bug in this version of Veritas, whereby it identifies some disks as clones and some as normal disks. By default, Veritas refuses to import such a “mixed” configuration – it only allows all-normal or all-clone DGs. A workaround is to specify the “-o useclonedev=off” flag or to clear the “clone_disk” flag from the disks, i.e. vxdisk set $disk clone=off. After that, we can import the DG without failure. (This is only a workaround though; furthermore, we need to upgrade Veritas Storage Foundation and fix the disabled config replicas.)
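The clear-the-flag workaround can be scripted over the seven disks from the listing above; here it is as a dry run that only echoes the commands (drop the `echo`s to execute them on the real VxVM host):

```shell
# dry run: print the vxdisk/vxdg commands for the clone_disk workaround
plan=$(
for d in emc0_0ccb emc0_0c9d emc0_2abf emc0_2ab7 emc0_2ac7 emc0_2a07 emc0_2a17; do
    echo "vxdisk set $d clone=off"
done
echo "vxdg import testDG"
)
printf '%s\n' "$plan"
```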

The assumption is that it's some sort of bug related to the Veritas array library that causes the state of the clone flag to be updated in some cases when an SRDF failover is initiated. The inconsistent state of the disk group probably arose from the inconsistent SRDF state of the symdg before the failover. This looks to have been caused by someone adding a new disk to the symdg which was not in a synchronised state. This is similar to what we saw yesterday, which also had a disk added to its config in an SRDF split state, and I had to run an establish.

Categories: Hardware, Storage

general bash tips

March 25th, 2012 No comments

1.select in & case in
echo "DATABASE?"
select yn in "Yes" "No"; do
case $yn in
Yes) read -p "please type the host,user,pwd,db,use space to separate" host user pwd db
mysqldump -h $host -u $user -p$pwd --default-character-set=utf8 $db>/root/$wwwname.sql
break;;
No) echo "no database"
break;;
esac
done

2.for in do done
for i in /etc/profile.d/*.sh; do
if [ -r "$i" ]; then
. $i #execute it
fi
done

3.press enter to continue
echo "Vivek Gite"
read -p "Press [Enter] key to continue..."

4.functions && unset -f #remove function;declare -f list,declare -F <function name only>
echo "usage:`basename $0` start|stop process name"
if [ $# -ne 2 ]; then
exit 1
fi
case $OPT in
start|Start) echo "Starting..$PROCESSID";;
stop|Stop) echo "Stopping..$PROCESSID";;
esac
5.while do done shift
while [ $# -ne 0 ]; do
echo $1
shift
done

6.read && $IFS && readarray[mapfile] && source[.]
# Reading lines in /etc/fstab.
File=/etc/fstab
{
read line1
read line2
} < $File
echo "First line in $File is:"
echo "$line1"
echo "Second line in $File is:"
echo "$line2"

read -s -t 10 -p "hello" domain_name #timeout 10s;-s not showing input, like passwd
whois $domain_name

read -r login password uid gid info home shell <<< "$pwd" #<<< is a here string (<< would return and ask for input); -r: do not treat backslash as an escape character
printf "Your login name is %s, uid %d, gid %d, home dir set to %s with %s as login shell\n" $login $uid $gid $home $shell

7.declare && array
declare -i x=10 #-a,-A,-f,-r,-x;see Bash Builtin Commands > declare
declare -i y=10
declare -i z=0
z=$(( x + y ))
echo "$x + $y = $z"

declare -a hello=("hello world" world2)
echo ${hello[1]} #world2
echo ${hello[@]} #hello world world2;same as ${hello[*]}
echo ${#hello[1]} #length,6
echo ${#hello[*]} #2

declare -A hellos
hellos=([a]=hello1 [b]=hello2 [c]=hello3)
echo ${hellos[b]} #hello2

8.break && continue
# set an infinite while loop
while :
do
read -p "Enter number ( -9999 to exit ) : " n

# break while loop if input is -9999
[ $n -eq -9999 ] && { echo "Bye!"; break; }

isEvenNo=$(( $n % 2 )) # get modulus
[ $isEvenNo -eq 0 ] && echo "$n is an even number." || echo "$n is an odd number."
done


for i in something; do
while true; do
[ condition ] && break 2 #break 2 breaks out of both enclosing loops, the while and the for
done
done

# Generate a random number between 0 and 9 (an index into the 10 quotes below)
r=$(( $RANDOM % 10 )) #$RANDOM,0~32767

# Quotes author name
author="\t --Bhagavad Gita."

# Store cookies or quotes in an array
array=( "Neither in this world nor elsewhere is there any happiness in store for him who always doubts."
"Hell has three gates: lust, anger, and greed."
"Sever the ignorant doubt in your heart with the sword of self-knowledge. Observe your discipline. Arise."
"Delusion arises from anger. The mind is bewildered by delusion. Reasoning is destroyed when the mind is bewildered. One falls down when reasoning is destroyed."
"One gradually attains tranquillity of mind by keeping the mind fully absorbed in the Self by means of a well-trained intellect, and thinking of nothing else."
"The power of God is with you at all times; through the activities of mind, senses, breathing, and emotions; and is constantly doing all the work using you as a mere instrument."
"One who has control over the mind is tranquil in heat and cold, in pleasure and pain, and in honor and dishonor; and is ever steadfast with the Supreme Self"
"The wise sees knowledge and action as one; they see truly."
"The mind acts like an enemy for those who do not control it."
"Perform your obligatory duty, because action is indeed better than inaction." )

# Display a random message
echo ${array[$r]}
echo -e "$author"

11.Redirecting Standard Output and Standard Error
&>word #same as >word 2>&1
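A quick bash demo of that equivalence: both streams land in the same file.

```shell
# &>file captures stdout and stderr together (bash)
f=$(mktemp)
{ echo "to stdout"; echo "to stderr" >&2; } &> "$f"
captured=$(wc -l < "$f")
echo "$captured lines captured"
rm -f "$f"
```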
12.Here Documents && Here Strings

13.debugging
sh -n ./ #for checking syntax
sh -x ./ #bash debugging

14.expr && let && (( ))
expr 5 + 3 #8
let z=5+3 #$z=8
let z+=3
let a=5**2 #25
z=$(( 5 + 3 ))
z=$(( z + 4 )) #12
LOOP=10;expr $LOOP '-' 10 #0
echo $? #1; expr exits with status 1 when the result is 0
a=6;expr '(' $a '&' 4 ')' + 5 #11,| &
b=beijing;expr $b = beijing #1
expr 5 '<' 10 #1
LOOP=0;LOOP=`expr $LOOP + 1`
expr length 'hell' #4
expr index 'hello,world' 'o' #5
expr substr 'hello,world' 2 5 #start,length
expr 123458akd : '[0-9]*' #6, return number of matched character
expr accounts.doc : '\(.*\).doc' #accounts

while :
# Same as:
# while true
# do
# ...
# done

17.() {} []
echo \"{These,words,are,quoted}\" #"These" "words" "are" "quoted"

a=123
( a=321; ) #cannot read variables created in the child process (subshell)
echo "a = $a" # a = 123

Array=(element1 element2 element3) #array initialization.
echo {a..z} # a b c d e f g h i j k l m n o p q r s t u v w x y z

echo $[5+3] #8

echo "Updating 'locate' database..."
echo "This may take a while."
updatedb /usr & # Must be run as root.
wait
# Don't run the rest of the script until 'updatedb' is finished.
# You want the database updated before looking up the file name.
locate $1

22.time bash
#!/bin/bash #run the loop bodies in parallel

for ((i=0;i<5;i++));do
{
sleep 3;echo 1 >>aa && echo "done!"
} &
done
wait
cat aa|wc -l
rm aa

Categories: IT Architecture, Programming, SHELL

autossh sshldap

March 25th, 2012 No comments

1. create /root/

#!/usr/bin/expect --
# SimpleSSHProxy
# Created by ivan on 10-9-10.
# This is another wheel of autossh built using expect.
# If you have to use ssh password authorizing,
# use this instead of autossh.
# WARNING: This script is NOT SAFE for your password,
# Usage: [foo] [foo] [bindingIP:port] [foo] [username] [remoteHost] [password]
# get arguments
set username [lrange $argv 0 0]
set remoteHost [lrange $argv 1 1]
set password [lrange $argv 2 2]

while (1) {
set connectedFlag 0;
spawn /usr/bin/ssh $username@$remoteHost;
match_max 100000;
set timeout 60;
expect {
"?sh: Error*"
{ puts "CONNECTION_ERROR"; exit; }
"*yes/no*"
{ send "yes\r"; exp_continue; }
"*?assword:*" {
send "$password\r"; set timeout 4;
expect "*?assword:*" { send "$password\r"; set timeout 4; }
set connectedFlag 1;
}
# if no password
timeout
{ send "echo hello\r"; set connectedFlag 1; }
}
if { $connectedFlag == 0 } {
puts "SSH server unavailable, retrying...";
continue;
}

while (1) {
set conAliveFlag 0;
interact {
# time interval for checking connection
timeout 600 {
set timeout 30;
send "echo hello\r";
expect "*hello*" { set conAliveFlag 1; }
if { $conAliveFlag == 1 } {
# connection is alive
} else { break; }
}
}
}

puts "SSH connection failed, restarting...";
}

2. create

/root/ <replace with your SSH username> $1 <replace with your SSH password> 2>/dev/null

3. run <SSH hostname>

add new LUN from storage array

March 23rd, 2012 No comments

Preparation Work (HP-UX)

freeze cluster if needed
ioscan -fknC disk > /var/tmp/ioscan_before 
syminq -pdevfile > /var/tmp/syminq_before 
vxdisk -o alldgs list > /var/tmp/alldgs_before 
vxdisk -e list > /var/tmp/vxdisk_before 
vxprint -Aht > /var/tmp/vxprint_before 
bdf -l > /var/tmp/bdf_before 

Scan LUNs

# ioscan -fnC disk --> you won't see the device after ioscan, we need insf to create them

-C class Restrict the output listing to those devices belonging to the specified class.
-n List device file names in the output.
-k Scan kernel I/O system data structures instead of the actual hardware and list the results.
No binding or unbinding of drivers is performed.
-f Generate a full listing, displaying the module's class, instance number, hardware path,driver, software state
hardware type,and a brief description.

# insf -e -C disk --> force device creation,after this i can see the new LUN
After above commands, you should be able to see the new LUNs using syminq command.

Compare the outputs and identify the new LUNs

After the scan, you should be able to see the new LUNs using syminq:
# syminq -pdevfile | grep -i 29ee

000290101358 /dev/rdsk/c22t14d2 29EE 9C 0
000290101358 /dev/rdsk/c23t14d2 29EE 8B 1

At this stage, the sympd list and vxdisk -e list outputs stay unchanged.
Now we run symcfg disco; after this command the symdb will be updated, and we will see the new LUN in the sympd output:
# symcfg disco
This operation may take up to a few minutes. Please be patient...
# sympd list | grep -i 29ee 

/dev/rdsk/c22t14d2 29EE 09C:0 15B:C21 RAID-5 N/Grp'd (M) RW 207360
/dev/rdsk/c23t14d2 29EE 08B:1 15B:C21 RAID-5 N/Grp'd (M) RW 207360

Now, we will run vxdctl enable to let vxdisk command see the new LUN
# vxdctl enable 
# vxdisk -e list 

EMC1_41 auto - - online c22t14d2

Next, we need to run the symcfg disco command again to let the sympd command see the DMP device.
# symcfg disco
This operation may take up to a few minutes. Please be patient...

# sympd list | grep -i 29ee 

/dev/rdsk/c22t14d2 29EE 09C:0 15B:C21 RAID-5 N/Grp'd (M) RW 207360
/dev/rdsk/c23t14d2 29EE 08B:1 15B:C21 RAID-5 N/Grp'd (M) RW 207360
/dev/vx/rdmp/EMC1_41 29EE 09C:0 15B:C21 RAID-5 N/Grp'd (M) RW 207360

Add new LUNs to Volume

Initialize the New LUN

before initializing:
# vxdisk -o alldgs list | grep EMC1_41 

EMC1_41 auto:none - - online invalid

# /etc/vx/bin/vxdisksetup -i EMC1_41

after initializing:
# vxdisk -o alldgs list | grep EMC1_41

EMC1_41 auto:cdsdisk - - online

Add disk to VxVM control

# vxdg -g obudwdbarc_PRD adddisk emc29EE=EMC1_41
Note: naming the disk after its LUN number is bad practice, because if we fail over to the DR site, emc29EE will refer to its R2 device, whose hyper name is not 29EE.

Extend FS

# vxassist -g obudwdbarc_PRD maxgrow obudwdbprd-ora-archivelog01 alloc=emc29EE 

Volume obudwdbprd-ora-archivelog01 can be extended by 212327424 to: 318490752 (311026Mb+128 sectors)

# /etc/vx/bin/vxresize -g obudwdbarc_PRD -F vxfs -bx obudwdbprd-ora-archivelog01 +212327424 alloc=emc29EE 


Preparation Work (Linux)

install sg3_utils (all nodes)
# rpm -ivh *.rpm 

in case of shared filesystem growth, please use the two commands below

# vxdctl -c mode --> check who's the master node; don't scan the master node first

# fsclustadm -v showprimary /testprd/ora ---> determine that the first scan is not on either filesystem master
# fsclustadm -v showprimary /testprd/ora 

before we scan, please make a note of syminq/vxdisk list/sympd list
# syminq > /tmp/syminq.before 

Scan LUNs

discover storage on o/s layer (all nodes)

after this, we should see the new LUNs via syminq

discover storage via symm tools (all nodes)

# symcfg disco 
after this, we should see the new LUNs in sympd list

discover storage on vxvm layer (all nodes)

# vxdctl enable 
after this, we should see the new LUNs in vxdisk list

discover storage via symm tools again (all nodes) to see dmp device

# symcfg disco 
after this, we should see the DMP device in sympd list

Add Disks

add disks under vxvm control (master node)

# vxdisksetup -i <device_name> 
(<device_name> is the one we see in vxdisk list)

add disk to disk groups

# vxdg -g test_PRDdg adddisk <disk_name>=<device_name> 
# vxdg -g test_PRDdg adddisk <disk_name>=<device_name> 

Extend FS

extend filesystems from primary node

# vxresize -g test_PRDdg -bx test-ora +130G 
# vxresize -g test_PRDdg -bx test-ora +130G 
vxtask --> observe task completion via vxtask


Preparation Work (Solaris)

# syminq -pdevfile > /var/tmp/syminq.before 
freeze cluster if necessary.

Scan LUNs

On all node in the cluster.

List fiber channel controllers:

# cfgadm -la | grep fabric 

c3 fc-fabric connected configured unknown
c4 fc-fabric connected configured unknown

Please note that on SunOS 5.8 you won't see fiber controllers with this command.

Rescan controllers for new devices:

# cfgadm -c configure c3 
# cfgadm -c configure c4 

Attempt to load drivers for device class disk and attach to all possible device instances. devfsadm then creates device special files in /devices and logical links in /dev.

# devfsadm -c disk 

# syminq -pdevfile > /var/tmp/syminq.after 
you should see the new LUN by diffing syminq.before and syminq.after:
# diff /var/tmp/syminq.before /var/tmp/syminq.after 

000290101358 /dev/rdsk/c3t5006048452A6A626d334s2 23E0 7A 1
000290101358 /dev/rdsk/c4t5006048452A6A619d334s2 23E0 10B 0
000290101358 /dev/vx/rdmp/c3t5006048452A6A626d334s2 23E0 7A 1

Solaris 9 with lpfc driver

On Solaris 9 you can add your new targets to /kernel/drv/sd.conf, then run "update_drv -f sd" to update the kernel online, then run devfsadm.

Add Disks

for all R1 devices:
format -d <pathname> (any of the paths from syminq; label the devices)

# format -d c3t5006048452A6A626d334 --> choose label and then choose save

on all nodes:
# vxdctl enable 
# symcfg disco 

/usr/lib/vxvm/bin/vxdisksetup -i <rdmp disk name>
# /usr/lib/vxvm/bin/vxdisksetup -i c3t5006048452A6A626d334 

vxdg -g snarcdbprdroot-dg adddisk emc<array><lun>=<rdmp disk name>
# vxdg -g snarcdbprdroot-dg adddisk idmprd01dg110=c3t5006048452A6A626d334s2 

Extend FS

BCV / Clone related

If the new LUN you added has a related BCV, please update the symdg as well and make the BCV device visible to the media server

how to find media server

Do a “symmaskdb -sid <ARRID> -dev <BCVDEV> list assign”, note the WWNs
- “symmaskdb -sid 369 -dev 2D00 list assign” for example.

Do a “symmask -sid <ARRID> list logins | grep <WWN>”, this will show you the corresponding server name
(the relationship is maintained by storage team and may not be 100% exact) –
“symmask -sid 333 list logins | egrep '(5001438003ae5ab4|5001438003ae544e)'” in this example.
Login to the server and verify the WWNs indeed belong to it via “symmask hba list”.

If the new LUN you added has a clone device, you can try to find the cfile on the servers below:

cfile located in /opt/bcvbackup/

IBM XIV storage used for extending a filesystem:

1.get the volume allocated from storage team:
2.scan volumes
#xiv_fc_admin -R
3.use the command "xiv_devlist" to check the volume and the path to XIV storage system
XIV Devices

Device Size (GB) Paths Vol Name Vol Id XIV Id XIV Host

/dev/dsk/c2t500173804EE40142d16s2 17.2 N/A testdbs01-clstr-data-vol16 1973 7820196 testDBS02
### (If you cannot find the path number, as in the N/A above, you can execute the command "vxdctl enable" and then check again. If at this point you still cannot see the device you got from the storage team, scan volumes again with "xiv_fc_admin -R" and you will find the device you want.)
#xiv_devlist -o device,vol_name,vol_id
XIV Devices

Device Vol Name Vol Id

/dev/vx/dmp/XIV0_15 testdbs01-clstr-data-vol16 1973

4.label the disks with the format command (this step is for solaris; linux does not need labeling)
#format -d <pathname> -->choose label and save
5.scan the xiv luns under vxvm
#vxdctl enable
6.check the status of the disk
#vxdisk list
XIV0_15 auto:none - - online invalid
7.format the disk
# /usr/lib/vxvm/bin/vxdisksetup -i <rdmp disk name>

8.add the disk to disk group
#vxdg -g <dg_name> adddisk <disk_name>=<device name>

eg. vxdg -g dbstage1dg adddisk xiv1961973=XIV0_15
9.extend the volume:
eg. #/etc/vx/bin/vxresize -g dbtest1dg -F vxfs -bx testdata02 +10G

Categories: Hardware, Storage

general expect tips

March 23rd, 2012 No comments

export PATH
CMD="uname -a && finishthecoreadm"
#if the remote command does not exist you'll see: ksh: finishthecoreadm: not found
for i in `cat /home/liandy/servers_list`; do
echo ''>/root/.ssh/known_hosts
expect <<EOF
spawn ssh -l${USERNAME} -p22 $i $CMD
set timeout 20
expect "*yes*"
send "yes\r"
expect "assword:"
send "${PASSWORD}\r"
expect eof
EOF
done

spawn ssh ifconfig
expect {
"*yes*" { send "yes\r" }
"password:" { send "yytt22\r" }
}
expect eof

spawn ssh
expect {
"*yes*" { send "yes\r" }
"password:" { send "yytt22\r" }
}
expect "hi\n"
send "youyouyou"

expect "hi"
send "you typed $expect_out(buffer)"
send "but I only expected $expect_out(0,string)"

spawn bc
set bc_id $spawn_id

spawn /bin/sh
set sh_id $spawn_id

set spawn_id $bc_id
send "scale=50\r"

#send -i & expect -i
set spawn_id $ftp_id
send "get $file1\r";expect "220*ftp> "
send -i $write_id "success get file" #another spawn_id
send "get $file2\r";expect "220*ftp>" #back to the current spawn_id

expect {
-i $ftp_id
"220*ftp> " action1
"550*ftp " action2
}

same as:

expect {
-i $ftp "220*ftp> " action1
"550*ftp " action2 #-i carries over to the following patterns too
}

proc login {id} { #spawn id as parameter
global name password prompt
set spawn_id $id
expect "login:"
send "$name\r"

expect "password:"
send "$password\r"

expect "$prompt"
}

expect -c "
set send_human {.1 .3 1 .05 2}

send -h \"I'm hungry. Let's do lunch.\"
"

expr of coreutils and exec eval of bash builtins

March 19th, 2012 No comments

1.expr (evaluates expressions) is an executable program which belongs to coreutils.

for example:

expr 5 + 1 #returns 6

expr index abcdef a #returns 1

For the full man page of expr, you can try info coreutils expr.


The coreutils package also includes:

base64, basename, cat, chcon, chgrp, chmod, chown, chroot, cksum, comm, cp, csplit, cut, date, dd, df, dir, dircolors, dirname, du, echo, env, expand, expr, factor, false, fmt, fold, groups, head, hostid, id, install, join, link, ln, logname, ls, md5sum, mkdir, mkfifo, mknod, mktemp, mv, nice, nl, nohup, nproc, od, paste, pathchk, pinky, pr, printenv, printf, ptx, pwd, readlink, rm, rmdir, runcon, seq, sha1sum, sha224sum, sha256sum, sha384sum, sha512sum, shred, shuf, sleep, sort, split, stat, stdbuf, stty, sum, sync, tac, tail, tee, test, timeout, touch, tr, true, truncate, tsort, tty, uname, unexpand, uniq, unlink, users, vdir, wc, who, whoami, and yes

2.exec & eval

exec and eval are two of  bash builtins.

The typical usage of exec is in some menu-driven UI. For example, you can write a menu-driven program, let's call it menu.viewcards, and then put the following in the customer's .bash_profile (bash) or .profile (ksh):

exec menu.viewcards

Now when the customer logs in, that menu will replace the login shell without creating a new process.
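A minimal way to observe the replacement behaviour, with a throwaway `bash -c` standing in for the login shell:

```shell
# exec replaces the process, so the second echo never runs
out=$(bash -c 'exec echo "menu replaced the shell"; echo "never reached"')
echo "$out"
```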

As for eval, it re-parses and executes the arguments passed to it. For example:

#cmd="date --date=\"+ 350 days\""

#eval $cmd
Mon Mar 4 07:03:22 GMT 2013
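The embedded quotes are the reason eval is needed in the date example: they only take effect because eval re-parses the string before executing it. A sketch (an output format string is added here so the result shape is predictable):

```shell
# eval re-parses $cmd, so the quoted --date argument survives intact
cmd='date --date="+ 350 days" "+%a %d %b %Y"'
result=$(eval "$cmd")
echo "$result"
```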

zencart make multiple large images instead of thumbnails on product info page

March 18th, 2012 No comments

Firstly, for the answer to how to add multiple images on the product info page in zencart, please refer to:

Then, if you want to show multiple large images instead of thumbnails on the product info page, edit the file YourSiteSource/includes/modules/additional_images.php and replace the following lines (back the file up first!):

$products_image_large = str_replace(DIR_WS_IMAGES, DIR_WS_IMAGES . 'large/', $products_image_directory) . str_replace($products_image_extension, '', $file) . IMAGE_SUFFIX_LARGE . $products_image_extension;
$flag_has_large = file_exists($products_image_large);
$products_image_large = ($flag_has_large ? $products_image_large : $products_image_directory . $file);
$flag_display_large = (IMAGE_ADDITIONAL_DISPLAY_LINK_EVEN_WHEN_NO_LARGE == 'Yes' || $flag_has_large);
$base_image = $products_image_directory . $file;
$thumb_slashes = zen_image($base_image, addslashes($products_name), SMALL_IMAGE_WIDTH, SMALL_IMAGE_HEIGHT);
$thumb_regular = zen_image($base_image, $products_name, SMALL_IMAGE_WIDTH, SMALL_IMAGE_HEIGHT);
$large_link = zen_href_link(FILENAME_POPUP_IMAGE_ADDITIONAL, 'pID=' . $_GET['products_id'] . '&pic=' . $i . '&products_image_large_additional=' . $products_image_large);

// Link Preparation:
$script_link = '<script language="javascript" type="text/javascript"><!--' . "\n" . 'document.write(\'' . ($flag_display_large ? '<a href="javascript:popupWindow(\\\'' . $large_link . '\\\')">' . $thumb_slashes . '<br />' . TEXT_CLICK_TO_ENLARGE . '</a>' : $thumb_slashes) . '\');' . "\n" . '//--></script>';

$noscript_link = '<noscript>' . ($flag_display_large ? '<a href="' . zen_href_link(FILENAME_POPUP_IMAGE_ADDITIONAL, 'pID=' . $_GET['products_id'] . '&pic=' . $i . '&products_image_large_additional=' . $products_image_large) . '" target="_blank">' . $thumb_regular . '<br /><span class="imgLinkAdditional">' . TEXT_CLICK_TO_ENLARGE . '</span></a>' : $thumb_regular ) . '</noscript>';

$alternate_link = '<a href="' . $products_image_large . '" onclick="javascript:popupWindow(\''. $large_link . '\') return false;" title="' . $products_name . '" target="_blank">' . $thumb_regular . '<br />' . TEXT_CLICK_TO_ENLARGE . '</a>';

$link = $script_link . "\n " . $noscript_link;
$link = $alternate_link;

with the following (the original lines commented out):


/**$products_image_large = str_replace(DIR_WS_IMAGES, DIR_WS_IMAGES . 'large/', $products_image_directory) . str_replace($products_image_extension, '', $file) . IMAGE_SUFFIX_LARGE . $products_image_extension;
$flag_has_large = file_exists($products_image_large);
$products_image_large = ($flag_has_large ? $products_image_large : $products_image_directory . $file);
$flag_display_large = (IMAGE_ADDITIONAL_DISPLAY_LINK_EVEN_WHEN_NO_LARGE == 'Yes' || $flag_has_large);
$base_image = $products_image_directory . $file;
$thumb_slashes = zen_image($base_image, addslashes($products_name), SMALL_IMAGE_WIDTH, SMALL_IMAGE_HEIGHT);
$thumb_regular = zen_image($base_image, $products_name, SMALL_IMAGE_WIDTH, SMALL_IMAGE_HEIGHT);
$large_link = zen_href_link(FILENAME_POPUP_IMAGE_ADDITIONAL, 'pID=' . $_GET['products_id'] . '&pic=' . $i . '&products_image_large_additional=' . $products_image_large);

// Link Preparation:
$script_link = '<script language="javascript" type="text/javascript"><!--' . "\n" . 'document.write(\'' . ($flag_display_large ? '<a href="javascript:popupWindow(\\\'' . $large_link . '\\\')">' . $thumb_slashes . '<br />' . TEXT_CLICK_TO_ENLARGE . '</a>' : $thumb_slashes) . '\');' . "\n" . '//--></script>';

$noscript_link = '<noscript>' . ($flag_display_large ? '<a href="' . zen_href_link(FILENAME_POPUP_IMAGE_ADDITIONAL, 'pID=' . $_GET['products_id'] . '&pic=' . $i . '&products_image_large_additional=' . $products_image_large) . '" target="_blank">' . $thumb_regular . '<br /><span class="imgLinkAdditional">' . TEXT_CLICK_TO_ENLARGE . '</span></a>' : $thumb_regular ) . '</noscript>';

// $alternate_link = '<a href="' . $products_image_large . '" onclick="javascript:popupWindow(\''. $large_link . '\') return false;" title="' . $products_name . '" target="_blank">' . $thumb_regular . '<br />' . TEXT_CLICK_TO_ENLARGE . '</a>';

$link = $script_link . "\n " . $noscript_link;
// $link = $alternate_link;
*/

Then add the following line (if you upload large images to the main images directory):

echo "<img src=\"{$file}\" />";

before the following line:

// List Box array generation:

Now save and refresh the product info page of Zen Cart, and you'll see the fancy effect!

Categories: IT Architecture, Programming Tags:

solved zencart error Paypal does not allow your country of residence to ship to the country you wish to

March 17th, 2012 2 comments

PayPal has a restriction ("Paypal does not allow your country of residence to ship to the country you wish to"): if a client buys an item while their billing address and shipping address are not in the same country, PayPal will refuse the payment due to its seller protection policy. But sometimes a customer may buy a product to be delivered to a friend, whose shipping address will not be the same as the customer's billing address.

To solve this, if you're using PayPal Website Payments Standard - IPN, follow the steps below:

Go to PAYMENTS > "PayPal Website Payments Standard - IPN" > Address Override, and set the address override to "0". If you set it to 1, the address in Zen Cart overrides the PayPal (billing) address, so when PayPal compares the two addresses and they don't match exactly, you get this error.

And, if you're using PayPal Express Checkout, set Express Checkout: Require Confirmed Address to No, and then edit the file below:


Change $options['ADDROVERRIDE'] = 1; to $options['ADDROVERRIDE'] = 0;. Please note that there are two occurrences in this file which need modification.

After the modification, customers will be able to set their billing address and shipping address to different ones.
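The edit itself is a quick sed one-liner. A minimal sketch, run here against a stand-in file since the post doesn't name the module file path (on a real shop, back up the actual file first):

```shell
# Create a stand-in module file containing the two ADDROVERRIDE assignments
cat > /tmp/paypal_module_sample.php <<'PHP'
$options['ADDROVERRIDE'] = 1;
// ... other module code ...
$options['ADDROVERRIDE'] = 1;
PHP

# Flip every ADDROVERRIDE assignment from 1 to 0
sed -i "s/ADDROVERRIDE'\] = 1;/ADDROVERRIDE'] = 0;/g" /tmp/paypal_module_sample.php

# Both occurrences should now read 0
grep -c "ADDROVERRIDE'\] = 0;" /tmp/paypal_module_sample.php
```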


Categories: IT Architecture, Programming Tags:

tips about nagios notes_url action_url

March 16th, 2012 2 comments

Nagios has two useful directives: notes_url and action_url.

First, you can set notes_url in the service template configuration file:
# Generic service definition template - This is NOT a real service, just a template!
define service{
name generic-service ; The 'name' of this service template
active_checks_enabled 1 ; Active service checks are enabled
passive_checks_enabled 1 ; Passive service checks are enabled/accepted
parallelize_check 1 ; Active service checks should be parallelized (disabling this can lead to major performance problems)
obsess_over_service 1 ; We should obsess over this service (if necessary)
check_freshness 0 ; Default is to NOT check service 'freshness'
notifications_enabled 1 ; Service notifications are enabled
event_handler_enabled 1 ; Service event handler is enabled
flap_detection_enabled 1 ; Flap detection is enabled
failure_prediction_enabled 1 ; Failure prediction is enabled
process_perf_data 1 ; Process performance data
retain_status_information 1 ; Retain status information across program restarts
retain_nonstatus_information 1 ; Retain non-status information across program restarts
is_volatile 0 ; The service is not volatile
check_period 24x7 ; The service can be checked at any time of the day
max_check_attempts 3 ; Re-check the service up to 3 times in order to determine its final (hard) state
normal_check_interval 10 ; Check the service every 10 minutes under normal conditions
retry_check_interval 2 ; Re-check the service every two minutes until a hard state can be determined
contact_groups admins ; Notifications get sent out to everyone in the 'admins' group
notification_options w,u,c,r ; Send notifications about warning, unknown, critical, and recovery events
notification_interval 60 ; Re-notify about service problems every hour
notification_period 24x7 ; Notifications can be sent out at any time
}

Then, when you define a new service check that should use the template's notes_url in combination with an action_url, define the service check like this:
define service{
use generic-service ;using this template will auto enable notes_url, and it will replace $SERVICEDESC$ macro in template with service_description below
servicegroups HttpCheck
service_description aboutus #this will be used to replace $SERVICEDESC$ macro in template definition
check_command check_http!-u "/aboutus/" -t 60 -w 15 -c 30 -f follow -l -r 010
action_url http://$HOSTADDRESS$/aboutus/ #there'll be a "cloud icon" in the nagios web GUI when you add this line
}
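For comparison, a notes_url entry follows the same pattern. The sketch below is illustrative only (the wiki URL and its path scheme are assumptions, not from the original setup):

```
define service{
        use                     generic-service
        service_description     aboutus
        check_command           check_http!-u "/aboutus/" -t 60 -w 15 -c 30 -f follow -l -r 010
        notes_url               http://wiki.example.com/nagios/$HOSTNAME$/$SERVICEDESC$ ; shows a "notes" icon in the web GUI
        }
```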

Now here's the result after adding nagios action_url and notes_url:

You can try clicking on each icon and see the fancy thing!

Here's the snapshot:

nagios notes_url action_url(click to see the full size image)

veritas vxdg disabled and cannot be deported – Some volumes in the disk group are in use

March 15th, 2012 No comments

root@doxer # vxdg list
test_DG disabled 1115974897.107.doxer

root@doxer # vxdg deport test_DG
VxVM vxdg ERROR V-5-1-584 Disk group test_DG: Some volumes in the disk group are in use

And when running df -h, there was an error message:
root@doxer # df -h
Filesystem size used avail capacity Mounted on
df:cannot statvfs /doxer/archive: No such file or directory

I used fuser to check which processes were using the filesystem, but that also hit an error:
root@doxer # fuser -c /doxer/archive
/BCV/btch01: fuser: Invalid argument

OK, I then tried several other things, which also failed. Finally, here's the resolution:
First, umount the problematic filesystem:
root@doxer # umount /doxer/archive

Second, deport the DG:
root@doxer # vxdg deport test_DG

Then, vxdg list showed me that the DG had been deported.
root@doxer # vxdg list

At last, after running vxdg -C import test_DG, the DG was in enabled status again:

root@doxer # vxdg list
test_DG enabled,cds 1115974897.107.doxer

Categories: Hardware, Storage Tags:

IBM bigfix enterprise fixlets tasks baselines howto

March 13th, 2012 No comments

First part, fixlets:

1.Start the TEM Console (IBM Tivoli Endpoint Manager Console) and navigate to All Content -> Fixlets and Tasks -> All -> By Site -> Master Action Site. Click on "Master Action Site", then on the right, right-click "Create New Fixlet". On the Actions tab, remember to check "Include custom success criteria", then click "Edit" beside the item and check "...all lines of the action script have completed successfully".

2.Remember to include Relevance to target the appropriate hosts.

3.In the "Action Script" textbox, enter the following:

delete __createfile
createfile until <EOF>
df -h > /tmp/andyli.txt
echo "{name of operating system}" >> /tmp/andyli.txt
if [ -f /tmp/andyli.txt ]; then
ls -l /etc/hosts >> /tmp/ls_andyli.out
fi
<EOF>

move __createfile
wait chmod u+x
wait /bin/sh -c ./
continue if {exit code of active action = 0}

wait /bin/sh -c "ls /etc > /tmp/ls_andyli2.out"

Then click OK to save the script. After that, check your new fixlet item and click "Take Action". The log file can be found at /var/opt/BESClient/__BESData/__Global/Logs.
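The "continue if {exit code of active action = 0}" line gates the remaining actions on the script's exit status, much like && gating in plain shell. A small sketch of the same pattern outside BigFix:

```shell
# Run a step; only proceed when it exits 0 (BigFix's "continue if" does the same)
/bin/sh -c 'exit 0' && echo "continuing with next action"

# A failing step stops the chain instead
/bin/sh -c 'exit 1' || echo "stopping: previous step failed"
```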

If you want to test whether vsftpd (or others like spacewalk etc.) is installed, here's the actionscript for you:

delete __createfile
createfile until <EOF>
Is_Vsftpd=`/bin/rpm -qa | grep vsftpd`
if [ -n "$Is_Vsftpd" ]; then
exit 0
else
exit 1
fi
<EOF>

move __createfile
wait chmod u+x
wait /bin/sh -c ./
continue if {exit code of active action = 0}

//now please go on with the patching
wait /bin/ksh -c "echo good > /tmp/vsftpd_andyli"

In general, if vsftpd is installed, then "good" will be echoed to /tmp/vsftpd_andyli
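The same check can be exercised outside BigFix. A sketch using a stubbed package list in place of the live `rpm -qa` output, so it runs anywhere (the package names in the stub are made up for illustration):

```shell
# Return 0 if the named package appears in the package list, 1 otherwise
check_pkg() {
    pkg_list="vsftpd-2.2.2 openssh-5.3"   # stand-in for `rpm -qa` output
    echo "$pkg_list" | tr ' ' '\n' | grep -q "^$1" && return 0 || return 1
}

check_pkg vsftpd && echo good > /tmp/vsftpd_andyli
check_pkg spacewalk || echo "spacewalk not installed"
cat /tmp/vsftpd_andyli
```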

Second Part, baselines:

For an IBM BigFix enterprise baseline, you can add fixlets or tasks to the baseline. One thing to note: for a baseline of chained actions, you need to select the box "Use custom action settings", click "set action settings", and in the window which opens uncheck the box "Run all member actions of action group regardless of errors".


1.For downloading/installing the most up-to-date IBM BigFix enterprise client agent, please refer to

For downloading/installing older client agents, please refer to

Please note that you should get the version of the agent which goes with the TEM Master.

After downloading and installing the packages on your different machines, you will need to:

a.wget the masthead (served on port 52311, the master server's default port, under /masthead) to /etc/opt/BESClient/actionsite.afxm (this is the config file that tells the client where the master is)

b.then start the client using /etc/init.d/besclient start. Check that the client registered via the console (you should see it added to the Computers tree.)
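Putting steps a and b together, the client-side setup looks roughly like this (a sketch: `tem-master.example.com` is a placeholder for your real TEM master, and the masthead filename may differ on your deployment):

```
# fetch the masthead from the master's default port and drop it where BESClient expects it
wget http://tem-master.example.com:52311/masthead/masthead.afxm \
     -O /etc/opt/BESClient/actionsite.afxm

# start the agent; it should then register and appear in the Computers tree
/etc/init.d/besclient start
```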

Note that for Windows machines there is a deployment tool available in the Tivoli programs folder – this can be used so long as appropriate local domain admin rights are available.

More info on

2.I'm sure you're familiar with this picture:

ibm bigfix tivoli endpoint manager console(click to see the full image)

ntop installation and usage howto

March 7th, 2012 1 comment

###ntop installation

PS: if you encountered errors like the following while running ntop's configure script during installation, then you may be interested in this article:

1.configure: error: Unable to find RRD at /usr/local: please use --with-rrd-home=DIR

2.checking for GeoIP_record_by_ipnum in -lGeoIP... no
checking for GeoIP_name_by_ipnum_v6 in -lGeoIP... no
Please install GeoIP (

3.checking for pcap_lookupdev in -lpcap... no
It looks that you don't have the libpcap distribution installed.
Download, compile and, optionally, install it.
When finished please re-run this program.
You can download the latest source tarball at
configure: error: The LBL Packet Capture Library, libpcap, was not found!

Now let's go to the topic of this article:
#1.install rrdtool

#PS: if you encounter error "Pango-WARNING **: failed to choose a font, expect ugly output", then you'll need install xorg-x11-font* as stated below

yum -y install cairo-devel libxml2-devel pango-devel pango libpng-devel freetype freetype-devel libart_lgpl-devel gdbm-devel xorg-x11-font*
wget '' -O ../rrdtool-1.4.7.tar.gz
cd ../
tar zxvf rrdtool-1.4.7.tar.gz
export PKG_CONFIG_PATH=/usr/lib/pkgconfig/
cd rrdtool-1.4.7
./configure --prefix=/opt/rrdtool-1.4.7 #prefix assumed from the /opt symlink created below
make install
cd /opt
ln -s rrdtool-1.4.7 rrdtool
cd rrdtool
cd share/rrdtool/examples/
cp stripes.png /var/www/html/crystal #then visit the picture to see whether it's loading or not

#2.install libpcap from source
wget ''
configure/make/make install

#3.install geoip
wget ''
configure/make/make install
#4.install ntop
./ --with-rrd-home=/opt/rrdtool
make install
useradd ntop
chown -R ntop.ntop /usr/local/share/ntop
/usr/local/bin/ntop -A -u ntop -P /usr/local/share/ntop
/usr/local/bin/ntop -u ntop -P /usr/local/share/ntop -d
go to http://ip:3000

========PS===========
#an overview of ntop, the "utilization examples" part is the most useful I think
#man page for ntop
[root@doxer ~]# /usr/local/bin/ntop --help #online help for ntop

how to check the uptime of a unix like system

March 7th, 2012 No comments

1.Use uptime(or last) command
[root@doxer ~]# uptime
23:56:01 up 34 days, 20:20, 5 users, load average: 0.19, 0.10, 0.03

[root@doxer ~]# last -x |grep reboot|head -n1
reboot system boot 2.6.18-238.12.1. Tue Mar 6 07:59 (09:06)

However, in some cases the uptime command will not show the real uptime of the Linux or Unix system; for example, corruption of the utmp/wtmp/btmp files, or an incorrect time in OBP on some SPARC infrastructure machines, will give you an incorrect system uptime.

2.Check the running time of the init process
[root@doxer ~]# ps -ef|grep init|sed -n '1p'|awk '{print $5}'

This is somewhat more accurate in reflecting the uptime of the OS. The command above, with its pipes through grep, sed, and awk, may seem a little dizzying, but in essence it just prints the fifth column (STIME) of the init process.
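On Linux there are two more direct ways to get the same answer without the grep/sed/awk gymnastics; a small sketch (the first field of /proc/uptime is seconds since boot, and `ps -o etime= -p 1` prints PID 1's elapsed running time):

```shell
# Whole days since boot, straight from the kernel
up_secs=$(cut -d' ' -f1 /proc/uptime)
echo "up for $(( ${up_secs%.*} / 86400 )) day(s)"

# Elapsed run time of init/PID 1, in [[dd-]hh:]mm:ss form
ps -o etime= -p 1
```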