Author Archive

resolved – check backend OHS/httpd servers behind an F5 BIG-IP LTM VIP

May 23rd, 2014 No comments

Assume you want to check which OHS or httpd servers an LTM VIP is routing traffic to. Here are the steps:

  1. Get the IP address of the VIP.
  2. Log on to the LTM's GUI: Local Traffic -> Virtual Servers -> Virtual Server List, then search for the IP.
  3. Click "Edit" under the "Resources" column.
  4. Note down the default pool.
  5. Search for the pool name under Local Traffic -> Virtual Servers -> Pools -> Pool List.
  6. Click the number under the "Members" column. There you'll find the OHS servers and ports the VIP routes traffic to.
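The same lookup can also be done from the BIG-IP command line with tmsh. A rough sketch, where vs_example and pool_example are hypothetical names standing in for your VIP's virtual server and its default pool:

```
tmsh list ltm virtual vs_example pool      # step 4: show the default pool
tmsh show ltm pool pool_example members    # step 6: list member servers and ports
```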

test telnet from a VLAN on a Cisco router

May 22nd, 2014 No comments

If you want to test a telnet connection from one VLAN to a specific destination IP, here is how:

test-router# telnet 80 source vlan 125
Connected to
Escape character is '^]'.

Good luck.

Resolved – input_userauth_request: invalid user root

May 15th, 2014 2 comments

Today when I tried to ssh to one linux box, it failed, and /var/log/secure gave the following messages:

May 15 04:05:07 testbox sshd[22925]: User root from not allowed because not listed in AllowUsers
May 15 04:05:07 testbox sshd[22928]: input_userauth_request: invalid user root
May 15 04:05:07 testbox unix_chkpwd[22929]: password check failed for user (root)
May 15 04:05:07 testbox sshd[22925]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost= user=root
May 15 04:05:09 testbox sshd[22925]: Failed password for invalid user root from port 50362 ssh2
May 15 04:05:10 testbox unix_chkpwd[22930]: password check failed for user (root)
May 15 04:05:11 testbox sshd[22928]: Connection closed by

Then I checked /etc/ssh/sshd_config and modified the following:

[root@testbox ~]# egrep 'PermitRoot|AllowUser' /etc/ssh/sshd_config
PermitRootLogin yes #change this to yes
#AllowUsers testuser #comment out this

Then restart sshd (service sshd restart), and ssh worked again.
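For reference, here's the same pair of edits done with sed against a scratch copy of the config (the /tmp path is just for illustration; don't run sed -i on the live sshd_config without a backup):

```shell
# make a scratch config with the two directives we care about
printf 'PermitRootLogin no\nAllowUsers testuser\n' > /tmp/sshd_config.demo
# enable root login and comment out the AllowUsers restriction
sed -i -e 's/^PermitRootLogin.*/PermitRootLogin yes/' \
       -e 's/^AllowUsers/#AllowUsers/' /tmp/sshd_config.demo
cat /tmp/sshd_config.demo
# PermitRootLogin yes
# #AllowUsers testuser
```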

Categories: IT Architecture, Linux, Systems Tags: ,

resolved – fsinfo ERROR: Stale NFS file handle POST

May 15th, 2014 No comments

Today when I tried to mount an NFS share from one NFS server, it timed out with "mount.nfs: Connection timed out".

I tried to find something in /var/log/messages, but no useful info was there. So I used tcpdump on the NFS client:

[root@dcs-hm1-qa132 ~]# tcpdump -nn -vvv host #server is, client is
23:49:11.598407 IP (tos 0x0, ttl 64, id 26179, offset 0, flags [DF], proto TCP (6), length 96) > 40 null
23:49:11.598741 IP (tos 0x0, ttl 62, id 61186, offset 0, flags [DF], proto TCP (6), length 80) > reply ok 24 null
23:49:11.598812 IP (tos 0x0, ttl 64, id 26180, offset 0, flags [DF], proto TCP (6), length 148) > 92 fsinfo fh Unknown/0100010000000000000000000000000000000000000000000000000000000000
23:49:11.599176 IP (tos 0x0, ttl 62, id 61187, offset 0, flags [DF], proto TCP (6), length 88) > reply ok 32 fsinfo ERROR: Stale NFS file handle POST:
23:49:11.599254 IP (tos 0x0, ttl 64, id 26181, offset 0, flags [DF], proto TCP (6), length 148) > 92 fsinfo fh Unknown/010001000000000000002FFF000002580000012C0007B0C00000000A00000000
23:49:11.599627 IP (tos 0x0, ttl 62, id 61188, offset 0, flags [DF], proto TCP (6), length 88) > reply ok 32 fsinfo ERROR: Stale NFS file handle POST:

The "ERROR: Stale NFS file handle POST" may be caused by the following:

1. The NFS server is no longer available.
2. Something in the network is blocking the traffic.
3. In a cluster, during failover of the NFS resource, the major & minor numbers on the secondary server taking over differ from those of the primary.

To resolve the issue, you can try bouncing the NFS service on the NFS server with /etc/init.d/nfs restart.

Categories: Hardware, NAS, Storage Tags:

resolved – show kitchen sink buttons when wordpress goes to fullscreen mode

April 11th, 2014 No comments

When you click the full-screen button in the wordpress TinyMCE editor, wordpress goes into "Distraction-Free Writing mode", which benefits you as the name suggests. However, you'll also find that the TinyMCE toolbox only shows a limited number of buttons, and the second row of the toolbox (the kitchen sink) does not show at all (I tried installing plugins such as Ultimate TinyMCE and Advanced TinyMCE, but the issue remained):

Previously, you could type ALT+SHIFT+G to go to another type of fullscreen mode, which has all the buttons, including the kitchen sink ones. However, it seems the updated version of wordpress has disabled this feature.

To resolve this issue, we can insert the following code in functions.php of your theme:

function my_mce_fullscreen($buttons) {
$buttons[] = 'fullscreen';
return $buttons;
}
add_filter('mce_buttons', 'my_mce_fullscreen');

Afterwards, TinyMCE will have two full-screen buttons:

Make sure to click the SECOND full-screen button. When you do so, the editor will transform to the following appearance:

I assume this is what you're trying for, right?



Categories: Misc Tags:

add horizontal line button in wordpress

April 11th, 2014 No comments

There are three methods to add a horizontal line button in wordpress:

Firstly, switch to "Text" mode, and enter <hr />.

Secondly, add the following in functions.php of your wordpress theme:

function enable_more_buttons($buttons) {
$buttons[] = 'hr';
return $buttons;
}
add_filter("mce_buttons", "enable_more_buttons");

horizontal line

Thirdly, you can install the plugin "Ultimate TinyMCE"; in its settings, you can enable the horizontal line button with one click! This is my recommendation.

ultimate tinymce

Categories: Misc Tags: ,

linux tips

April 10th, 2014 No comments
Linux Performance & Troubleshooting
For Linux Performance & Troubleshooting, please refer to another post - Linux tips - Performance and Troubleshooting
Linux system tips
ls -lu(access time, like cat file) -lt(modification time, like vi, ls -l defaults to use this) -lc(change time, chmod), stat ./aa.txt <UTC>
ctrl +z #bg and stopped
%1     #fg and running
%1 & #bg and running
man dd > dd.txt #or "man dd | col -b > dd.txt"
cat > listbkup.rman << EOF
pgrep -flu oracle  # processes owned by the user oracle
watch free -m #refresh every 2 seconds
pmap -x 30420 #memory mapping.
openssl s_client -connect localhost:636 -showcerts #verify ssl certificates, or 443
echo | openssl s_client -connect your_url_without_https:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -noout -dates #get expire date
echo | openssl s_client -connect your_url_without_https:443 < /dev/null 2>/dev/null | openssl x509 -text -in /dev/stdin | grep "Signature Algorithm" #get Signature Algorithm.
Signature Algorithm: sha1WithRSAEncryption for SHA1
Signature Algorithm: sha256WithRSAEncryption for SHA2
"unable to load certificate" is shown for ones that are not connectable
And here's a script to fulfill this:

for CERT in \
url1:443 \
url2:443 \
; do
echo "===begin cert ${CERT}==="
echo | openssl s_client -connect ${CERT} < /dev/null 2>/dev/null | openssl x509 -text -in /dev/stdin | grep "Signature Algorithm"|head -n1
done

openssl s_client -connect url:443
openssl x509 -text -in cacert.pem -noout
openssl x509 -dates -in cacert.pem -noout
openssl x509 -purpose -in cacert.pem -noout
openssl req -text -in robots.req.pem  -verify -noout
openssl genrsa -out private.pem 1024
openssl rsa -in private.pem -out public.pem -outform PEM -pubout #get public key from private key
echo 'too many secrets' > file.txt
openssl rsautl -encrypt -inkey public.pem -pubin -in file.txt -out file.ssl #it will only encrypt data up to the key size
openssl rsautl -decrypt -inkey private.pem -in file.ssl -out decrypted.txt
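Putting the last few commands together as a runnable round trip (the key size and /tmp paths are just for the demo):

```shell
openssl genrsa -out /tmp/private.pem 1024 2>/dev/null             # private key
openssl rsa -in /tmp/private.pem -out /tmp/public.pem \
        -outform PEM -pubout 2>/dev/null                          # derive public key
echo 'too many secrets' > /tmp/file.txt
openssl rsautl -encrypt -inkey /tmp/public.pem -pubin \
        -in /tmp/file.txt -out /tmp/file.ssl                      # encrypt with public key
openssl rsautl -decrypt -inkey /tmp/private.pem -in /tmp/file.ssl # decrypt with private key
```

The decrypt step prints the original plaintext back.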
You can generate a self-signed certificate with the steps described here (private key, CSR, generate certificate). More on how SSL works is described here and here. Signing a message means certifying that you have yourself assured the authenticity of the message (most of the time it means you are the author, but not necessarily). The message can be a text message or someone else's certificate. To sign a message, you create its hash and then encrypt the hash with your private key; you then attach the encrypted hash and your signed certificate to the message. The recipient recreates the message hash, decrypts the encrypted hash using your well-known public key stored in your signed certificate, checks that both hashes are equal, and finally checks the certificate. When an encrypted session is established, the encryption strength (40-bit, 56-bit, 128-bit, 256-bit) is determined by the capability of the web browser, SSL certificate, web server, and client computer operating system.
blockdev --getbsz /dev/xvda1 #get blocksize of FS
dumpe2fs /dev/xvda1 |grep 'Block size'
sed -i.bak2014 's/HISTSIZE=1000/HISTSIZE=100000/' /etc/profile
echo 71_testhost_emgc | sed -re 's/.*_(slce.*)_.*/\1/g' #output testhost
sed -re 's/User_Alias USERS_SDITAS.*/&, xiaozliu/g' a.txt #'&' refers to matched content
echo 'export HISTTIMEFORMAT="%h/%d - %H:%M:%S "' >> /etc/profile #inner quotes needed since the value contains spaces
ovm svr ls|sort -rn -k 4 #sort by column 4
cat a1|sort|uniq -c |sort #SUS
ovm svr ls|uniq -f3 #skip the first three columns, this will list only 1 server per pool
for i in <all OVMs>;do ( $i &);done #instead of using nohup &
ovm vm ls|egrep "`echo testhost{0\|,1\|,2\|,3\|,4}|tr -d '[:space:]'`"
cat a|awk '{print $5}'|tr '\n' ' '
awk '{print NF" "FILENAME;exit}' file.txt #get column count of file along with filenames
getopt #getopts is builtin
date -d '1970-1-1 1276059000 sec utc'
date -d '2010-09-11 23:20' +%s
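The two date conversions above are inverses of each other; pinning the timezone makes the round trip exact:

```shell
TZ=UTC date -d '2010-09-11 23:20' +%s         # datetime -> epoch seconds
# 1284247200
TZ=UTC date -d '1970-1-1 1284247200 sec utc'  # epoch seconds -> datetime
# Sat Sep 11 23:20:00 UTC 2010
```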
find . -name '*txt'|xargs tar cvvf a.tar
find . -maxdepth 1
for i in `find /usr/sbin/ -type f ! -perm -u+x`;do chmod +x $i;done #files that have no execute permission for the owner
find ./* -prune -print #-prune,do not cascade
find . -fprint file #put result to file
tar tvf a.tar  --wildcards "*ipp*" #globbing patterns
tar xvf bfiles.tar --wildcards --no-anchored 'b*'
tar --show-defaults
tar cvf a.tar --totals *.txt #show speed
tar --append --file=collection.tar rock #add rock to collection.tar
tar --update -v -f collection.tar blues folk rock classical #only append new or updated ones, not replace
tar --delete --file=collection.tar blues #not on tapes
tar -c -f archive.tar --mode='a+rw'
tar -C sourcedir -cf - . | tar -C targetdir -xf - #copy directories
tar -c -f jams.tar grape prune -C food cherry #-C changes dir, so this grabs file cherry from under the food directory
find . -size -400 -print > small-files
tar -c -v -z -T small-files -f little.tgz
tar -cf src.tar --exclude='*.o' src #multiple --exclude can be specified
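A quick self-contained check of --exclude (the /tmp/tardemo tree is just for the demo):

```shell
mkdir -p /tmp/tardemo/src
touch /tmp/tardemo/src/a.c /tmp/tardemo/src/a.o
tar -C /tmp/tardemo -cf /tmp/tardemo/src.tar --exclude='*.o' src
tar -tf /tmp/tardemo/src.tar   # a.o is skipped
# src/
# src/a.c
```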
expr 5 - 1
rpm2cpio ./ash-1.0.1-1.x86_64.rpm |cpio -ivd
eval $cmd
exec menu.viewcards #same to .
ls . | xargs -0 -i cp ./{} /etc #-i (or -I {}) inserts the filename in the middle of the command, like find -exec; -0 expects NUL-separated input (pair with find -print0) so filenames containing spaces survive
ls | xargs -t -i mv {} {}.old #mv source should exclude /,or unexpected errors may occur
mv --strip-trailing-slashes source destination
ls |xargs file /dev/fd/0 #replace -
find . -type d |xargs -i du -sh {} |awk '$1 ~ /G/'
ovm svr ls|awk '$NF ~ /QA_GA_DC2$/'
ypcat passwd|awk -F: '{if($1 ~ /^user1$|^user2$/) print}'|grep false
syminq -pdevfile |awk '!/^#/ {print $1,$4,$5}' #ignore lines started with #
sed -i '/virt[0-9]\{5\}/!d' /var/tmp/*.status #only show SDI names.
ls -l -I "*out*" #not include out
for i in `ls -I shared -I oracle`;do du -sh $i;done #exclude shared and oracle directories
find . -type f -name "*20120606" -exec rm {} \; #do not need rm -rf. find . -type f -exec bash -c "ls -l '{}'" \;
ps -ef|grep init|sed -n '1p'
pstree -aAhlup [ PID | USER ]
cut -d ' ' -f1,3 /etc/mtab #first and third
seq 15 21 #print 15 to 21
seq -s" " 15 21 #or echo {15..21}. use space as separator
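On the xargs notes above: pairing find -print0 with xargs -0 is what keeps filenames containing spaces intact. A scratch demo (paths are illustrative):

```shell
mkdir -p /tmp/xargsdemo
touch "/tmp/xargsdemo/a file.txt"       # filename with a space
find /tmp/xargsdemo -type f -name '*.txt' -print0 |
  xargs -0 -I {} mv {} {}.old           # -I {} places the name mid-command
ls /tmp/xargsdemo
# a file.txt.old
```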


Categories: IT Architecture, Linux, Systems Tags:

perl tips

April 2nd, 2014 No comments
#!/usr/bin/perl -w
my @animals = ("dog", "pig", "cat");
print "The last element of array \$animals is : ".$animals[$#animals]."\n";
print "@animals"."\n"; #will print values of array, delimitered by space
print $#animals."\n"; #the last key number of array, $#animals+1 is the number of array
if (@animals > 2) {
print "more than 2 animals found\n";
} else {
print "less than 2 animals found\n";
}
foreach (@animals) {
print $_."\n";
}
my %fruit_color=("apple", "red", "banana", "yellow");
print "Color of banana is : ".$fruit_color{"banana"}."\n";

for $char (keys %fruit_color) {
print("$char => $fruit_color{$char}\n");
}

my $variables = {
scalar  =>  {
description => "single item",
sigil => '$',
},
array   =>  {
description => "ordered list of items",
sigil => '@',
},
hash    =>  {
description => "key/value pairs",
sigil => '%',
},
};
print "Scalars begin with a $variables->{'scalar'}->{'sigil'}\n";

##Files and I/O
open (my $passwd, "<", "/etc/passwd2") or die ("can not open");
while (<$passwd>) {
print $_ if $_ =~ "test";
}
close $passwd or die "$passwd: $!";
my $next = "doing a first";
$next =~ s/first/second/;
print $next."\n";

my $email = "testaccount\@example.com"; #the domain was lost from the original listing; example.com is a placeholder
if ($email =~ /([^@]+)@(.+)/) {
print "Username is : $1\n";
print "Hostname is : $2\n";
}

sub multiply{
my ($num1, $num2) = @_;
my $result = $num1 * $num2;
return $result;
}

my $result2 = multiply(3, 5);
print "3 * 5 = $result2\n";

! system('date') or die("failed it"); #system() returns 0 when the command succeeds, hence the negation
Categories: IT Architecture, Perl, Programming Tags:

resolved – /lib/ bad ELF interpreter: No such file or directory

April 1st, 2014 No comments

When I ran a perl command today, I met the problem below:

[root@test01 bin]# /usr/local/bin/perl5.8
-bash: /usr/local/bin/perl5.8: /lib/ bad ELF interpreter: No such file or directory

Now let's check which package /lib/ belongs to on a good linux box:

[root@test02 ~]# rpm -qf /lib/

So here's the resolution to the issue:

[root@test01 bin]# yum install -y glibc.x86_64 glibc.i686 glibc-devel.i686 glibc-devel.x86_64 glibc-headers.x86_64

Categories: IT Architecture, Kernel, Linux, Systems Tags:

resolved – sudo: sorry, you must have a tty to run sudo

April 1st, 2014 4 comments

The error message below sometimes occurs when you run sudo <command>:

sudo: sorry, you must have a tty to run sudo

To resolve this, you may comment out "Defaults requiretty" in /etc/sudoers (edited by running visudo). Here is more info about this method.

However, sometimes it's not convenient or even not possible to modify /etc/sudoers, then you can consider the following:

echo -e "<password>\n"|sudo -S <sudo command>

For -S parameter of sudo, you may refer to sudo man page:

-S  The -S (stdin) option causes sudo to read the password from the standard input instead of the terminal device. The password must be followed by a newline character.

So -S bypasses the tty (terminal device) and reads the password from standard input instead. This lets us pipe the password to sudo.
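The requiretty check trips exactly when the session has no controlling terminal; tty(1) shows the same condition, e.g. when stdin is a pipe:

```shell
echo | tty   # stdin is a pipe here, not a terminal
# not a tty
```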

Resolved – print() on closed filehandle $fh at ./ line 6.

March 19th, 2014 No comments

You may find that print sometimes won't work as expected in perl, for example:

[root@centos-doxer test]# cat
use warnings;
open my $fh, ">", "a.txt"; #the open line was lost from the original listing; the path is illustrative
select $fh;
close $fh;
print "test";

You may expect "test" to be printed, but actually you got error message:

print() on closed filehandle $fh at ./ line 6.

So how did this happen? Please see my explanation:

[root@centos-doxer test]# cat
use warnings;
open my $fh, ">", "a.txt"; #the open line was lost from the original listing; the path is illustrative
select $fh;
close $fh; #here you closed the $fh filehandle, but you should now reset the output filehandle to STDOUT
print "test";

Now here's the updated script:

use warnings;
open my $fh, ">", "a.txt"; #path is illustrative
select $fh;
close $fh;
select STDOUT;
print "test";

This way, you'll get "test" as expected!


Categories: IT Architecture, Perl, Programming Tags:

set vnc not asking for OS account password

March 18th, 2014 No comments

As you may know, vncpasswd (part of the vnc-server package) is used to set the password asked for when connecting to vnc using a vnc client (such as tightvnc). When you connect to vnc-server, it'll ask for that password:

After you connect to the host using VNC, you may also find that the remote server asks again for the OS password (the one set by passwd):

In some cases, you may not want the second prompt. Here's the way to cancel this behavior:




Categories: IT Architecture, Linux, Systems Tags: ,

stuck in PXE-E51: No DHCP or proxyDHCP offers were received, PXE-M0F: Exiting Intel Boot Agent, Network boot canceled by keystroke

March 17th, 2014 No comments

If you installed your OS and tried to boot it up but got stuck with the following messages:


Then one possibility is that the configuration of your host's storage array is not right. For instance, it should be JBOD but you configured it as RAID6.

Please note that this is only one possibility for this error; you may search for the PXE error codes you encountered for more details.


  • Sometimes DHCP snooping may prevent PXE from functioning; you can read more
  • STP (Spanning-Tree Protocol) makes each port wait up to 50 seconds before data is allowed to be sent on the port. This delay in turn can cause problems with some applications/protocols (PXE, Bootworks, etc.). To alleviate the problem, PortFast was implemented on Cisco devices; the terminology might differ between vendors. You can read more
  • ARP caching

Oracle BI Publisher reports – send mail when filesystems getting full

March 17th, 2014 No comments

Let's assume you have one Oracle BI Publisher report for filesystem checking, and now you want to write a script that checks that report page and mails the system admins when filesystems are getting full. The default output of an Oracle BI Publisher report needs javascript to work, and as you may know, wget/curl cannot fetch javascript-rendered pages. So after logging on, the next step is to find the html version's url of that report to use in your script (the html page also has all records, while the javascript one shows only part of them):




Let's assume that the html's url is "", and the display of it was like the following:

bi report

Then here goes the script that checks this page for hosts with less than 10% available space and sends mail to the system admins:

use HTML::Strip;
#hosts that do not need reporting
my @remove_list = qw();
system("rm -f spacereport.html");
system("wget -q --no-proxy --no-check-certificate --post-data 'id=admin&passwd=password' '' -O spacereport.html");

#or just @spacereport=<$fh>;

#change array to hash
map {$pos{$index++}=$_} @spacereport;

#get location of <table> and </table>
#sort numerically ascending
for $char (sort {$a<=>$b} (keys %pos))
{
if($pos{$char} =~ /<table class="c27">/) {
$table_start = $char; #variable name reconstructed; the assignment was lost from the listing
}
if($pos{$char} =~ /<\/table>/) {
$table_end = $char; #variable name reconstructed
}
}

#get contents between <table> and </table>


#get clear text between <table> and </table>
my $hs=HTML::Strip->new();
my $clean_text = $hs->parse($table_htmlstr);


#remove empty array element
@array_filtered=grep { !/^\s+$/ } @array_filtered;

#remove entries from showing
@index_all = grep { $array_filtered[$_] =~ /$remove_list_s/ } 0..$#array_filtered;

for($i=0;$i<=$#index_all;$i++) {
@index_all_one = grep { $array_filtered[$_] =~ /$remove_list_s/ } 0..$#array_filtered;

system("rm -f space_mail_warning.txt");
select $fh_mail_warning;
#put lines that has free space lower than 10% to space_mail_warning.txt
if($array_filtered[$j+2] <= 10){
print "Host: ".$array_filtered[$j]."\n";
print "Part: ".$array_filtered[$j+1]."\n";
print "Free(%): ".$array_filtered[$j+2]."\n";
print "Free(GB): ".$array_filtered[$j+3]."\n";
print "============\n\n";
}
close $fh_mail_warning;

system("rm -f space_mail_info.txt");
select $fh_mail_info;
#put lines that has free space lower than 15% to space_mail_info.txt
if($array_filtered[$j+2] <= 15){
print "Host: ".$array_filtered[$j]."\n";
print "Part: ".$array_filtered[$j+1]."\n";
print "Free(%): ".$array_filtered[$j+2]."\n";
print "Free(GB): ".$array_filtered[$j+3]."\n";
print "============\n\n";
}
close $fh_mail_info;

#send mail
#select STDOUT;
if(-s "space_mail_warning.txt"){
system('cat space_mail_warning.txt | /bin/mailx -s "Space Warning - please work with component owners to free space"');
} elsif(-s "space_mail_info.txt"){
system('cat space_mail_info.txt | /bin/mailx -s "Space Info - Space checking mail"');
}

Categories: IT Architecture, Perl, Programming Tags:

wget and curl tips

March 14th, 2014 No comments

Imagine you want to download all files under one directory, and no files from other directories except for the directory 'downloads'. Then you can do this:

wget -r --level 100 -nd --no-proxy --no-parent --reject "index.htm*" --reject "*gif" '' #--level 100 is large enough, as I've seen no site has more than 100 levels of sub-directories so far.

wget -p -k --no-proxy --no-check-certificate --post-data 'id=username&passwd=password' <url> -O output.html

wget --no-proxy --no-check-certificate --save-cookies cookies.txt <url>

wget --no-proxy --no-check-certificate --load-cookies cookies.txt <url>

curl -k -u 'username:password' <url>

curl -k -L -d id=username -d passwd=password <url>

curl --data "loginform:id=username&loginform:passwd=password" -k -L <url>

Here's one curl example to get SSL certs info on LTM:


agent="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; InfoPath.2)"

curl -v -L -k -A "$agent" -c ${path}/cookie "https://ltm-url/tmui/login.jsp?msgcode=1&"

curl -v -L -k -A "$agent" -b ${path}/cookie -c ${path}/cookie -e "https://ltm-url/tmui/login.jsp?msgcode=1&" -d "username=myusername&passwd=mypassword" "https://ltm-url/tmui/logmein.html?msgcode=1&"

curl -v -L -k -A "$agent" -b ${path}/cookie -c ${path}/cookie -o ${path_root}/certs-env.html "https://ltm-url/tmui/Control/jspmap/tmui/locallb/ssl_certificate/list.jsp?&startListIndex=0&showAll=true"

Now you can have a check of /var/tmp/certs-env.html for SSL certs info of Big IP VIPs.

resolved – ssh Read from socket failed: Connection reset by peer and Write failed: Broken pipe

March 13th, 2014 No comments

If you meet the following errors when you ssh to a linux box:

Read from socket failed: Connection reset by peer

Write failed: Broken pipe

Then one possibility is that the linux box's filesystem is corrupted. In my case there was output to stdout:

EXT3-fs error ext3_lookup: deleted inode referenced

To resolve this, you need to make linux go into single user mode and run fsck -y <filesystem>. You can get the corrupted filesystem names during boot:

[/sbin/fsck.ext3 (1) -- /usr] fsck.ext3 -a /dev/xvda2
/usr contains a file system with errors, check forced.
/usr: Directory inode 378101, block 0, offset 0: directory corrupted

(i.e., without -a or -p options)

[/sbin/fsck.ext3 (1) -- /oem] fsck.ext3 -a /dev/xvda5
/oem: recovering journal
/oem: clean, 8253/1048576 files, 202701/1048233 blocks
[/sbin/fsck.ext3 (1) -- /u01] fsck.ext3 -a /dev/xvdb
u01: clean, 36575/14548992 files, 2122736/29081600 blocks

So in this case, I ran fsck -y /dev/xvda2 && fsck -y /dev/xvda5, then rebooted the host, and everything went well.


If two VMs are booted up on two hypervisors and these VMs share the same filesystem (like NFS), then after you fsck -y one FS and boot up the VM, the FS will corrupt again soon, as other running copies are still using that FS. So you need to first make sure that only one copy of the VM is running on the hypervisors of the same server pool.

Categories: IT Architecture, Kernel, Linux, Systems Tags:

tcpdump & wireshark tips

March 13th, 2014 No comments

tcpdump [ -AdDefIKlLnNOpqRStuUvxX ] [ -B buffer_size ] [ -c count ]

[ -C file_size ] [ -G rotate_seconds ] [ -F file ]
[ -i interface ] [ -m module ] [ -M secret ]
[ -r file ] [ -s snaplen ] [ -T type ] [ -w file ]
[ -W filecount ]
[ -E spi@ipaddr algo:secret,... ]
[ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ expression ]

#general format of a tcp protocol line

src > dst: flags data-seqno ack window urgent options
Src and dst are the source and destination IP addresses and ports.
Flags are some combination of S (SYN), F (FIN), P (PUSH), R (RST), W (ECN CWR) or E (ECN-Echo), or a single '.'(means no flags were set)
Data-seqno describes the portion of sequence space covered by the data in this packet.
Ack is sequence number of the next data expected the other direction on this connection.
Window is the number of bytes of receive buffer space available the other direction on this connection.
Urg indicates there is 'urgent' data in the packet.
Options are tcp options enclosed in angle brackets (e.g., <mss 1024>).

tcpdump -D #list of the network interfaces available
tcpdump -e #Print the link-level header on each dump line
tcpdump -S #Print absolute, rather than relative, TCP sequence numbers
tcpdump -s <snaplen> #Snarf snaplen bytes of data from each packet rather than the default of 65535 bytes
tcpdump -i eth0 -S -nn -XX vlan
tcpdump -i eth0 -S -nn -XX arp
tcpdump -i bond0 -S -nn -vvv udp dst port 53
tcpdump -i bond0 -S -nn -vvv host testhost
tcpdump -nn -S -vvv "dst host and (dst port 1521 or dst port 6200)"

tcpdump -nn -S udp dst port 111 #note that telnet is based on the tcp protocol, NOT udp. So if you want to test a UDP connection (udp is connection-less), you must start up the app first, then use tcpdump to verify.

tcpdump -nn -S udp dst portrange 1-1023

Wireshark Capture Filters (in Capture -> Options)

Wireshark DisplayFilters (in toolbar)


Host A sends a TCP SYNchronize packet to Host B.

Host B receives A's SYN.

Host B sends a SYNchronize-ACKnowledgement.

Host A receives B's SYN-ACK.

Host A sends ACKnowledge.

Host B receives ACK.
TCP socket connection is ESTABLISHED.

TCP Three Way Handshake



The upper part shows the states on the end-point initiating the termination.

The lower part shows the states on the other end-point.

So the initiating end-point (i.e. the client) sends a termination request to the server and waits for an acknowledgement in state FIN-WAIT-1. The server sends an acknowledgement and goes in state CLOSE_WAIT. The client goes into FIN-WAIT-2 when the acknowledgement is received and waits for an active close. When the server actively sends its own termination request, it goes into LAST-ACK and waits for an acknowledgement from the client. When the client receives the termination request from the server, it sends an acknowledgement and goes into TIME_WAIT and after some time into CLOSED. The server goes into CLOSED state once it receives the acknowledgement from the client.


You can refer to this article for a detailed explanation of the tcp three-way handshake when establishing/terminating a connection. For the tcpdump view, check below:

[c9sysdba@host2 ~]# telnet host1 14100
Connected to (
Escape character is '^]'.
telnet> quit
Connection closed.

[root@host1 ~]# tcpdump -vvv -S host host2
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
03:16:39.188951 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: TCP (6), length: 60) > S, cksum 0xa806 (correct), 3445765853:3445765853(0) ack 3946095098 win 5792 <mss 1460,sackOK,timestamp 854077220 860674218,nop,wscale 7> #2. host1 acks the SYN sent by host2, adding 1 to it as the number identifying this connection(3946095098). Then host1 sends its own SYN(3445765853).
03:16:41.233807 IP (tos 0x0, ttl 64, id 6650, offset 0, flags [DF], proto: TCP (6), length: 52) > F, cksum 0xdd48 (correct), 3445765854:3445765854(0) ack 3946095099 win 46 <nop,nop,timestamp 854079265 860676263> #5. host1 acks host2's F(3946095099), and then it sends a F just as host2 did(3445765854 unchanged).

[c9sysdba@host2 ~]# tcpdump -vvv -S host host1
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
03:16:39.188628 IP (tos 0x10, ttl 64, id 31059, offset 0, flags [DF], proto: TCP (6), length: 60) > S, cksum 0x265b (correct), 3946095097:3946095097(0) win 5792 <mss 1460,sackOK,timestamp 860674218 854045985,nop,wscale 7> #1. host2 sends a SYN packet to host1(3946095097)
03:16:39.188803 IP (tos 0x10, ttl 64, id 31060, offset 0, flags [DF], proto: TCP (6), length: 52) > ., cksum 0xed44 (correct), 3946095098:3946095098(0) ack 3445765854 win 46 <nop,nop,timestamp 860674218 854077220> #3. host2 acks the SYN sent by host1, adding 1 to identify this connection. The tcp connection is now established(3946095098 unchanged, ack 3445765854).
03:16:41.233397 IP (tos 0x10, ttl 64, id 31061, offset 0, flags [DF], proto: TCP (6), length: 52) > F, cksum 0xe546 (correct), 3946095098:3946095098(0) ack 3445765854 win 46 <nop,nop,timestamp 860676263 854077220> #4. host2 sends a F(in) with an Ack. F informs host1 that no more data needs to be sent(3946095098 unchanged), and the ack is used to identify the connection previously established(3445765854 unchanged)
03:16:41.233633 IP (tos 0x10, ttl 64, id 31062, offset 0, flags [DF], proto: TCP (6), length: 52) > ., cksum 0xdd48 (correct), 3946095099:3946095099(0) ack 3445765855 win 46 <nop,nop,timestamp 860676263 854079265> #6. host2 acks host1's F(3445765855), with the '.' (no flags) packet identifying the connection(3946095099 unchanged).

psftp through a proxy

March 5th, 2014 No comments

You may know that, we can set proxy in putty for ssh to remote host, as shown below:

And if you want to copy files from the remote site to your local box, you can use putty's psftp.exe. There are many options for psftp.exe:

C:\Users\test>d:\PuTTY\psftp.exe -h
PuTTY Secure File Transfer (SFTP) client
Release 0.62
Usage: psftp [options] [user@]host
-V print version information and exit
-pgpfp print PGP key fingerprints and exit
-b file use specified batchfile
-bc output batchfile commands
-be don't stop batchfile processing if errors
-v show verbose messages
-load sessname Load settings from saved session
-l user connect with specified username
-P port connect to specified port
-pw passw login with specified password
-1 -2 force use of particular SSH protocol version
-4 -6 force use of IPv4 or IPv6
-C enable compression
-i key private key file for authentication
-noagent disable use of Pageant
-agent enable use of Pageant
-batch disable all interactive prompts

Although there's a proxy setting option in putty.exe, there's no proxy setting for psftp.exe! So what should you do if you want to copy files back to your local box, but a firewall blocks you from doing it directly and you must use a proxy?

As you may notice, there's "-load sessname" option in psftp.exe:

-load sessname Load settings from saved session

This option means that if you have a session saved from putty.exe, then you can use psftp.exe -load <session name> to copy files from the remote site. For example, suppose you saved a session named mysession in putty.exe, in which you set the proxy; then you can use "psftp.exe -load mysession" to copy files from the remote site (no need for username/password, as you must have entered those in the putty.exe session):

C:\Users\test>d:\PuTTY\psftp.exe -load mysession
Using username "root".
Remote working directory is /root
psftp> ls
Listing directory /root
drwx------ 3 ec2-user ec2-user 4096 Mar 4 09:27 .
drwxr-xr-x 3 root root 4096 Dec 10 23:47 ..
-rw------- 1 ec2-user ec2-user 388 Mar 5 05:07 .bash_history
-rw-r--r-- 1 ec2-user ec2-user 18 Sep 4 18:23 .bash_logout
-rw-r--r-- 1 ec2-user ec2-user 176 Sep 4 18:23 .bash_profile
-rw-r--r-- 1 ec2-user ec2-user 124 Sep 4 18:23 .bashrc
drwx------ 2 ec2-user ec2-user 4096 Mar 4 09:21 .ssh
psftp> help
! run a local command
bye finish your SFTP session
cd change your remote working directory
chmod change file permissions and modes
close finish your SFTP session but do not quit PSFTP
del delete files on the remote server
dir list remote files
exit finish your SFTP session
get download a file from the server to your local machine
help give help
lcd change local working directory
lpwd print local working directory
ls list remote files
mget download multiple files at once
mkdir create directories on the remote server
mput upload multiple files at once
mv move or rename file(s) on the remote server
open connect to a host
put upload a file from your local machine to the server
pwd print your remote working directory
quit finish your SFTP session
reget continue downloading files
ren move or rename file(s) on the remote server
reput continue uploading files
rm delete files on the remote server
rmdir remove directories on the remote server

Now you can get/put files as usual.


If you do not need a proxy to connect to the remote site, then you can use the psftp.exe command line to get remote files directly. For example:

d:\PuTTY\psftp.exe root@ -i d:\PuTTY\aws.ppk -b d:\PuTTY\script.scr -bc -be -v

And d:\PuTTY\script.scr is the script with the put/get commands:

cd /backup
lcd c:\
mget *.tar.gz

Categories: IT Architecture, Linux, Systems Tags: ,

checking MTU or Jumbo Frame settings with ping

February 14th, 2014 No comments

You may set your linux box's MTU to a jumbo frame size of 9000 bytes or larger, but if the switch the box is connected to does not have jumbo frames enabled, then your linux box may meet problems when sending & receiving packets.

So how can we get an idea of whether jumbo frames are enabled on the switch or the linux box?

Of course you can log on to the switch and check, but we can also verify this from the linux box that connects to the switch.

On the linux box, you can see the MTU setting of each interface using ifconfig:

[root@centos-doxer ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 08:00:27:3F:C5:08
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:50502 errors:0 dropped:0 overruns:0 frame:0
TX packets:4579 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9835512 (9.3 MiB) TX bytes:1787223 (1.7 MiB)
Base address:0xd010 Memory:f0000000-f0020000

As stated above, the 9000 here doesn't mean that jumbo frames actually work from your box to the switch, as you can verify with the command below:

[root@testbox ~]# ping -c 2 -M do -s 1472 testbox2
PING ( 1472(1500) bytes of data. #so here 1500 bytes go through the network
1480 bytes from ( icmp_seq=1 ttl=252 time=0.319 ms
1480 bytes from ( icmp_seq=2 ttl=252 time=0.372 ms

--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.319/0.345/0.372/0.032 ms
[root@testbox ~]#
[root@testbox ~]#
[root@testbox ~]# ping -c 2 -M do -s 1473 testbox2
PING ( 1473(1501) bytes of data. #so here 1501 bytes can not go through. From here we can see that MTU for this box is 1500, although ifconfig says it's 9000
From ( icmp_seq=1 Frag needed and DF set (mtu = 1500)
From ( icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- ping statistics ---
0 packets transmitted, 0 received, +2 errors
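
To avoid doing the header arithmetic by hand each time (the ICMP payload is the MTU minus the 20-byte IP header and 8-byte ICMP header, so 1472 for a 1500-byte MTU), a small wrapper can help. This is only a sketch: icmp_payload and check_mtu are made-up helper names, and testbox2 stands in for your real peer.

```shell
# icmp payload size for a given MTU: MTU - 20 (IP header) - 8 (ICMP header)
icmp_payload() {
    echo $(( $1 - 28 ))
}

# send one non-fragmenting ping sized to exactly fill the given MTU
check_mtu() {
    host=$1; mtu=$2
    size=$(icmp_payload "$mtu")
    if ping -c 1 -M do -s "$size" "$host" >/dev/null 2>&1; then
        echo "MTU $mtu OK towards $host"
    else
        echo "MTU $mtu NOT supported towards $host"
    fi
}

# example: check_mtu testbox2 1500; check_mtu testbox2 9000
```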

Also, if the switch is a Cisco one, you can verify whether the switch port connecting the server has jumbo frames enabled by sniffing a CDP (Cisco Discovery Protocol) packet. Here's one example:

-bash-4.1# tcpdump -i eth0 -nn -v -c 1 ether[20:2] == 0x2000 #ether[20:2] == 0x2000 means capture only packets that have a 2 byte value of hex 2000 starting at byte 20
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
03:44:14.221022 CDPv2, ttl: 180s, checksum: 692 (unverified), length 287
Device-ID (0x01), length: 46 bytes: ''
Address (0x02), length: 13 bytes: IPv4 (1)
Port-ID (0x03), length: 16 bytes: 'Ethernet111/1/12'
Capability (0x04), length: 4 bytes: (0x00000228): L2 Switch, IGMP snooping
Version String (0x05), length: 66 bytes:
Cisco Nexus Operating System (NX-OS) Software, Version 5.2(1)N1(4)
Platform (0x06), length: 11 bytes: 'N5K-C5548UP'
Native VLAN ID (0x0a), length: 2 bytes: 123
AVVID trust bitmap (0x12), length: 1 byte: 0x00
AVVID untrusted ports CoS (0x13), length: 1 byte: 0x00
Duplex (0x0b), length: 1 byte: full
MTU (0x11), length: 4 bytes: 1500 bytes #so here MTU size was set to 1500 bytes
System Name (0x14), length: 18 bytes: 'ucf-c1z3-swi-5k01b'
System Object ID (not decoded) (0x15), length: 14 bytes:
0x0000: 060c 2b06 0104 0109 0c03 0103 883c
Management Addresses (0x16), length: 13 bytes: IPv4 (1)
Physical Location (0x17), length: 13 bytes: 0x00/snmplocation
1 packets captured
1 packets received by filter
0 packets dropped by kernel
110 packets dropped by interface
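
If you only care about the port name and the advertised MTU, you can pipe the tcpdump output through a small awk filter. This is a sketch: parse_cdp is a hypothetical helper name, and the field positions assume the tcpdump -v output format shown above.

```shell
# pull the switch port and advertised MTU out of a verbose CDP dump
parse_cdp() {
    awk '
        /Port-ID/      { port = $NF }     # last field holds the quoted port name
        /MTU \(0x11\)/ { mtu = $(NF-1) }  # the number before the trailing "bytes"
        END { gsub("\047", "", port); print port, mtu }
    '
}

# usage: tcpdump -i eth0 -nn -v -c 1 'ether[20:2] == 0x2000' | parse_cdp
```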


  1. As for "-M do" parameter for ping, you may refer to man ping for more info. And as for DF(don't fragment) and Path MTU Discovery mentioned in the manpage, you may read more on and
  2. Here's more on tcpdump tips and
  3. Maximum packet size is the MTU plus the data-link header length. Packets are not always transmitted at the maximum packet size, as we can see from the output of iptraf -z eth0.
  4. Here's more about MTU:

The link layer, which is typically Ethernet, sends information into the network as a series of frames. Even though the layers above may have pieces of information much larger than the frame size, the link layer breaks everything up into frames (each of which encloses an IP packet carrying TCP/UDP/ICMP in its payload) to send them over the network. The maximum size of data in a frame is known as the maximum transmission unit (MTU). You can use network configuration tools such as ip or ifconfig to set the MTU.

The size of the MTU has a direct impact on the efficiency of the network. Each frame in the link layer has a small header, so using a large MTU increases the ratio of user data to overhead (header). When using a large MTU, however, each frame of data has a higher chance of being corrupted or dropped. For clean physical links, a high MTU usually leads to better performance because it requires less overhead; for noisy links, however, a smaller MTU may actually enhance performance because less data has to be re-sent when a single frame is corrupted.

Here's one image of layers of network frames:



Oracle VM operations – poweron, poweroff, status, stat -r

January 27th, 2014 No comments

Here's the script:

#!/usr/bin/perl
# Note: the OVM server must be reachable before running status/poweron/poweroff
use Net::SSH::Perl;

$host = $ARGV[0];
$operation = $ARGV[1];
$user = 'root';
$password = 'password';

if($host eq "help") {
    print "$0 OVM-name status|poweron|poweroff|stat-r\n";
    exit 0;
}

$ssh = Net::SSH::Perl->new($host);
$ssh->login($user, $password);
($stdout, $stderr, $exit) = $ssh->cmd("ovm -uadmin -pwelcome1 vm ls|grep -v VM_test");

if($operation eq "status") {
    print $stdout;
} elsif($operation eq "poweroff") {
    for (split /\n/, $stdout) {
        next if $_ =~ "Server_Pool|OVM|Powered"; # skip headers and already powered-off VMs
        if($_ =~ /(.*?)\s+([0-9]{1,})\s+([0-9]{1,})\s+([0-9]{1,})\s+([a-zA-Z]{1,})\s+(.*)/){
            $ssh->cmd("ovm -uadmin -pwelcome1 vm poweroff -n $1 -s $6");
            sleep 12;
        }
    }
} elsif($operation eq "poweron") {
    for (split /\n/, $stdout) {
        next if $_ =~ "Server_Pool|OVM|Running"; # skip headers and already running VMs
        if($_ =~ /(.*?)\s+([0-9]{1,})\s+([0-9]{1,})\s+([0-9]{1,})\s+([a-zA-Z]{1,})\s+Off(.*)/){
            $ssh->cmd("ovm -uadmin -pwelcome1 vm poweron -n $1 -s $6");
            #print "ovm -uadmin -pwelcome1 vm poweron -n $1 -s $6";
            sleep 20;
        }
    }
} elsif($operation eq "stat-r") {
    for (split /\n/, $stdout) {
        if($_ =~ /(.*?)\s+([0-9]{1,})\s+([0-9]{1,})\s+([0-9]{1,})\s+(Shutting\sDown|Initializing)\s+(.*)/){
            #print "ovm -uadmin -pwelcome1 vm stat -r -n $1 -s $6";
            $ssh->cmd("ovm -uadmin -pwelcome1 vm stat -r -n $1 -s $6");
            sleep 1;
        }
    }
}

You can use the following to make the script run in parallel:

for i in <all OVMs>;do (./ $i status &);done
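
Backgrounded subshells work, but they give no ceiling on concurrency. A hedged alternative using xargs caps the number of simultaneous jobs; run_parallel is a made-up wrapper name, and ovm_ops.pl stands in for whatever you named the perl script above.

```shell
# run "<cmd> <host> <operation>" for every host on stdin, at most 8 at a time
run_parallel() {
    xargs -P8 -I{} sh -c "$1 {} $2"
}

# usage: printf 'ovm1\novm2\novm3\n' | run_parallel ./ovm_ops.pl status
```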

avoid putty ssh connection sever or disconnect

January 17th, 2014 2 comments

After some time, an idle ssh session will disconnect itself. If you want to avoid this, you can try running the following command:

while [ 1 ];do echo hi;sleep 60;done &

This will print the message "hi" to standard output every 60 seconds, which keeps the connection from idling out.


You can also set some keepalive parameters in /etc/ssh/sshd_config; you can refer to
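
For reference, the two server-side keepalive knobs in sshd_config are ClientAliveInterval and ClientAliveCountMax; a minimal fragment (the values are just examples) looks like:

```
# /etc/ssh/sshd_config - probe the client every 60s; drop after 3 unanswered probes
ClientAliveInterval 60
ClientAliveCountMax 3
```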

“Include snapshots” made NFS shares from ZFS appliance shrinking

January 17th, 2014 No comments

Today I met one weird issue when checking an NFS share mounted from a ZFS appliance. As the space on that filesystem was getting low, I removed some files, but the NFS filesystem mounted on the client was shrinking: its total size kept getting lower! Shouldn't the free space get larger and the size stay unchanged?

After some debugging, I found that this was caused by the ZFS appliance share's "Include snapshots" setting. When I unchecked "Include snapshots", the issue was gone!


Categories: Hardware, NAS, Storage Tags:

resolved – ESXi Failed to lock the file

January 13th, 2014 No comments

When I was powering on one VM in ESXi, one error occurred:

An error was received from the ESX host while powering on VM doxer-test.
Cannot open the disk '/vmfs/volumes/4726d591-9c3bdf6c/doxer-test/doxer-test_1.vmdk' or one of the snapshot disks it depends on.
Failed to lock the file

And also:

unable to access file since it is locked

This apparently was caused by some storage issue. I googled first and found most posts telling stories about ESXi's working mechanism; I tried some of them, but with no luck.

Then I recalled that our datastore was on NFS/ZFS, and NFS has file locking issues as you know. So I mounted the NFS share the datastore was using and removed one file named lck-c30d000000000000. After this, the VM booted up successfully! (Alternatively, we can log on to the ESXi host and remove the lock file there.)
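
Before deleting anything, it's safer to list the lock files first. A sketch, where find_locks is a made-up helper and the path is an example from this post:

```shell
# list lck-* files under a VM directory so you can review them before removal
find_locks() {
    find "$1" -type f -name 'lck-*'
}

# usage: find_locks /vmfs/volumes/4726d591-9c3bdf6c/doxer-test
```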

install java jdk on linux

January 7th, 2014 No comments

Here are the steps if you want to install java on linux:

wget <path to jre-7u25-linux-x64.rpm> -P /tmp
rpm -ivh /tmp/jre-7u25-linux-x64.rpm
mkdir -p /root/.mozilla/plugins
rm -f /root/.mozilla/plugins/
ln -s /usr/java/jre1.7.0_25/lib/amd64/ /root/.mozilla/plugins/
ll /root/.mozilla/plugins/


You'll need to install the i386 version of the JRE if your firefox browser is 32-bit. You can install jre6 from here. Download a package like "jre-6u33-linux-i586-rpm.bin", then run chmod +x jre-6u33-linux-i586-rpm.bin && ./jre-6u33-linux-i586-rpm.bin. You may locate /usr/java/jre1.6.0_33/bin/javaws for opening a remote console when prompted.
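
Picking the right package can be scripted from the browser binary's bitness. In this sketch, jre_arch is a hypothetical helper, and the package file names are just the ones mentioned in this post:

```shell
# map the output of `file` on the firefox binary to a matching JRE package name
jre_arch() {
    case "$1" in
        *64-bit*) echo "jre-7u25-linux-x64.rpm" ;;
        *32-bit*) echo "jre-6u33-linux-i586-rpm.bin" ;;
        *)        echo "unknown" ;;
    esac
}

# usage: jre_arch "$(file "$(which firefox)")"
```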

add another root user and set password

January 7th, 2014 No comments

In linux, do the following to add another root user and set password:

mkdir -p /home/root2
useradd -u 0 -o -g root -G root -s /bin/bash -d /home/root2 root2
echo password | passwd --stdin root2
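
Since root2 shares uid 0 with root, it's worth being able to audit which accounts have uid 0. A small sketch; list_uid0 is a made-up helper that defaults to /etc/passwd but accepts an alternate file for testing:

```shell
# print every account whose uid field is 0
list_uid0() {
    awk -F: '$3 == 0 { print $1 }' "${1:-/etc/passwd}"
}

# usage: list_uid0   # should show both root and root2 after the steps above
```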

Categories: IT Architecture, Linux, Systems Tags:

oracle database tips – management

December 30th, 2013 No comments

netca #su - grid, can change listener port with this
oifcfg #Oracle Interface Configuration Tool, used for adding new public/private interfaces
appvipcfg #appvipcfg create -network=1 -ip -vipname httpd-vip -user=root
lsnrctl #su - grid first; change_password, save_config
asmcmd -p #asmcmd -p ls -l; ls --permission; lsdg; find -t datafile DATA/ sys*;pwd;lsct<asm client>;help cp

v$parameter(for current session), v$system_parameter(for new sessions), v$spparameter

sqlnet.ora #su - grid, Profiles, define sequence of naming method;access control(through netmgr)

orapwd #sync dictionary/password file after upgrading oracle db
/etc/oratab #which DBs are installed, and control whether dbstart/dbshut is used to start/stop DB
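
The autostart flag is the third colon-separated field of each /etc/oratab entry (sid:oracle_home:Y|N). A sketch to list the databases dbstart would bring up; autostart_dbs is a hypothetical helper taking an optional file argument for testing:

```shell
# print the SIDs flagged Y for autostart, skipping comment lines
autostart_dbs() {
    awk -F: '!/^#/ && $3 == "Y" { print $1 }' "${1:-/etc/oratab}"
}
```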

pmon registers to listener, alter system register. PMON worked with dispatcher, shared server architecture; dedicated server architecture(for restricted operations)

DB modes

startup nomount #read spfile or init.ora and start up oracle memory structures/background processes. instance is started but db is not associate with instance. may recreate control files in this mode.
alter database mount #the instance mounts the database. Control files(contains name of datafiles and redo logs) are read, but datafiles and redo logs still not open
startup force #restart instance(first shutdown abort then startup. if not shutdown properly before and cannot startup now)
startup mount

alter database open [read only]; #datafiles and redo logs open, ready for use

startup restrict #only DBA can use the DB
alter system quiesce restrict #The activities of other users continue until they become inactive
alter system unquiesce
shutdown normal(all connections quit)/immediate(rollback first)/transactional(after commit)/abort

SQL> select open_mode from v$database; #read write

segment, extents, blocks

Each segment is a single instance of a table, partition, cluster, index, or temporary or undo segment. So, for example, a table with two indexes is implemented as three segments in the schema.

As data is added to Oracle, it will first fill the blocks in the allocated extents, and once those extents are full, new extents can be added to the segment as long as space allows.


emctl status/start/stop dbconsole(management agent)/agent/oms
emca -config dbcontrol db -cluster #EM configuration assistant
emca -reconfig dbcontrol -cluster #reconfigure the Console to start on a different node (or on more than one if you desire)
emca -displayConfig dbcontrol -cluster #current config, Management Server's location
emca -addInstdb/-deleteInst

AWR(Automatic Workload Repository) is used for storing database statistics that are used for performance tuning. A set of tables is created for the AWR under the SYS schema in the SYSAUX tablespace.

MMON(Manageability Monitor) captures base statistics every 60 minutes. snapshots and in memory statistics are called AWR(ADDM runs automatically after each AWR snapshot)

MMNL(Manageability Monitor Light) performing tasks related to the Active Session History (ASH), ASH refresh every second,record what the sessions are waiting for

ADDM(automatic database diagnostic monitor) diagnoses AWR report and suggest potential solutions
SQL>select * from v$sql where cpu_time>200000 #cpu_time is in microseconds, so this is 0.2s
explain plan for select count(*) from lineitem; #and then issue select * from table(DBMS_XPLAN.DISPLAY);
show parameter dump_dest; #/u01/app/oracle/diag/rdbms/devdb/devdb1/{trace,alert}
$ORACLE_BASE/cfgtoollogs/dbca #for DBCA trace and log files


adump,  audit files
dpdump, Data Pump Files
hdump, High availability trace files
pfile, Initialization file;


$GRID_HOME/log/hostname is $LOG_HOME


ohasd/ #ohasd.bin's booting log


ADR(automatical diagnotics repository)


adrci #su - oracle/grid, manage alert/trace files
select name, value from gv$diag_info; #ADR  directory structure for all instances, e.g. Diag Enabled/ADR Base/ADR Home/Diag Trace/Diag Alert/Health Monitor/Active Problem Count
adrci exec="show home";
show home;
set base xxx;<show parameter diagnostic_dest>
set homepath diag/rdbms/orcl6/orcl6;<or export ADR_HOME='xxx'>
show alert -p "message_text like '%start%'";
show alert -tail 5 -f;
show incident;
ips create package incident <incident id>;
ips generate package 1 in /tmp;
show tracefile;
show tracefile -I <incident id>;


show trace <tracefile name from show tracefile -I, like PROD1_lmhb_7430_i27729.trc>;
HM(health check)


SQL> select name from v$hm_check; #health check names
SQL> exec dbms_hm.run_check('DB Structure Integrity Check');
adrci> show hm_run;  #to check HM details

adrci> create report hm_run <RUN_NAME from show hm_run>;

adrci> show report hm_run <RUN_NAME from show hm_run>;

SQL>  select description, damage_description from v$hm_finding where run_id = 62; #run_id is from show hm_run


DBMS_MONITOR and trace(sql_trace is deprecated)


DBMS_MONITOR traces for a specific session(session_trace_enable or database_trace_enable for all sessions), module(serv_mod_act_trace_enable), action(serv_mod_act_trace_enable), or client identifier


SQL> SELECT sid, serial# FROM v$session WHERE username = 'TPCC'; # you may need to join V$SESSION to other dynamic performance views, such as V$SQL, to identify the session of interest
SQL> EXECUTE dbms_monitor.session_trace_enable (session_id=>164); #enable tracing for a specific session, the result will be appended to the current trace file
SQL> EXECUTE dbms_monitor.session_trace_disable (session_id=>164); #disable tracing for the session
tkprof #view the activities of a session with finer granularity after trace files generated
Patch types


Interim patch #cannot wait until the next patch set to receive the product fix. use opatch to install
CPU(Critical Patch Update, overall release of security fixes each quarter)
PS(Patch Set, minor version upgrade, -> use OUI<Oracle Universal Installer> to apply)
PSU(Patch Set Updates, cumulative patches,  low risk and RAC rolling installable, include the latest CPU)
BP(Bundle Patch,  BP for exadata)
Major release update(11.1 -> 11.2)
opatch query -is_rolling_patch <patch path/> #four modes: all node patch mode/rolling patch mode/minimum downtime patch mode/local patch mode
opatch query <patch path> -all
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME #ensure inventory is not corrupted and see what patches have been applied #su - oracle/grid
/etc/oraInst.loc #inventory_loc(/u01/app/oraInventory)
/u01/app/oraInventory/ContentsXML/inventory.xml #stores oracle software products & their oracle_homes location
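
To see which homes the central inventory knows about without opening the XML by hand, a grep sketch works; list_homes is a made-up helper, and it assumes each HOME element carries a LOC="..." attribute as in a standard inventory.xml:

```shell
# print the LOC attribute of every HOME element in inventory.xml
list_homes() {
    grep -o 'LOC="[^"]*"' "$1" | cut -d'"' -f2
}

# usage: list_homes /u01/app/oraInventory/ContentsXML/inventory.xml
```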
Opatch example
srvctl stop home -o $ORACLE_HOME -s /var/tmp/home_stop.txt -n node1 #su - oracle
srvctl stop home -o $ORACLE_HOME -s /var/tmp/home_stop_grid.txt -n node1 #su - grid
$GRID_HOME/crs/install/ -unlock -crshome $GRID_HOME #su - root, Unlock CRS home
/u01/app/crs/OPatch/opatch apply #su - grid and patch
$GRID_HOME/crs/install/ -patch #su - root, boot up local HAS that has been patched
crsctl check crs #check status, or crsctl check cluster -all. -n node1 for single node
srvctl start home -o $ORACLE_HOME -s /var/tmp/home_stop_grid.txt -n node1 #su - grid
srvctl start home -o $ORACLE_HOME -s /var/tmp/home_stop.txt -n node1 #su - oracle
now patch node2



server pools #Policy-managed databases, add/remove instance to cluster automatically

$GRID_HOME/racg/usrco #server side callouts

alter diskgroup diskgroupName online disk diskName #re-activate asm disk in DISK_REPAIR_TIME

select name,value from v$asm_attribute where group_number=3 and name not like 'template%'; #disk group attributes

Add RAC nodes


network/user ids/asmlib and asm module/disks
$ORACLE_HOME/bin/cluvfy stage -post hwos -n london4 -verbose #su - grid
cluvfy stage -pre nodeadd -n london4  -fixup -fixupdir /tmp
$GRID_HOME/oui/bin/ -silent "CLUSTER_NEW_NODES={london4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={london4-vip}"  #with or without GNS, then execute scripts prompted
cluvfy stage -post nodeadd -n london4  #done for adding node to grid
$ORACLE_HOME/oui/bin/ -silent "CLUSTER_NEW_NODES={london4}" #add RDBMS software, then execute scripts prompted


Remove RAC nodes
ensure that no database instance or other custom resource type uses that node
ocrconfig -manualbackup
dbca #remove database instance from a node
srvctl config listener -a #detailed listener configuration
srvctl disable listener -l LISTENER -n london2
srvctl stop listener -l LISTENER -n london2
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES={london2} -local #on node2(to be deleted), pay attention to -local


$ORACLE_HOME/deinstall/deinstall -local #remove RDBMS home. if RDBMS binaries was installed on shared storage, then $ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=$ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={london1}"  #on node1
$GRID_HOME/crs/install/ -deconfig -force # update the OCR and remove the node from Grid, more on
crsctl delete node -n london2  #su - root, on node1
/u01/app/crs/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=london2" CRS=TRUE -local  #on node2, remove grid software
./deinstall -local


./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={london1,london3}" CRS=TRUE #on node1

RAC startup/shutdown

crsctl start crs #su - root, on each node.crsctl start cluster will boot up daemons not running.  -n node1 to specified one node

srvctl start asm -n node1/node2 #su - oracle

srvctl start database -d devdb #su - oracle

ohasd(oracle restart, will respawn) will start up crs(through reading the contents of /etc/oracle/scls_scr/<hostname>/root/ohasdrun, more on ). oracle restart can start up ons/eons daemons: srvctl add/start ons/eons on single instance DB

crs(clusterware OR grid infrastructure) manages asm/db/vip/listener/ons<oracle notification services, which send out FAN(fast application notification) events> cluster resource(startup/stop/monitor/failover).

Oracle Cluster Registry

ocrconfig [-local] -showbackup #su - grid. Configuration tool for OCR, which saves info for cluster resources so that crs can use it for managing them. OLR(Oracle Local Registry) mainly stores info about OHASD locally.

ocrdump /var/tmp/ocrdump.txt #run as root

ocrcheck #run as grid/oracle/root

restore OLR

ocrconfig -local -restore /u01/app/crs/cdata/london1/backup_20091010_211006.olr #init 2 first. check log in $GRID_HOME/log/hostname/client
#if OLR file is lost
touch /u01/app/crs/cdata/london1.olr #init 2 first
chown grid:oinstall london1.olr
ocrconfig -local -restore /u01/app/crs/cdata/london1/backup_20100625_085111.olr

restore OCR

crsctl stop cluster -all -f # stop the Clusterware stack on all nodes
crsctl start crs -excl # start the Clusterware stack in exclusive mode(start the required background processes that make up a local ASM instance). use crsctl disable crs to disable the automatic starting of the Clusterware stack. reboot the node. Then crsctl start crs -excl again.
If the diskgroup containing the voting files was lost # create a new one with exactly the same name and mount it. set the ASM compatibility to 11.2 for the diskgroup(alter diskgroup OCRVOTE set attribute 'compatible.asm'='11.2'). execute /etc/init.d/oracleasm scandisks on the nodes for which you did not create the disk
ocrconfig -restore backup_file_name #log files under $GRID_HOME/log/hostname/client. if errors saying crs is running, then crsctl stop res ora.crsd -init
crsctl stop crs #at this time, ohasdrun's content will be 'stop', although init.ohasd is still running
crsctl start crs
Start the cluster on the remaining nodes. If you have disabled the automatic start of the Clusterware stack in Step 2, re-enable it using crsctl enable crs

ocssd #cluster synchronization server

control membership of RAC nodes(join/leave). update voting disk every second with the status of node itself. can work with thirdparty HA like IBM hacmp except for clusterware

GPnP #grid plug and play

gpnptool get #su - grid first, info about public/private NIC/voting disk path can be seen

CTSS #Cluster time synchronization (CTSS)

observer<if ntp>/active mode<if not ntp>

GNS #grid naming service

manages VIP instead of using name server

mDNS, multicast DNS

restore voting disk

if OCR corrupted, then recover OCR first

crsctl stop cluster -all -f # stop the Clusterware stack on all nodes
crsctl start crs -excl # start the Clusterware stack in exclusive mode(start the required background processes that make up a local ASM instance). use crsctl disable crs to disable the automatic starting of the Clusterware stack. reboot the node. Then crsctl start crs -excl again.
If the diskgroup containing the voting files was lost # create a new one with exactly the same name and mount it. set the ASM compatibility to 11.2 for the diskgroup(alter diskgroup OCRVOTE set attribute 'compatible.asm'='11.2'). execute /etc/init.d/oracleasm scandisks on the nodes for which you did not create the disk. compatible.rdbms = 11.2  access_control.enabled = true
crsctl replace votedisk + disk_group_name
crsctl stop crs
crsctl start crs
Start the cluster on the remaining nodes. If you have disabled the automatic start of the Clusterware stack in Step 2, re-enable it using crsctl enable crs

ASM permission

alter diskgroup data set ownership owner = 'orareport' for file '+DATA/PROD/DATAFILE/users.259.679156903';

alter diskgroup data set permission owner = read write, group = read only, other = none  for file '+DATA/FIN/datafile/example.279.723575017';

select name, permissions,user_number,usergroup_number from v$asm_file f natural join v$asm_alias t where group_number = 3 and name = 'EXAMPLE.279.723575017' ;

asm template

select redundancy,stripe,name,primary_region,mirror_region from v$asm_template  where group_number = 3; #stripe has coarse/fine types.variable extent sizes can be used for coarse striping, the first 20,000 extents always equal the allocation unit (AU) size. The next 20,000 extents are 4 times the size of the AU. fine striping is used only for control files, online redo logs, and flashback logs. The stripe size is 128KB. Also by default, eight stripes are created for each file; therefore, the optimum number of disks in a disk group is a multiple of eight.

alter diskgroup data add template allhot attributes (hot);
create tablespace hottbs datafile '+DATA(allhot)' size 10M;
select bytes,type,redundancy,primary_region,mirror_region, hot_reads,hot_writes,cold_reads,cold_writes from v$asm_file where file_number = 284;

add/remove asm disk

For device-mapper-multipath

1.format the underlying block device, usually /dev/sd* on one node

2.With the device partitioned, the administrator can use either partprobe or kpartx to re-read the partition table on the other cluster nodes

3.A restart of the multipath daemon (“service multipathd restart”) should show the new partition in /dev/mapper(may use multipath -f first)

4.With the new block devices detected on all nodes, you could use ASMLib to mark the disk as an ASM disk on one node of the cluster

alter diskgroup data drop disk 'DATA11' rebalance power 3;

alter DISKGROUP DGNORM1 add DISK '/dev/rdsk/disk5' name disk5, '/dev/rdsk/disk6' name disk6;
alter diskgroup DG_DEST_DF add FAILGROUP FailgroupB disk '/asmdisks/asmdiskB48';
select group_number,name from v$asm_diskgroup;
select DISK_NUMBER, name, failgroup, group_number from v$asm_disk where group_number=3 order by name; #or order by 2
alter diskgroup DG_DEST_DF add FAILGROUP FailgroupA disk '/asmdisks/asmdiskA28';
alter diskgroup DG_SRC_DF drop disk asmdiskA28;

Then  check V$ASM_OPERATION for information about a time remaining estimate and later physically remove the disk(It is safe to remove the ASM disk physically from the cluster only when the HEADER_STATUS of the V$ASM_DISK view shows “FORMER” for the disk you dropped.)

SQL> alter diskgroup data add disk 'ORCL:NEWSAN01', 'ORCL:NEWSAN02', 'ORCL:NEWSAN03',
drop disk 'ORCL:OLDSAN01', 'ORCL:OLDSAN02', 'ORCL:OLDSAN03' rebalance power 11; #add and drop in parallel, in one rebalance

kfed read /dev/oracleasm/disks/VOL1 #show ASM header info, su - grid

Startup status

STARTUP FORCE/nomount(Starts the ASM instance but does not mount any disk groups)

mount<or open,mounts all disks registered in the ASM_DISKGROUPS initialization parameter>

ASM operations

show parameter ASM_DISKGROUPS;
select name,state from v$asm_diskgroup ; #su - grid

SQL> select * from v$asm_disks;
SQL> select type from V$ASM_FILE group by TYPE #having amount > 300

select db_name,status,instance_name from v$asm_client; #connected clients. when OCR is stored in ASM, the asm instance itself will be its client

alter diskgroup DATA check;  #repair

srvctl start diskgroup -g diskgroupName #rather than using 'alter diskgroup mount all'

/etc/init.d/oracleasm createdisk VOL4 /dev/loop1 #dd and losetup first

create diskgroup FILEDG external redundancy disk 'ORCL:VOL4';

SQL> show parameter asm_diskstring; #search path for candidate disks

SQL> create diskgroup DGEXT1 external redundancy disk '/dev/rdsk1/disk1';
SQL> create diskgroup DGNORM1 normal redundancy disk FAILGROUP controller1 DISK '/dev/rdsk/disk1' name disk1, '/dev/rdsk/disk2' name disk2 FAILGROUP controller2 DISK '/dev/rdsk/disk3' name disk3, '/dev/rdsk/disk4' name disk4; #mirroring based on AU(1M or Exadata's 4M). In Normal,one mirror for each extent(2 fail groups at least); In high, two mirrors for each extent(3 fail groups at least). For each extent written to disk, another extent will be written into another failure group to provide redundancy.

asm options

compatible.asm, compatible.rdbms, compatible.advm, au_size, sector_size, disk_repair_time, access_control.enabled, access_control.umask

CRSCTL resource

crsctl status resource -t #crsctl status resource -h for help. crs_stat -t

crsctl status resource -t -init #check status of ohasd stack

crsctl status resource ora.devdb.db -p #get detail info about one resource(for example dependency relationships), resource profile

crsctl add resource TEST.db -type cluster_resource -file TEST.db.config #register the resource in Grid. TEST.db.config is output from -p

crsctl getperm resource TEST.db  #resource permission

crsctl setperm resource TEST.db -o oracle #oracle can startup this resource after this

crsctl delete resource ora.gsd #crsctl start/stop resource <resource name>

/u01/app/11.2.0/grid/bin/scriptagent #used to protect user defined resources, start/stop/check/clean/abort/<relocate>(combined with user action scripts)


srvctl config scan #get scan info

srvctl status database -d devdb ##instance running status

srvctl config scan_listener #srvctl stop scan_listener , srvctl modify scan_listener -p  1526, change scan listener Endpoint(port); srvctl start scan_listener ; srvctl status scan_listener

show parameter local_listener/remote_listener;

#add scan ip

srvctl stop scan_listener
srvctl stop scan #scan vip
srvctl status scan_listener
srvctl status scan
srvctl modify scan -n #su - root
srvctl config scan
srvctl modify scan_listener -u #Update SCAN listeners to match the number of SCAN VIPs
srvctl config scan_listener
srvctl start scan
srvctl start scan_listener

SRVCTL service

srvctl add service -d PROD -s REPORTING -r PROD3 -a PROD1 -P BASIC -e SESSION # add a service named reporting to your four-node administrator-managed database that normally uses the third node, but can alternatively run on the first node
srvctl start service -d PROD -s REPORTING -i PROD1
srvctl config service -d PROD -s reporting #check status
srvctl status service -d PROD -s reporting
srvctl relocate service -d PROD -s reporting -i PROD1 -t PROD3 #different parameters for admin managed/policy managed



###Backup and Recovery
imp/exp scott/tiger file=scott.exp #logical export, client based. run @?/rdbms/admin/catexp.sql first
sql*loader(non-oracle DB) #sqlldr
expdp, impdp #data pump, can not be used when db is read only
expdp/impdp help=y
SQL>create directory backup_dir as '/backup/'; #default is dpump_dir
SQL>grant read,write on directory backup_dir to scott;
expdp scott/tiger dumpfile=scott.dmp directory=backup_dir (tables=scott.emp); #imp_full_database role
expdp \"/ as sysdba\" schemas=scott dumpfile=scott.dmp directory=backup_dir; #full/schemas/tables/tablespaces/transport_tablespaces<only metadata>
expdp system/manager DUMPFILE=expdat.dmp FULL=y LOGFILE=export.log COMPRESSION=ALL
expdp \"/ as sysdba\" schemas=SH ESTIMATE_ONLY=y ESTIMATE=BLOCKS;
expdp sh/sh parfile=exp.par
QUERY=customers:"where cust_id=1"
create user "SHNEW" identified by "Testpass";
impdp \"/ as sysdba\" dumpfile=sh.dmp directory=backup_dir SCHEMAS=SH REMAP_SCHEMA=SH:SHNEW
impdp \"/ as sysdba\" dumpfile=sh.dmp directory=backup_dir REMAP_SCHEMA=SH:SHNEW TABLES=SH.PRODUCTS TABLE_EXISTS_ACTION=SKIP
all use space in undotbs<id>
before submit, the status is active, and use a small portion of expired data, some unexpired will become expired. most of the space used will be the free space in undo tablespace
after submit, active will become unexpired, and expired won't change
SQL>select status,sum(bytes/1024/1024) from dba_undo_extents where tablespace_name='UNDOTBS1' group by status;
SQL>select tablespace_name,sum(bytes/1024/1024) from dba_free_space where tablespace_name='UNDOTBS1' group by tablespace_name; #undo free space, dba_data_files
undo data is subsequently logged as redo logs(when commit, lgw0 will write data into redo log; then the changes to the data files are in the buffer and can be written out at a later time<dbw0 write to datafile>)
SQL>show parameter undo_retention;
guarantee 900 seconds of  data(may exceed 900).if auto-extend and tablespace is used up,autoextend;if reached maxsize,then unexpired will be override(only when nogurantee; sql will fail if is  guarantee)
if fixed size, undo_retention is ignored, oracle will change maximum time automaticallynoguarentee #default, unexpired can also be overrided, so may not guarantee <undo_retention> seconds of read consistency
SQL>alter tablespace xxx retention guarantee #to make sure long-running queries will succeed. if tablespace is used up, then sql will fail
SQL>select tablespace_name, retention from dba_tablespaces; #guarantee or noguarantee
SQL>select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time, to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention from v$undostat order by end_time; #retention time calculated by system every 10 minutes; data more than 4 days old is stored in DBA_HIST_UNDOSTAT
SQL>rollback; #undo an uncommitted change, e.g. after delete from ...
SQL>flashback table emp to before drop;
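The flashback-drop command can be seen end to end in a small sketch, assuming the recycle bin is enabled (the default):

```sql
-- illustrative sequence: drop a table, inspect the recycle bin, restore it
SQL> drop table emp;
SQL> select object_name, original_name from recyclebin;
SQL> flashback table emp to before drop;
SQL> select count(*) from emp; -- the table is back with its data
```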


In contrast to undo, redo is used to recover data that has already been committed.


RMAN enable archivelog mode
shutdown immediate;
startup mount
back up your database
RMAN> backup as copy database tag="db_cold_090721";
RMAN> list copy;
Update init params as needed:
log_archive_dest_n || db_recovery_file_dest
alter database archivelog; #NOTE: apparently you can only do this from sqlplus, not rman
alter database open
SQL> select LOG_MODE from v$database;
SQL> archive log list #su - oracle. some configurations, and whether archive log mode is enabled or not; archive log destination
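The steps above can be collected into one session sketch; the tag and destination are illustrative:

```sql
-- assumed end-to-end sequence for enabling archivelog mode
SQL> shutdown immediate;
SQL> startup mount;
RMAN> backup as copy database tag="db_cold_090721"; -- cold backup first
SQL> alter database archivelog;  -- from sqlplus, not rman
SQL> alter database open;
SQL> select log_mode from v$database; -- should now return ARCHIVELOG
```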

RMAN & cold backup

RMAN offline backup: performs an immediate or normal shutdown followed by a startup mount. This does not require archivelog mode.
RMAN online backup: for this, the database must be open and in archivelog mode.

startup nomount/restore control files/mount/recover/open
SQL>recover database until cancel; #or: until change 1234567; until time '2004-04-15:14:33:00' (recovery from a hot backup)
alter database open resetlogs #required after restoring the control file; resets the log sequence number to 1<v$log, v$logfile> and invalidates all previous archived logs

alter tablespace <name> begin backup #or alter database begin backup
[cp or ocopy (windows)] #no need to backup online redo logs(but you should archive the current redo logs and back those up)
alter tablespace <name> end backup #or alter database end backup, select * from v$backup<scn>

using recovery catalog instead of target database control file
SQL>create tablespace tbs_rman datafile '+DATA' size 200m autoextend on;
SQL>create user rman identified by rman temporary tablespace temp default tablespace tbs_rman quota unlimited on tbs_rman;
SQL>grant recovery_catalog_owner to rman;
RMAN> connect catalog rman/rman@devdb --connect to recovery catalog
RMAN> create catalog tablespace tbs_rman; --create recovery catalog
rman target sys/Te\$tpass@devdb catalog rman/rman@devdb #connect to target database and recovery catalog
RMAN > connect catalog rman/rman@devdb
RMAN > connect target sys/Te\$tpass@devdb
RMAN> register database; #register target database to recovery catalog
Commands frequently used
rman target /
RMAN> show all; #rman configurations
rman> backup current controlfile;
RMAN> backup database plus archivelog;
SQL> alter database backup controlfile to '/u01/app/oracle/control.bak' [reuse]; #binary backup; reuse overwrites an existing file
SQL> alter database backup controlfile to trace as '/u01/app/oracle/control.trace'; #text backup
rman> backup tablespace new_tbs; #scenario: the datafile is then lost, and errors occur when querying or creating tables
SQL>col NAME for a60
SQL> select FILE#, STATUS, NAME from v$datafile;
SQL>alter tablespace new_tbs offline immediate; #take the tablespace offline before restoring
rman>restore datafile 10; #restore the file from backup
rman>recover datafile 10; #apply redo logs
SQL>alter tablespace new_tbs online;
RMAN>crosscheck backup; #checks that the RMAN catalog is in sync with the backup files on disk or the media management catalog. Missing backups will be marked as “Expired.”
alter system set max_dump_file_size=1000 scope=both;
show parameter log_archive_duplex_dest; #not have a single point of failure
show parameter archive_dest;
select dest_name,status,destination from V$ARCHIVE_DEST;
SQL> show parameter log_archive_format;
SQL> alter system set log_archive_dest_1='location=+DA_SLCM07' scope=both; #After this, you can remove old archive logs on the filesystem(usually under $ORACLE_HOME/dbs)
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'; #use setting of DB_RECOVERY_FILE_DEST
SQL> show parameter db_recovery_file_dest #<Flash Recovery Area>
SQL> alter system set db_recovery_file_dest_size=9G scope=both; #startup mount first
RMAN> connect target sys/Testpass;
RMAN> delete noprompt ARCHIVELOG UNTIL TIME 'sysdate-4';
RMAN> DELETE noprompt BACKUP COMPLETED BEFORE 'sysdate-2';
RMAN> list incarnation; #database incarnation/version
RMAN> crosscheck copy;
RMAN> delete expired copy; --remove expired copies; 'delete archivelog all' removes all archived logs
RMAN> resync catalog
RMAN> list backup summary;
RMAN> list backup by file; #list backup sets and their detailed files and pieces
RMAN> report obsolete; # report on backups that are no longer needed because they exceed the retention policy
RMAN> restore database preview summary; #preview the restore and see the summary for the restore
RMAN> list backupset tag=tbs;
RMAN> configure controlfile autobackup; #backup control file automatically
RMAN> configure retention policy to redundancy 4; #to start deleting backups after four backups have been taken
RMAN> configure retention policy to recovery window of 15 days; # make point-in-time recovery possible up to the last 15 days and to make backups taken more than 14 days ago obsolete
RMAN> host 'echo "start `date`"';
RMAN> validate database;
RMAN> validate backupset 7;
RMAN> validate datafile 10;
RMAN> backup validate database archivelog all;
RMAN> restore database validate;
RMAN> restore archivelog all validate;
SQL> select name,completion_time from v$archived_log;

RMAN> crosscheck archivelog all;
RMAN> sql 'alter system switch logfile'; #forces a log switch so the current redo log is archived(only for the current thread<instance>)

SQL> alter system archive log current #ensures all redo logs have been archived and waits for the archiving to complete(best practice on RAC; slower than switch logfile). Later dbwr writes the checkpoint(SGA dirty data) to datafiles/control files and updates the SCN
RMAN> list archivelog all;
SQL>select dbms_flashback.get_system_change_number from dual; #current SCN. select CURRENT_SCN from v$database;

rman>list failure; #Data recovery advisor, not supported on RAC
rman>advise failure <362> details;
rman>repair failure;

alter system checkpoint; #sync dirty buffers to the datafiles and update file headers
alter database drop logfile group n; #remove a redo log group

Backup policies


SQL> alter database set standby database to maximize protection/availability/performance; #data guard modes
SQL> select force_logging from v$database;
alter database [NO] force logging; #forces all changes to be logged even if nologging. NOLOGGING/LOGGING(default)/FORCE LOGGING
SQL> select tablespace_name,logging,force_logging from dba_tablespaces;
select table_name,logging from user_tables; #object level
alter table tb_a nologging;
RMAN scripts
RMAN> run{ #full backup
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> database plus archivelog delete input #remove archived log after backed up
5> format='/u01/app/oracle/whole_%d_%U'
6> tag='whole_bak';
7> release channel ch1;}
RMAN> run{ #0-level incremental backup(differential by default)
2> allocate channel ch1 device type disk;
3> allocate channel ch2 device type disk;
4> backup as compressed backupset
5> incremental level 0
6> database plus archivelog delete input
7> format='/u01/app/oracle/inc_0_%d_%U'
8> tag='Inc_0';
9> release channel ch1;
10> release channel ch2;}

RMAN> run{ #1-level incremental backup(differential by default)
2> allocate channel ch1 device type disk;
3> allocate channel ch2 device type disk;
4> backup as compressed backupset
5> incremental level 1 database
6> format='/u01/app/oracle/Inc_1_%d_%U'
7> tag='Inc_1';
8> release channel ch1;
9> release channel ch2;}

RMAN> run{ #1-level incremental backup(cumulative)
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> incremental level 1 cumulative database
5> format '/u01/app/oracle/Cum_1_%d_%U'
6> tag='Cul_1';
7> release channel ch1;}

RMAN> run{ #backup tablespaces
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> tablespace EXAMPLE,USERS
5> format='/u01/app/oracle/tbs_%d_%U'
6> tag='tbs';}

RMAN> run{ #backup datafile
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> datafile 3
5> format='/u01/app/oracle/df_%d_%U'
6> tag='df';
7> release channel ch1;}

RMAN> run{ #backup archived logs from an SCN
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> archivelog from scn 9214472
5> format='/u01/app/oracle/arc_%d_%U'
6> tag='arc';
7> release channel ch1;}

RMAN> run{ #Image Copy backup
2> allocate channel ch1 device type disk;
3> backup as copy datafile 1,4
4> format '/u01/app/oracle/df_2_%d_%U'
5> tag 'copyback';
6> release channel ch1;}

RMAN > run { allocate channel c1 type disk; #Image Copy backup
RMAN > copy datafile 1 to '/u01/back/system.dbf';}
RMAN> configure backup optimization on;
RMAN> replace script BackupTEST1 { #stored scripts are kept in the recovery catalog
allocate channel d1 device type disk;
sql 'alter system archive log current';
backup incremental level 2 cumulative database;
release channel d1;
}
RMAN> run {execute script BackupTEST1;}

RMAN> replace script fullRestoreTEST1 { #recover
allocate channel ch1 type disk;
# set a new location for restored archive logs
set archivelog destination to '/TD70/sandbox/TEST1/arch';
startup nomount;
restore controlfile;
alter database mount;
restore database; #file-level restore. restore database/tablespace/datafile/controlfile/archivelog
recover database; #data-level recover: applies redo logs and keeps SCNs consistent. recover database/tablespace/datafile
alter database open resetlogs;
release channel ch1;
}
RMAN> host 'echo "start `date`"';
RMAN> run {execute script fullRestoreTEST1;}
RMAN> host 'echo "stop `date`"';

###oracle SQL tips is here

self defined timeout for telnet on Linux

December 26th, 2013 No comments

telnet's default timeout value is relatively high, so you may want to lower it to something like 5 seconds. Here's a way we can fulfill this:


$command &
commandpid=$!
( sleep $waitfor ; kill -9 $commandpid > /dev/null 2>&1 ) &
sleeppid=$!
wait $commandpid > /dev/null 2>&1
kill $sleeppid > /dev/null 2>&1

timeout telnet 1521 >> $output
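Putting the kill-after-N-seconds pattern together as a runnable sketch — `sleep 10` stands in for the telnet call, and the 2-second `waitfor` is just an example value:

```shell
#!/bin/sh
# generic timeout wrapper: kill the command if it runs longer than $waitfor
waitfor=2
sleep 10 &                                   # stand-in for: telnet <host> 1521
commandpid=$!
( sleep $waitfor ; kill -9 $commandpid > /dev/null 2>&1 ) &
sleeppid=$!
wait $commandpid > /dev/null 2>&1
status=$?                                    # 128+9 when killed by the watchdog
kill $sleeppid > /dev/null 2>&1
echo "command exit status: $status"
```

A non-zero exit status from `wait` signals that the command was killed, i.e. it timed out.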

Also, we can use expect and set a timeout for it. When telnet is driven by expect, the telnet timeout is enforced through expect's own timeout value:


set timeout 30

send "<put telnet command here>\r"
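A fuller sketch of that expect approach; the host and port are placeholders, and this assumes expect and telnet are installed:

```
#!/usr/bin/expect -f
# hypothetical connectivity check: fail if telnet cannot connect in 5 seconds
set timeout 5
spawn telnet <host> 1521
expect {
    "Connected" { exit 0 }
    timeout     { puts "connection timed out"; exit 1 }
}
```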

Add static routes in linux which will survive reboot and network bouncing

December 24th, 2013 No comments

We can see that in linux, the file /etc/sysconfig/static-routes is invoked by /etc/init.d/network:

[root@test-linux ~]# grep static-routes /etc/init.d/network
# Add non interface-specific static-routes.
if [ -f /etc/sysconfig/static-routes ]; then
grep "^any" /etc/sysconfig/static-routes | while read ignore args ; do

So we can add rules in /etc/sysconfig/static-routes to let network routes survive reboot and network bouncing. The format of /etc/sysconfig/static-routes is like:

any net <network> netmask <netmask> gw <gateway>
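To preview what the init script will actually execute, you can feed it a sample file and substitute `echo` for `/sbin/route` — the addresses below are made up for the demo:

```shell
#!/bin/sh
# simulate how /etc/init.d/network expands static-routes entries;
# echo stands in for /sbin/route so nothing is changed on the system
routes=$(mktemp)
cat > "$routes" <<'EOF'
any net 10.10.0.0 netmask 255.255.0.0 gw 192.168.1.254
any net 172.16.5.0 netmask 255.255.255.0 gw 192.168.1.254
EOF

expanded=$(grep "^any" "$routes" | while read ignore args ; do
    echo "/sbin/route add -$args"   # becomes: route add -net ... netmask ... gw ...
done)
echo "$expanded"
rm -f "$routes"
```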

To make route in effect immediately, you can use route add:

route add -net <network> netmask <netmask> gw <gateway>

But remember that to change the default gateway, we need to modify /etc/sysconfig/network (set GATEWAY=).

After the modification, bounce the network using service network restart to put the changes into effect.


You need to make sure a network ID follows -net, or you'll see the error "route: netmask doesn't match route address".

remove duplicate images using fdupes and expect in linux

December 13th, 2013 No comments

I've got several thousand pictures, but most of them had several exact copies. So at first I had to remove the duplicates by hand.

Later, I thought of md5sum in linux, which gives the same string for files with exactly the same contents. I tried to write a program around it, and that took me quite a while.

Then I searched google and found that linux has fdupes, which does the job very well. fdupes detects duplicate files based on file size and md5 value, and with the -d parameter it prompts you to preserve one copy or all copies of the duplicates and remove the rest. You can read more about fdupes here
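The md5sum idea can be sketched directly in shell — hash every file, sort, and flag each file whose checksum was already seen; the sandbox files below exist only for the demo:

```shell
#!/bin/sh
# toy duplicate detector in the spirit of fdupes, based on md5sum only
dir=$(mktemp -d)
echo "same content"  > "$dir/a.jpg"
echo "same content"  > "$dir/b.jpg"
echo "other content" > "$dir/c.jpg"

# print every file whose checksum has already been seen (the duplicates)
dupes=$(md5sum "$dir"/*.jpg | sort | awk 'seen[$1]++ { print $2 }')
echo "duplicates: $dupes"
rm -rf "$dir"
```

Note that fdupes also compares file sizes first and does a byte-by-byte check, so this is only the core idea, not a replacement.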

As all the pictures were on a windows machine, I installed cygwin along with fdupes and expect. Then I wrote a small script to preserve only one copy of each duplicate picture (without expect you have to enter your choice by hand, since fdupes has no built-in option to automatically preserve a single copy). Here's my program:

$ cat fdupes.expect
set timeout 1000000
spawn /home/andy/
expect "preserve files" {
send "1\r";exp_continue
}

$ cat /home/andy/
fdupes.exe -d /cygdrive/d/pictures #yup, my pictures are all on this directory on windows, i.e. d:\pictures

After this, you can just run fdupes.expect, and it will reserve only one copy and remove other duplicates for you.

PS: Here's man page of fdupes

Common storage multi path Path-Management Software

December 12th, 2013 No comments
Vendor           Path-Management Software
Hewlett-Packard  AutoPath, SecurePath
Microsoft        MPIO
Hitachi          Dynamic Link Manager
EMC              PowerPath
IBM              RDAC, MultiPath Driver
VERITAS          Dynamic Multipathing (DMP)