US timezone map – PST MST CST EST AKST HST

January 16th, 2013

(This post consisted of a US timezone map image covering the PST, MST, CST, EST, AKST and HST zones; the image is not reproduced here.)

Categories: Misc Tags:

snmptrapd traphandle configuration example

January 16th, 2013

This article shows the basic configuration of snmptrapd and its traphandle directive.

snmptrapd runs on a linux host named "test-centos".
The host sending snmptrap messages in this example is named "test-zfs-host".

First, let's set snmptrapd up on the server side:

###Server side
[root@test-centos snmp]# cat /etc/snmp/snmptrapd.conf
#traphandle default /bin/mail -s "snmptrapd messages" <put your mail address here>
traphandle default /root/lognotify
authCommunity log,execute,net public

[root@test-centos snmp]# service snmptrapd restart
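
If traps don't show up later, you can also run snmptrapd in the foreground while testing (standard net-snmp flags: -f stays in the foreground, -Lo logs to stdout):

[root@test-centos snmp]# snmptrapd -f -Lo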

[root@test-centos snmp]# cat /root/lognotify
#!/bin/sh
# snmptrapd feeds the trap on stdin: hostname, IP, then "oid value" pairs
read host
read ip
vars=
while read oid val
do
  if [ "$vars" = "" ]
  then
    vars="$oid = $val"
  else
    vars="$vars, $oid = $val"
  fi
done
echo trap: $host $ip $vars >> /var/tmp/snmptrap.out
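
Also make sure the handler script is executable, otherwise snmptrapd will have nothing to run:

[root@test-centos snmp]# chmod +x /root/lognotify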

And to test whether snmptrapd is working as expected:

###On client side
snmptrap -v2c -c public test-centos:162 "" SNMPv2-MIB::sysDescr SNMPv2-MIB::sysDescr.0 s "test-zfs-host test-zfs-host.ip this is a test snmptrap string"

After running this command, check /var/tmp/snmptrap.out on the snmptrapd server side (test-centos):

[root@test-centos ~]# cat /var/tmp/snmptrap.out

If you're using a sun zfs head, you can set snmptrap destinations in the zfs BUI (Configuration -> SNMP).

perl script for monitoring sun zfs memory usage

January 16th, 2013

On the zfs appliance's aksh, I can check memory usage with the following:

test-zfs:> status memory show
Cache 719M bytes
Unused 15.0G bytes
Mgmt 210M bytes
Other 332M bytes
Kernel 7.79G bytes
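
For scripting, the same aksh command can be run non-interactively over ssh, which is exactly what the perl script below relies on:

[root@test-centos ~]# ssh root@test-zfs "status memory show"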

So now I want to collect this memory usage information automatically for SNMP's use. Here are the steps.

First, configure cpan to follow prerequisites automatically:

cpan> o conf prerequisites_policy follow
cpan> o conf commit

Since the host reaches the internet through a proxy, set the following in /etc/wgetrc:

http_proxy = http://<proxy host>:<port>/
ftp_proxy = http://<proxy host>:<port>/
use_proxy = on

Now install the Net::SSH::Perl perl module:

PERL_MM_USE_DEFAULT=1 perl -MCPAN -e 'install Net::SSH::Perl'

And to confirm that Net::SSH::Perl was installed, run the following command:

perl -e 'use Net::SSH::Perl' #no output is good, as it means the package was installed successfully

Now here's the perl script to get the memory usage of the sun zfs head:

[root@test-centos ~]# cat /var/tmp/mrtg/
#!/usr/bin/perl
use strict;
use warnings;
use Net::SSH::Perl;

my $host = 'test-zfs';
my $user = 'root';
my $password = 'password';

my $ssh = Net::SSH::Perl->new($host);
$ssh->login($user, $password);
my ($stdout, $stderr, $exit) = $ssh->cmd("status memory show");
if ($exit) {
    # non-zero exit code means the command failed on the zfs head
    print "ErrorCode:$exit\n";
    print "ErrorMsg:$stderr";
} else {
    my @std_arr = split(/\n/, $stdout);
    shift @std_arr;    # drop the first line of output
    foreach (@std_arr) {
        if ($_ =~ /.+\b\s+(.+)M\sbytes/) {
            $_ = $1;           # value is already in MB
        } elsif ($_ =~ /.+\b\s+(.+)G\sbytes/) {
            $_ = $1 * 1024;    # normalize GB values to MB
        }
    }
    foreach (@std_arr) {
        print $_."\n";
    }
}
exit $exit;
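
To expose the script's output via SNMP, one option is net-snmp's extend directive in snmpd.conf. A minimal sketch (the script filename zfs_mem.pl and the label zfs-memory are my assumptions, since the path above is truncated):

extend zfs-memory /usr/bin/perl /var/tmp/mrtg/zfs_mem.pl

After restarting snmpd, the lines can be read with snmpwalk -v2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendOutLine.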

If you get the following error messages during installation of a perl module:

[root@test-centos ~]# perl -MCPAN -e 'install SOAP::Lite'
CPAN: Storable loaded ok
CPAN: LWP::UserAgent loaded ok
Fetching with LWP:
LWP failed with code[500] message[LWP::Protocol::MyFTP: connect: Connection timed out]
Fetching with Net::FTP:

Trying with "/usr/bin/links -source" to get
ELinks: Connection timed out

Then check whether you need a proxy to get on the internet (run cpan> o conf init to re-configure cpan; also set http_proxy, ftp_proxy and use_proxy in /etc/wgetrc as shown above).
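
Alternatively, here's a minimal sketch of setting the proxy from within the cpan shell itself (the proxy host and port are placeholders):

cpan> o conf http_proxy http://<proxy host>:<port>/
cpan> o conf ftp_proxy http://<proxy host>:<port>/
cpan> o conf commit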


resolved – yum returned Segmentation fault error on centos

January 6th, 2013

The following error messages occurred while running yum list or yum update on a centos/rhel host:

[root@test-centos ~]# yum list
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Segmentation fault

As always, I did a strace on this:

[root@test-centos ~]# strace yum list
open("/var/cache/yum/el5_ga_base/primary.xml.gz", O_RDONLY) = 6
lseek(6, 0, SEEK_CUR) = 0
read(6, "\37\213\10\10\0\0\0\0\2\377/u01/basecamp/www/el5_"..., 8192) = 8192
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++

And here are the error messages from /var/log/messages:

[root@test-centos ~]# tail /var/log/messages
Jan 6 07:07:44 test-centos kernel: yum[5951]: segfault at 3500000000 ip 000000350cc79e0a sp 00007fff05633b78 error 4 in[350cc00000+14e000]

After some googling, I found the yum "Segmentation fault" was caused by a conflict between yum and a newer, manually installed zlib. To resolve this problem, point the libz symlinks back at the older distribution version of zlib. Here are the detailed steps (on this EL5 host the distribution copy is libz.so.1.2.3; the newer copy's filename may differ on your system and is shown as libz.so.<new version> below):

[root@test-centos ~]# cd /usr/lib
[root@test-centos lib]# ls -l libz*
-rw-r--r-- 1 root root 125206 Jul 9 07:40 libz.a
lrwxrwxrwx 1 root root 22 Aug 2 07:10 libz.so -> /usr/lib/libz.so.<new version>
lrwxrwxrwx 1 root root 22 Aug 2 07:10 libz.so.1 -> /usr/lib/libz.so.<new version>
-rwxr-xr-x 1 root root 75028 Jun 7 2007 libz.so.1.2.3
-rwxr-xr-x 1 root root 99161 Jul 9 07:40 libz.so.<new version>

[root@test-centos lib]# rm libz.so libz.so.1
rm: remove symbolic link `libz.so'? y
rm: remove symbolic link `libz.so.1'? y
[root@test-centos lib]# ln -s libz.so.1.2.3 libz.so
[root@test-centos lib]# ln -s libz.so.1.2.3 libz.so.1

After these steps, you should now be able to run yum commands without any issue.

Also, after you're done with yum, you should switch zlib back to the newer version. Here are the steps:

[root@test-centos ~]# cd /usr/lib
[root@test-centos lib]# rm libz.so libz.so.1
rm: remove symbolic link `libz.so'? y
rm: remove symbolic link `libz.so.1'? y
[root@test-centos lib]# ln -s libz.so.<new version> libz.so
[root@test-centos lib]# ln -s libz.so.<new version> libz.so.1
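
If you run yum often, flipping the symlinks back and forth by hand gets tedious. Here's a minimal wrapper sketch that does the swap around a single yum invocation (the OLD/NEW filenames are assumptions; check them against your own ls -l libz* output first):

#!/bin/sh
# run yum against the older distribution zlib, then restore the newer build
OLD=libz.so.1.2.3    # distribution copy -- verify locally
NEW=libz.so.1.2.7    # your newer build -- verify locally, version may differ
cd /usr/lib || exit 1
ln -sf "$OLD" libz.so
ln -sf "$OLD" libz.so.1
yum "$@"
rc=$?
ln -sf "$NEW" libz.so
ln -sf "$NEW" libz.so.1
exit $rc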

Categories: IT Architecture, Linux, Systems Tags:

zfs iops on nfs iscsi disk

January 5th, 2013

On the zfs storage 7000 series BUI, you may find statistics like the following: NFSv3 at 3052 IOPS, iSCSI at 1021 IOPS, but Disk at only 1583 IOPS.

This may seem quite weird: NFSv3 (3052) + iSCSI (1021) is larger than Disk (1583). As the I/O for the NFSv3/iSCSI protocols finally goes to Disk, why is the IOPS for the two protocols larger than the Disk IOPS?

Here's the reason:

Disk operations for NFSv3 and iSCSI are logical operations. These logical operations are combined and optimized by the zfs storage before finally being issued as physical Disk operations; for example, several small adjacent writes may be coalesced into one physical write, and reads may be served from cache without touching the disks at all.


1. When access to the disks is sequential (like VOD), disk throughput, rather than IOPS, becomes the bottleneck of performance. In contrast, IOPS limits disk performance when access is random.

2. You may also wonder how Disk IOPS can be as high as 1583: this number is the sum across all the disk controllers of the zfs storage system. As ballpark figures for HDD IOPS, a single drive delivers roughly 75-100 IOPS at 7.2K RPM, 125-150 at 10K RPM, and 175-210 at 15K RPM.


Categories: Hardware, NAS, SAN, Storage Tags:

zfs shared lun storage set up for oracle RAC

January 4th, 2013
  • create iSCSI Target Group

Open the zfs BUI and navigate through "Configuration" -> "SAN" -> "iSCSI Targets". Create a new iSCSI target by clicking the plus sign: give it an alias, then select the network interface(s) to use (may be bond or LACP; check under "Configuration" -> "Network" and "Configuration" -> "Cluster"; use the NAS interface, and note that you can select multiple interfaces). After creating the target, drag it to the right side under "iSCSI Target Groups" to create an iSCSI Target Group; you can give that group a name too. Note down the iSCSI Target Group's iqn, as it is important for later operations.

  • create iSCSI Initiator Group

Before going on to the next step, we first need the iSCSI initiator IQN of each host we want a LUN allocated to. On each linux host, read the IQN with the following command (you can edit this file beforehand, e.g. make the iqn end with the hostname so later LUN operations are easier; do a /etc/init.d/iscsi restart after modifying initiatorname.iscsi):

[root@test-host ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=<your host's iqn name>

Now go back to the zfs BUI and navigate through "Configuration" -> "SAN" -> "Initiators". On the left side, click "iSCSI Initiators", then click the plus sign. Enter the IQN you got in the previous step and give it a name (do this for each host you want an iSCSI LUN allocated to). After this, drag the newly created iSCSI initiators from the left side to the right side to form a new iSCSI Initiator Group (drag both items from the left onto the same item on the right to form one group).

  • create shared LUNs for iSCSI Initiator Group

Now we create LUNs for the iSCSI Initiator Group (so that shared LUNs can be allocated; oracle RAC, for example, needs shared storage). Click the diskette sign on the just-created iSCSI Initiator Group, select the project you want the LUN allocated from, give it a name, and assign the volume size. Select the right target group you created before (you can also create a new one, e.g. RAC, in shares).

PS: You can also go to "Shares" -> "Luns" and create LUN(s) using the target group you created and the default initiator group. Note that one LUN needs one iSCSI target, so you should create more iSCSI targets and add them to the iSCSI target group if you want more LUNs.

  • scan shared LUNs from hosts

Now we're going to operate on the linux hosts. On each host you want an iSCSI LUN allocated to, do the following steps:

iscsiadm -m discovery -t st -p <ip address of your zfs storage> #discover available targets from a discovery portal (use the cluster's ip if there's a zfs cluster)
iscsiadm -m node -T <iSCSI Target Group iqn> -p <ip address of your zfs storage> -l #log in to a specific target; or use the output of the command above ($1 is --portal, $3 is --targetname, -l is --login); use -u to log out of a specified record
service iscsi restart

After these steps, your host(s) should see the newly allocated iSCSI LUN(s); you can run fdisk -l to confirm.


Here's more about iscsiadm:

iscsiadm -m session -P 1 #display all current sessions logged in; -P sets the print level (1 to 3). From this we can tell which target the local disks are mapped to (e.g. /dev/sde is mapped to a target, from which we can tell the NAS ZFS appliance name; on the ZFS appliance, given the target group name, we can check which LUNs use that target group, and thus get the mapping between the local iscsi disk and the ZFS LUN)

iscsiadm -m session --rescan #rescan all sessions

iscsiadm -m session -r SID --rescan #rescan a specific session #SID can be got from iscsiadm -m session -P 1

iscsiadm -m node -T targetname -p ipaddress -u #Log out of a specific target

If you want to remove a specific iSCSI LUN from the system, do the following:

cd /sys/class/iscsi_session/session<SID>/device/target<scsi host number>:0:0/<scsi host number>:0:0:<Lun number>
echo 1 > delete
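
Alternatively, if you already know the device name, the same deletion can be done through /sys/block (sdX here is a placeholder for your iscsi disk):

echo 1 > /sys/block/sdX/device/delete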

iscsiadm -m node -T targetname -p ipaddress #Display information about a target
iscsiadm -m node -s -T targetname -p ipaddress #Display statistics about a target

iscsiadm -m discovery -o show #View iSCSI database regarding discovery
iscsiadm -m node -o show #View iSCSI database regarding targets to log into
iscsiadm -m session -o show #View iSCSI database regarding sessions logged into
multipath -ll #View if the targets are multipathed (MPIO)

You can find more information about the iscsi disks in /sys/class/{scsi_device, scsi_disk, scsi_generic, scsi_host} and /sys/block/ after getting the info from iscsiadm -m session -P 3.
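
To get a quick target-to-disk mapping in one shot, you can filter the level-3 output, which contains both the Target lines and the attached disk names:

iscsiadm -m session -P 3 | egrep 'Target:|Attached scsi disk'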

Here's the command-line equivalent of this article (aksh of the oracle ZFS appliance):

shares project rac186187
set mountpoint=/export/rac186187
set quota=1T
set readonly=true
set default_permissions=777

set default_user=root

set default_group=root
set sharenfs="sec=sys,rw=@<network>/<prefix>,root=@<network>/<prefix>"
#get reservation
#get pool
#get snapdir #snapdir = visible
#get default_group #default_group = root
#get default_user #default_user = root
#get exported #exported = true

configuration net interfaces list
configuration net datalinks list

configuration san iscsi targets create rac186187
set alias=rac186187
set interfaces=aggr93001

configuration san iscsi targets list
configuration san iscsi targets groups create rac186187
set name=rac186187

[on each host, cat /etc/iscsi/initiatorname.iscsi to get its initiator iqn]
configuration san iscsi initiators create testhost186
set alias=testhost186

configuration san iscsi initiators create testhost187
set alias=testhost187

configuration san iscsi initiators list
configuration san iscsi initiators groups create rac186187
set name=rac186187

shares select rac186187 #project must set readonly=false
lun rac186187
set volsize=500G
set targetgroup=rac186187
set initiatorgroup=rac186187

And if you want to create a single share under a project, here's the way:

shares select <project name> #to show current properties, run "show"
filesystem <new share name>
set quota=100G
set quota_snap=false
set reservation=50G
set reservation_snap=false
set root_permissions=777
set root_user=root
set root_group=root
cd /
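
Once created, the share can be mounted from a linux client over NFS. A quick sketch (the share's mountpoint follows the project mountpoint convention above; verify it with "get mountpoint" on the share):

[root@test-host ~]# mkdir -p /mnt/<new share name>
[root@test-host ~]# mount -t nfs test-zfs:<share mountpoint> /mnt/<new share name>  #e.g. test-zfs:/export/rac186187/<new share name>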


Suppose we know that there is one session logged in to a target; how can we add LUNs, or change the size of an existing LUN, on ZFS?

First, to change the size of existing LUN:

[root@testhost1 ~]# iscsiadm -m session
tcp: [2] <zfs ip>:3260,2 <target iqn>

Log on ZFS UI, go to Configuration, SAN, iSCSI, Targets, and search for the target iqn from the output above; you'll find the target and the target group it belongs to. Note down the target group name, e.g. RAC01, then go to Shares, LUNs, click on the LUN and change the size as needed.

To add new LUN to the host:

First, find the iscsi initiator name on host testhost1:

[root@testhost1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=<initiator iqn>

Log on ZFS UI, go to Configuration, SAN, iSCSI, Initiators, and search for that initiator iqn; you'll find the initiator and its initiator group. From here you can click the diskette icon and add a new LUN. Make sure to select the right target group you got from the previous step.

Categories: Hardware, NAS, Storage Tags: