Author Archive

use ldapmodify to change ldap userpassword

December 23rd, 2011 1 comment

Want to change your LDAP password (or more) with ldapmodify? Here are the steps:

1. Use ldapsearch to take a glance at the account that needs a password reset:

[root@doxer ~]# ldapsearch -LLL -b 'ou=people,dc=doxer,dc=org' -x -ZZ -H 'ldap://' -w 'password' -D 'cn=Manager,dc=doxer,dc=org' uid=liandy
dn: uid=liandy,ou=people,dc=doxer,dc=org
userPassword:: e1NTSEF9WGdafd3M4RjhuSSdadfdDVmTjAwN3B6cVlacjQ0N23/12/2011fag
shadowLastChange: 15347
gidNumber: 3000
uid: liandy
cn: liandy
homeDirectory: /home/liandy
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
uidNumber: 250142da
gecos: liandy
loginShell: /bin/ksh
shadowFlag: 0

NB: regarding -LLL here, you can refer to the ldapsearch man page. Here's the excerpt:

-L   Search results are displayed in LDAP Data Interchange Format detailed in ldif(5). A single -L restricts the output to LDIFv1. A second -L disables comments. A third -L disables printing of the LDIF version. The default is to use an extended version of LDIF.

2. Now let's use ldapmodify to change the password:

[root@doxer ~]# ldapmodify -H 'ldap://' -D 'cn=Manager,dc=test,dc=com' -w 'password' -f /tmp/liandy.ldif

This is the content of /tmp/liandy.ldif:
dn: uid=liandy,ou=people,dc=doxer,dc=org
changetype: modify
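The LDIF content above appears truncated in the post (a modify needs a replace clause and a value). A plausible complete version, where the replace target and the {SSHA} placeholder are my assumptions rather than the post's original content, could be produced like this:

```shell
# Hypothetical complete LDIF for the password change; the {SSHA} value
# below is a placeholder, not a real hash.
cat > /tmp/liandy.ldif <<'EOF'
dn: uid=liandy,ou=people,dc=doxer,dc=org
changetype: modify
replace: userPassword
userPassword: {SSHA}PlaceholderHash
EOF
cat /tmp/liandy.ldif
```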

3. To confirm:

[root@doxer ~]# ldapsearch -x -W -D 'cn=Manager,dc=test,dc=com' -h 'ldap://' -b 'ou=people,dc=doxer,dc=org' cn=liandy modifytimestamp modifiersname

Now you can test the work by logging on with the new password!

NB: for more about ldapmodify (password modification in one command, etc.), please refer to its man page.

typical netbackup exclude_list usage

December 20th, 2011 No comments

bash-2.03$ cat /usr/openv/netbackup/exclude_list


Take *.[Aa][Rr][Cc] for example: it will exclude all files whose suffix matches .arc, .Arc, .ARC, and so on.
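For illustration, here is a sketch of what such an exclude_list might contain (the entries are assumptions, not the post's actual file), plus a quick shell check that the *.[Aa][Rr][Cc] pattern matches as claimed:

```shell
# Hypothetical exclude_list contents (illustrative assumptions only):
cat <<'EOF'
/tmp
/var/tmp
core
*.[Aa][Rr][Cc]
EOF
# Demonstrate how the glob pattern matches case variants of .arc:
for f in redo01.arc backup.ARC notes.txt; do
  case "$f" in
    *.[Aa][Rr][Cc]) echo "$f excluded";;
    *)              echo "$f kept";;
  esac
done
```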


  1. An include list can only re-include files and directories that are listed in the exclude list, so that they are not excluded.
  2. There's also a thread about include_list & exclude_list here

Linux hostname domainname dnsdomainname nisdomainname ypdomainname

December 20th, 2011 No comments

Here's just an excerpt from online man page of "domainname":

hostname - show or set the system's host name
domainname - show or set the system's NIS/YP domain name
dnsdomainname - show the system's DNS domain name
nisdomainname - show or set system's NIS/YP domain name
ypdomainname - show or set the system's NIS/YP domain name
hostname will print the name of the system as returned by the gethostname(2) function.

domainname, nisdomainname, ypdomainname will print the name of the system as returned by the getdomainname(2) function. This is also known as the YP/NIS domain name of the system.

dnsdomainname will print the domain part of the FQDN (Fully Qualified Domain Name). The complete FQDN of the system is returned with hostname --fqdn.

Sometimes you may find a weird thing: you can use LDAP authentication to log on to a client, but you cannot sudo to root. In that case, run domainname to check whether it's set to (none). If it is, set the domain name using the domainname command.
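The check and fix described above can be sketched like this (doxer.org is an assumed domain name, and setting it requires root):

```shell
# Show the host name and the NIS/YP domain name; domainname printing
# "(none)" is the symptom described above.
hostname
domainname 2>/dev/null || true
# To set the domain name (as root), e.g.:
#   domainname doxer.org      # "doxer.org" is an assumed value
```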

perl modules install and uninstall and list – using cpan

November 24th, 2011 No comments

If you want to install perl module SOAP::Lite using cpan for example, here's the command line:

perl -MCPAN -e 'install SOAP::Lite'

To test whether the module has been installed or not, run this:

perl -MSOAP::Lite -e "print \"Module installed.\\n\";"

Also, what about uninstallation?

Under Windows:
If you have the ActiveState distro, go to the shell/command prompt and enter PPM. When PPM opens, type
"remove [package name]"; this will uninstall that particular package for you. Type "help remove" in PPM for more details.
Alternatively, if you know what files came with the package, just delete them. Be aware that some modules rely on other modules being installed, so make sure there are no dependencies before you remove packages.

Under Linux:
It depends on the module; delete the module's files from your library path. (The path might be different on your system; check it with: perl -e 'print join("\n", @INC);' )
Also see:

Generally there is little reason to remove a module, probably why CPAN doesn't provide this function.

Oracle views – Static Data Dictionary Views & Dynamic Performance Views

November 5th, 2011 1 comment

Views are customized presentations of data in one or more tables or other views. You can think of them as stored queries. Views do not actually contain data, but instead derive their data from the tables upon which they are based. These tables are referred to as the base tables of the view.

Similar to tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on a view actually affect the base tables of the view. Views can provide an additional level of security by restricting access to a predetermined set of rows and columns of a table. They can also hide data complexity and store complex queries.

Many important views are in the SYS schema. There are two types: static data dictionary views and dynamic performance views. Complete descriptions of the views in the SYS schema are in Oracle Database Reference.

Static Data Dictionary Views

The data dictionary views are called static views because they change infrequently, only when a change is made to the data dictionary. Examples of data dictionary changes include creating a new table or granting a privilege to a user.

Many data dictionary tables have three corresponding views:

  • A DBA_ view displays all relevant information in the entire database. DBA_ views are intended only for administrators. An example of a DBA_ view is DBA_TABLESPACES, which contains one row for each tablespace in the database.
  • An ALL_ view displays all the information accessible to the current user, including information from the schema of the current user, and information from objects in other schemas, if the current user has access to those objects through privileges or roles. An example of an ALL_ view is ALL_TABLES, which contains one row for every table for which the user has object privileges.
  • A USER_ view displays all the information from the schema of the current user. No special privileges are required to query these views. An example of a USER_ view is USER_TABLES, which contains one row for every table owned by the user.

The columns in the DBA_, ALL_, and USER_ views are usually nearly identical.

Dynamic Performance Views

Dynamic performance views monitor ongoing database activity. They are available only to administrators. The names of dynamic performance views start with the characters V$. For this reason, these views are often referred to as V$ views.

An example of a V$ view is V$SGA, which returns the current sizes of various System Global Area (SGA) memory components.
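To make the view families concrete, here are a few illustrative queries; the view names come from the text above, while the exact statements are a sketch of typical usage, not from the original post:

```sql
-- Illustrative queries against the documented views:
SELECT tablespace_name FROM DBA_TABLESPACES;  -- one row per tablespace (administrators)
SELECT COUNT(*) FROM ALL_TABLES;              -- tables the current user can access
SELECT table_name FROM USER_TABLES;           -- tables owned by the current user
SELECT * FROM V$SGA;                          -- current SGA component sizes
```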

Partitioned Tables and Indexes & Compressed Tables of oracle database

November 5th, 2011 No comments
1.Partitioned Tables and Indexes

You can partition tables and indexes. Partitioning helps to support very large tables and indexes by enabling you to divide the tables and indexes into smaller and more manageable pieces called partitions. SQL queries and DML statements do not have to be modified to access partitioned tables and indexes. Partitioning is transparent to the application.

After partitions are defined, certain operations become more efficient. For example, for some queries, the database can generate query results by accessing only a subset of partitions, rather than the entire table. This technique (called partition pruning) can provide order-of-magnitude gains in improved performance. In addition, data management operations can take place at the partition level, rather than on the entire table. This results in reduced times for operations such as data loads; index creation and rebuilding; and backup and recovery.

Each partition can be stored in its own tablespace, independent of other partitions. Because different tablespaces can be on different disks, this provides a table structure that can be better tuned for availability and performance. Storing partitions in different tablespaces on separate disks can also optimize available storage usage, because frequently accessed data can be placed on high-performance disks, and infrequently retrieved data can be placed on less expensive storage.

Partitioning is useful for many types of applications that manage large volumes of data. Online transaction processing (OLTP) systems often benefit from improvements in manageability and availability, while data warehousing systems benefit from increased performance and manageability.

As with tables, you can partition an index. In most situations, it is useful to partition an index when the associated table is partitioned, and to partition the index using the same partitioning scheme as the table. (For example, if the table is range-partitioned by sales date, then you create an index on sales date and partition the index using the same ranges as the table partitions.) This is known as a local partitioned index. However, you do not have to partition an index using the same partitioning scheme as its table. You can also create a nonpartitioned, or global, index on a partitioned table.
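The sales-date example above can be sketched in DDL; the table, column names, and range boundaries here are assumptions for illustration:

```sql
-- Range-partitioned table by sales date, with a local partitioned index:
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2011_q3 VALUES LESS THAN (TO_DATE('2011-10-01','YYYY-MM-DD')),
  PARTITION p2011_q4 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD'))
);

-- LOCAL makes the index use the same partitioning scheme as the table:
CREATE INDEX sales_date_idx ON sales (sale_date) LOCAL;
```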

2.Compressed Tables

Table Compression is suitable for both OLTP applications and data warehousing applications. Compressed tables require less disk storage and result in improved query performance due to reduced I/O and buffer cache requirements. Compression is transparent to applications and incurs minimal overhead during bulk loading or regular DML operations such as INSERT, UPDATE or DELETE.
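A minimal sketch, with assumed names, of enabling compression at table-creation time:

```sql
-- The COMPRESS clause enables table compression when the table is created:
CREATE TABLE sales_archive (
  sale_id   NUMBER,
  sale_date DATE
) COMPRESS;
```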

Extending tmpfs’ed /tmp on Solaris 10(and linux) without reboot

November 3rd, 2011 No comments

Thanks to Eugene.

If you need to extend /tmp that is using tmpfs on Solaris 10 global zone (works with zones too but needs adjustments) and don't want to undertake a reboot, here's a tried working solution.


echo "$(echo $(echo ::fsinfo | mdb -k | grep /tmp | head -1 | awk '{print $1}')::print vfs_t vfs_data \| ::print -ta struct tmount tm_anonmax | mdb -k | awk '{print $1}')/Z 0x20000" | mdb -kw

Note the 0x20000. This number means the new size will be 1GB. It is calculated like this: as an example, 0x10000 in hex is 65536, or 64K. The size is set in pages, and each page is 8KB, so the resulting allocation is 64K pages * 8KB = 512MB. 0x20000 is 1GB, 0x40000 is 2GB, etc.
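The arithmetic above can be spot-checked in the shell (the value passed to mdb is a page count; each page is 8KB):

```shell
# 0x20000 pages * 8KB per page = 1024MB, i.e. 1GB.
pages=$((0x20000))                 # 131072 pages
echo "$(( pages * 8 / 1024 ))MB"
```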

If the server has zones, you will see more than one entry in ::fsinfo, and you need to feed the exact struct address to mdb. This way you can change the /tmp size for individual zones, but this can only be done from the global zone.

Same approach can probably be applied to older Solaris releases but will definitely need adjustments. Oh, and in case you care, on Linux it's as simple as "mount -o remount,size=1G /tmp" :)


You may try "mount -t tmpfs -o size=7g shmfs /dev/shm" or "mount -t tmpfs -o size=7g tmpfs /dev/shm" on Linux platform.

Categories: IT Architecture, Kernel, Systems, Unix Tags:

why BST/DST(British Summer Time, Daylight Saving Time) achieves the goal of saving energy

October 31st, 2011 No comments

Greenwich Mean Time

Tonight (30th October 2011), we'll welcome GMT (Greenwich Mean Time) and wave goodbye to BST (British Summer Time, Daylight Saving Time) at 01:59:59 on 30th October 2011.

Why does BST/DST (British Summer Time, Daylight Saving Time) achieve the goal of saving energy? You may think: I'm still going to work 8 hours a day, 365 days a year (maybe too much? :D). OK, here's what I think.

As we can all experience, the sun rises earlier in summer than in autumn. So when you wake up in the morning, you don't need to turn on the light, and that saves energy. As another, extreme, example: one man sleeps all day and plays games all night, while another sleeps all night and works all day; which one do you think is more environmentally friendly? Actually, this is why Benjamin Franklin, the man on the $100 bill, suggested DST (Daylight Saving Time).

Benjamin Franklin

Categories: Misc Tags:

List of build automation software

October 29th, 2011 No comments

Things like make/ant/hudson etc. See this url for details:

Categories: IT Architecture Tags:

vcs architecture overview

October 28th, 2011 No comments
(image: vcs architecture overview)

Possible issue with latest version (6.8) of Oracle Explorer(vulnerability)

October 24th, 2011 No comments

I have been made aware of a potential issue with the latest version of Explorer (6.8). The issue is that Explorer calls a command (pcitool) that can cause a system to crash.
Check the version currently on your OS:
# cat /etc/opt/SUNWexplo/default/explorer|grep EXP_DEF_VERSION


Alternatively, if you want to get it in a script:

VER=$(nawk  -F= '/EXP_DEF_VERSION/ {print $NF}' /etc/opt/SUNWexplo/default/explorer |sed 's/\"//g')

There are a number of workarounds...
1) downgrade explorer to 6.7

2) comment out line 674 ('get_cmd "/usr/sbin/pcitool -v" sysconfig/pcitool-v') by adding '#' at the beginning of it

3) make /usr/sbin/pcitool not executable on these systems (either remove it or change permissions)
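Workaround 2 can be sketched generically; this demo comments out a line in a temp copy, since the post doesn't give the path of the Explorer script that contains line 674:

```shell
# Prepend '#' to a given line of a file (GNU sed). Line 2 here stands
# in for line 674 of the real Explorer script.
f=$(mktemp)
printf 'some earlier line\nget_cmd "/usr/sbin/pcitool -v" sysconfig/pcitool-v\n' > "$f"
sed -i '2s/^/#/' "$f"
cat "$f"
rm -f "$f"
```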


For solaris explorer related concepts/download etc, please refer to

relationship between dmx srdf bcv

October 24th, 2011 No comments

The R1 BCV is related to the R1 DEV; the R1 is SRDF'd to the R2; and the R2 is related to the R2 BCV. There is no direct relationship between the R1 DEV and the R2 BCV.
The symdgs, as you know, are contained in a local host-based config database. They group together the DEVs you want to run commands on at the same time, so you can fail them over as a group or sync/split the BCVs as a group without having to list all the DEVs.





Because you only talk to the local array, you specify the DEVs in your symdg as the ones that are local to you, via symld. When symdgs are first created they are type R1 or R2, depending on whether you're going to add R1 or R2 devices. This will automatically update when you fail over. The devices you add to your symdg are always the ones on your local array. The RDF relationship does not need to be specified in the symdg; it is inherent to the device, as it is a one-to-one relationship, so when you run a symrdf query it will find out the paired device from the array.
BCVs are different because you can have more than one BCV attached to a device.
The -rdf option in the symbcv command says BCV device <DEV> is remote, i.e. is a BCV device on the remote array.
When you're running your commands, picture yourself (well, you don't have to, but I do) standing locally on the host you're running the commands on, and think about which DEVs appear local from there and which ones are remote.

Categories: Hardware, Storage Tags:

Three types of 301 rewrite/redirect for apache httpd server

October 4th, 2011 No comments

Type 1: Exact URL without any query parameters
Type 2: Exact URL with any query parameters
Type 3: Exact URL plus query parameters and/or sub-pages (everything after this included)

For Type 1:
RewriteRule ^/portal/site/doxerorg/mydoxer/anyone$ [L,R=301,NC]
For Type 2:
RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^/portal/site/doxerorg/mydoxer/mydoxer/anyone/register$ [R=301,L,NC]
For Type 3:
RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^/portal/site/doxerorg/mydoxer/article [R=301,L,NC]

1. In the destination URL, add '?' to the end if you don't want the query string to be auto-appended to the destination URL.
2. The [R] flag specifies a redirect instead of the usual rewrite, [L] makes this the last rewrite rule to apply, and [NC] means case-insensitive.
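For completeness, here is what a Type 1 rule might look like with a destination filled in; the rules above lost their destination URLs, so http://example.com/new-page is an assumed placeholder, and the trailing '?' illustrates note 1 above:

```apache
# Hypothetical complete Type 1 rule; the destination URL is a placeholder.
RewriteRule ^/portal/site/doxerorg/mydoxer/anyone$ http://example.com/new-page? [L,R=301,NC]
```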

Categories: IT Architecture Tags:

Error extras/ata_id/ata_id.c:42:23: fatal error: linux/bsg.h: No such file or directory when compile LFS udev-166

September 21st, 2011 1 comment

For "Linux From Scratch - Version 6.8", in part "6.60. Udev-166", after configuring udev we need to compile the package using make, but then I met an error message like this:

"extras/ata_id/ata_id.c:42:23: fatal error: linux/bsg.h: No such file or directory"

Checking line 42 of ata_id.c under extras/ata_id in the udev-166 source, I can see:

"#include <linux/bsg.h>"

As there's no bsg.h under $LFS/usr/include/linux, I was sure that this error was caused by a missing C header file. Checking with:
root:/sources/udev-166# /lib/
I can see that Glibc was 2.13, and GCC was 4.5.2:

GNU C Library stable release version 2.13, by Roland McGrath et al.
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Compiled by GNU CC version 4.5.2.
Compiled on a Linux 2.6.22 system on 2011-09-04.
Available extensions:
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
For bug reporting instructions, please see:

After some searching on Google, I concluded that this was caused by the GCC version. I was not going to rebuild gcc just for this, so I fetched the header file and put it under $LFS/usr/include/linux/bsg.h. You can download the header file and do the same.

After copy & paste & chmod, I ran make again, and it succeeded.
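The fix above can be sketched under assumed paths ($LFS defaulting to /mnt/lfs); this sketch only prints the copy command rather than running it:

```shell
# If the host's kernel headers provide bsg.h, it can be copied into the
# LFS include tree; otherwise fetch it from the kernel source.
LFS=${LFS:-/mnt/lfs}    # assumed LFS mount point
if [ -f /usr/include/linux/bsg.h ]; then
  echo "cp /usr/include/linux/bsg.h $LFS/usr/include/linux/bsg.h"
else
  echo "bsg.h not in host headers; download it from the kernel source"
fi
```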

Categories: IT Architecture, Kernel, Linux, Systems Tags:

Relationship between san’s HA DA FA RA WWN ZONE

September 15th, 2011 2 comments

Here's the image(Thanks to Peter G):


EMC Symmetrix connectivity to a host in a switched fabric logical and physical architecture

Here's another image of SAN architecture of EMC symmetrix arrays:







Or you may want to download the doc one:
Logicalandphysicalsanconnectivity.doc (1)

Categories: Hardware, Storage Tags:

Want your ldap password never expired? Here goes the howto

September 6th, 2011 2 comments

First, let's check when your password will expire using ldapsearch:
root on testserver:/tmp # ldapsearch -D cn="Directory Manager" -h -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime
Enter bind password:
version: 1
dn: uid=liandy,ou=people,dc=testserver,dc=com
passwordexpirationtime: 20111005230540Z

ldapsearch -D "cn='Directory Manager'" -h -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime #this should work also
Now, let's create a file named passwd.dn with content:
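The file's content didn't survive in the post. Here is a plausible sketch, assuming the directory server lets you replace the passwordexpirationtime attribute directly (the timestamp matches the verification output later in the post):

```ldif
dn: uid=liandy,ou=people,dc=testserver,dc=com
changetype: modify
replace: passwordexpirationtime
passwordexpirationtime: 20120612135450Z
```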
And the last step is to change the expiration time using ldapmodify:
root on testserver:/tmp # ldapmodify -D cn="Directory Manager" -h -f passwd.dn
Enter bind password:
modifying entry uid=liandy,ou=people,dc=testserver,dc=com
Those are all the steps needed to change the LDAP password expiration time. To verify it has taken effect, run ldapsearch to show the expiration time, just as in the first step:
root on testserver:/tmp # ldapsearch -D cn="Directory Manager" -h -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime
Enter bind password:
version: 1
dn: uid=liandy,ou=people,dc=testserver,dc=com
passwordexpirationtime: 20120612135450Z
So you can see the LDAP expiration time has been extended to 20120612.
If you want to change the password for liandy, do the following:
Create a file named passwd2.dn with content:
In passwd2.dn:
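The content is again missing from the post; a plausible sketch, where the attribute layout follows the previous example and the password value is a placeholder assumption:

```ldif
dn: uid=liandy,ou=people,dc=testserver,dc=com
changetype: modify
replace: userpassword
userpassword: YourNewPassword
```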

Then run ldapmodify to modify the password:
ldapmodify -D cn="Directory Manager" -h -f passwd2.dn

If you want to modify directory manager's password, here goes the step:
Create a file passwd3.dn with content:
dn: cn=config
changetype: modify
replace: nsslapd-rootpw
nsslapd-rootpw: EnterYourPassword

Then run ldapmodify to change the password:
ldapmodify -D "cn=directory manager" -h -f passwd3.dn

If you forget the password for the directory manager, you first need to find dse.ldif under ldap/slapd-Portal1/config, then encrypt your new password, then modify nsslapd-rootpw.

Categories: IT Architecture, Linux, Systems, Unix Tags:

hp openview monitoring solution – enable disable policies

August 18th, 2011 2 comments

This document is part of the "ESM – How To" guides and gives instructions on how to disable and enable monitoring policies on servers managed by OpenView for Unix (OVOU), OpenView for Windows (OVOW) and Opsview, for blackout or maintenance purposes.

The process for disabling/enabling monitoring may differ depending on which solution is used for monitoring the servers involved in the DR test.

This document lists procedures to disable and enable monitoring in the different monitoring solutions.

1. Monitoring changes to servers monitored in OVOU (Unix/Linux)
1.1. Disabling monitoring templates
Disable the monitoring templates on the monitored server.
Log on to the server and execute the following command as the root user:
/opt/OV/bin/opctemplate -d -all

The above command will disable all monitoring policies on the server.

Now disable OpenView agent heartbeat polling from the OpenView servers (server side):
Execute the following command to disable OVO agent heartbeat polling:
/opt/OV/bin/OpC/opchbp -stop <name of server>

1.2. Enabling monitoring templates
Enable the monitoring templates on the monitored server.
Log on to the server and execute the following command as the root user:
/opt/OV/bin/opctemplate -e -all

The above command will enable all monitoring policies on the server.

Now enable OpenView agent heartbeat polling from the OpenView servers (server side):
Execute the following command to enable OVO agent heartbeat polling:
/opt/OV/bin/OpC/opchbp -start <name of server>

2.Disabling monitoring for servers monitored in OVOW(Windows)
Firstly, access the Web Console of OV:


2.1 Disabling policy(s)
Select the Policies icon in the left-hand panel, then expand the Nodes by selecting the plus next to the nodes in the center panel. Once you have located the server on which you want to disable policies, select it, and the policy inventory is shown in the right-hand panel. Select the policies to be disabled using the check boxes next to the policy names; to disable all the policies, select the check box next to the name column, which selects all of them. Then select the Disable tab and the policies will be disabled; the display will show "Active and pending" until all the policies have been disabled. Use the Refresh tab to update the screen.

2.2 Enabling policy(s)
Please refer to 2.1 Disabling policy(s)

3. Monitoring changes for servers monitored in Opsview (Nagios)
Log on to Opsview through the web interface, then search for and locate the host you want to operate on. Click the grey arrow on the right side of the node and choose "Schedule downtime". Now you can choose the start and end time of the downtime for this server.

obu firmware patching for sun T2000/T5120/T5220 servers OBP patching

August 17th, 2011 No comments

Don't know why there are some formatting errors with this article; you can download the PDF version here: obu firmware patching for sun T2000-T5120-T5220 servers OBP patching

You can download Sun_System_Firmware-6_7_11-Sun_Fire_T2000 here (for the Sun Fire T2000)

Categories: Hardware, Servers Tags:

hpux san storage configure howto steps for veritas vxvm

August 15th, 2011 No comments

1. Run ioscan -fnkC disk and dump the output to a file.

2. Run ioscan with no options and dump the output to a file.

3. Run ioscan -fnkC disk and check for disk devices that are not claimed.

4. Run insf -ev to allocate the new entries to /dev/dsk/xxxxx.

5. Use vxdisksetup / vxdiskadm to set up the disks.

6. Run vxdctl enable.

7. In VxVM, add the disks to a disk group, etc.

Categories: Hardware, Storage Tags:

Emcgrab download url

July 27th, 2011 1 comment

Just go to the download page to select yours and download (Solaris, Linux, AIX etc.). The latest version is emcgrab-4.4.4.


emcgrab is also known as the Remediation HEAT report. EMC grab generates a file that EMC or the storage team might use for diagnostics and for monitoring of storage usage by the client and possible issues; it also dumps logs/config into this file. This procedure has proved to be safe, but it needs to be done out of hours if that's possible.

correctable ecc event detected by cpu0/1/3/4

July 20th, 2011 No comments

If you receive these kinds of alerts on Solaris, it means your server has a memory DIMM issue. Please check with hrdconf/madmin (HP-UX) or prtconf (Sun Solaris) to see the error message.

For more information about ECC memory, you can refer to the following article:

convert squid access.log time to human-readable format

July 13th, 2011 No comments

If you're looking at squid access.log, you'll find the first column of it is the time, for example:
1310313302.640 829 TCP_MISS/200 104 CONNECT - DIRECT/ -
1310313303.484 1845 TCP_MISS/200 104 CONNECT - DIRECT/ -
This is not very human-readable. Now we can use perl to convert the squid access.log time to a human-readable format with just a one-line command:

cat /var/log/squid/access.log |perl -nwe's/^(\d+)/localtime($1)/e; print'

After this, it'll look like this:
Sun Jul 10 09:55:03 2011.484 1845 TCP_MISS/200 104 CONNECT - DIRECT/ -
Sun Jul 10 09:55:07 2011.146 355 TCP_MISS/200 1176 GET - DIRECT/ text/html
Hey, more human-readable, right?

Resolved – httpd: fatal: open failed: No such file or directory

July 9th, 2011 No comments

When I tried to start up IBM HTTP Server on Solaris, I met this error message: httpd: fatal: open failed: No such file or directory
The error was caused by a shared library that could not be found via the current library path environment variable. We should add /apps/IBMHTTPD/Installs/IHS61-01/lib to LD_LIBRARY_PATH to make the library be seen by ld.
Here goes the resolution:
#export LD_LIBRARY_PATH=/apps/IBMHTTPD/Installs/IHS61-01/lib:$LD_LIBRARY_PATH #set environment variable
#/apps/IBMHTTPD/Installs/IHS61-01/bin/httpd -d /apps/IBMHTTPD/Installs/IHS61-01 -f /apps/IBMHTTPD/Installs/IHS61-01/conf/ihs.conf -k restart #restart IHS
#/usr/ucb/ps auxww|grep -i ibmhttpd #check the result


Here's more about the loader (from the book <Optimizing Linux Performance: A Hands-On Guide to Linux Performance Tools>):

When a dynamically linked application is executed, the Linux loader, ld.so, loads all the application's libraries and connects the symbols that the application uses with the functions the libraries provide. Because different libraries were originally linked at different and possibly overlapping places in memory, the linker needs to sort through all the symbols and make sure that each lives at a different place in memory. When a symbol is moved from one virtual address to another, this is called a relocation. It takes time for the loader to do this, and it is much better if it does not need to be done at all. The prelink application aims to do that by rearranging the system libraries of the entire system so that they do not overlap. An application with a high number of relocations may not have been prelinked.

The Linux loader usually runs without any intervention from the user, and by just executing a dynamic program, it is run automatically. Although the execution of the loader is hidden from the user, it still takes time to run and can potentially slow down an application’s startup time. When you ask for loader statistics, the loader shows the amount of work it is doing and enables you to figure out whether it is a bottleneck.

The loader is invisibly run for every Linux application that uses shared libraries. By setting the appropriate environment variables, we can ask it to dump information about its execution. The following invocation influences its execution:
env LD_DEBUG=statistics,help LD_DEBUG_OUTPUT=filename <command>
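As a quick way to see what the loader can report, LD_DEBUG=help makes it list its available debug options (including the statistics mode used above) without doing anything else; the output goes to stderr:

```shell
# List the glibc loader's debug options and exit.
env LD_DEBUG=help /bin/true 2>&1
```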

How to check BerkeleyDB version info

June 22nd, 2011 No comments

How to check BerkeleyDB version info? Just run this command:

#grep DB_VERSION_STRING /usr/include/db4/db.h

#define DB_VERSION_STRING       "Sleepycat Software: Berkeley DB 4.3.29: (September 19, 2009)"

Then your BerkeleyDB version is 4.3.29.

If there's no /usr/include/db4 directory, you may check to see whether there's /usr/include/db3 or even /usr/include/db2 directory in your server. And run the respective command with db4 substituted by db3 or db2.

mkfile/dd to create a file with specific size

June 21st, 2011 3 comments

Now assume you want to create a file with 10M space:
Under Solaris:
root@beisoltest02 / # mkfile 10m disk1.img
root@beisoltest02 / # ls -lh disk1.img
-rw------T   1 root     root         10M Jun 22 00:41 disk1.img

Under Linux:
[root@beivcs02 downloads]#  dd if=/dev/zero of=disk1.img bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.01096 seconds, 957 MB/s
[root@beivcs02 downloads]# ls -lh disk1.img
-rw-r--r-- 1 root root 10M Jun 22 00:37 disk1.img


There are many other options you can use to control dd behavior, such as cache. You can refer to here and here.
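One such option worth knowing alongside the examples above: with count=0 and seek, dd creates a sparse file of the same apparent size almost instantly:

```shell
# Sparse file: apparent size 10M, but (almost) no blocks allocated.
dd if=/dev/zero of=/tmp/sparse.img bs=1024k count=0 seek=10 2>/dev/null
ls -lh /tmp/sparse.img   # apparent size
du -h /tmp/sparse.img    # actual blocks used
rm -f /tmp/sparse.img
```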

Categories: IT Architecture, Linux, Systems Tags:

Nagios Check_nrpe/check_disk with error message DISK CRITICAL – /apps/yourxxx is not accessible: Permission denied

June 21st, 2011 1 comment

This was sometimes because the user under which nagios runs by had no read permission to the file systems check_disk is going to check.

For example, if you received alert:

DISK CRITICAL - /apps/vcc/logs/way is not accessible: Permission denied

You can then log on to your server and, as root, run:

/usr/local/nagios/libexec/check_disk -p /apps/vcc/logs/way

You may see:

DISK OK - free space: /apps/vcc/logs/way 823 MB (21% inode=90%);| /=2938MB;;3966;0;3966

But when you run this under user nagios, you may see DISK CRITICAL again.


The solution: grant read permission to the filesystem/directory that had the problem.


DNS flush on linux/windows/mac

June 17th, 2011 No comments
Here're the ways to flush the DNS cache on linux/windows/mac:

On Linux:
rndc flush
/etc/init.d/nscd restart

On Windows:
ipconfig /flushdns

On Mac:
sudo dscacheutil -flushcache

You can install a DNS cache plug-in to automatically flush the DNS cache for you if you have Firefox installed.

Categories: IT Architecture, Linux, Systems Tags:

How to resize/shrink lvm home/root partition/filesystem dynamically

June 8th, 2011 2 comments

When you're trying to extend the root file system in an LVM environment, here are the steps:

1. Extend the root volume:

#lvextend -L500G /dev/VolGroup00/apps-yourapp

2. Grow the file system:

#resize2fs /dev/VolGroup00/apps-yourapp 500G

Shrinking can't be done that easily and requires an umount. To shrink the FS, do the following:

umount <logical-volume-device>
e2fsck -f <logical-volume-device>
resize2fs <logical-volume-device> <new-size-minus-50MB>
lvreduce -L <new-size> <logical-volume-device>

This procedure is for normal (not root) file systems. But what should we do when we want to shrink the root/home partition/file system?

You need to go into Linux rescue mode using a bootable device (such as the first CD of your Linux distro): type linux rescue at the boot prompt, and when an alert pops up asking whether to mount your system under /mnt/sysimage, select Skip.

When you're in the rescue-mode shell, do the following steps:

1. lvm vgchange -a y

2. e2fsck -f /dev/VolGroup00/LogVol00

3. resize2fs -f /dev/VolGroup00/LogVol00 500G

4. lvm lvreduce -L500G /dev/VolGroup00/LogVol00

Then reboot your system. After it starts up, run df -h to check the result, and use vgs to see VSize and VFree.




Interesting tty/pty/pts experiment

June 8th, 2011 No comments

Now, open two tabs on the same server using xshell, and in each tab (session), run tty:

root@testserver# tty
/dev/pts/7

root@testserver# tty
/dev/pts/25

First experiment - echo a string between ttys

From /dev/pts/7, run:

root@testserver# echo hi>/dev/pts/25

Then, in the other tab (session), you're sure to see:

root@testserver# hi

To be continued...

Grow/shrink vxfs file system(or volume) dynamically using vxresize

June 5th, 2011 No comments

I've already used vxassist to create a disk group, make a file system, and mount it to the OS, and that partition is now in use. How can I grow/shrink the size of the VxVM file system dynamically?

Three steps:

1. Which disk group does the file system use?

In my scenario, I have /user, which is the mount point of volume user_vol, and that volume belongs to the andy_li disk group. As I created them, I have a clear picture of this. But how can you know which disk group a file system/volume belongs to?

#df -h /user

/dev/vx/dsk/andy_li/user_vol                      1.0G   18M  944M   2% /user

Now you can see that the /user file system belongs to the andy_li disk group.

2. Now let's check how much space is left in the disk group that we can use for growing /user:

#vxdg -g andy_li free

GROUP        DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS

andy_li      andy_li01    sdb          sdb          3121152   999168 -

999168 blocks, that's about 500MB.
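A quick check of the "about 500MB" figure, assuming LENGTH is in 512-byte sectors:

```shell
# 999168 sectors * 512 bytes = ~488MB, i.e. roughly 500MB.
echo "$(( 999168 * 512 / 1024 / 1024 ))MB"
```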

3. The last and most important thing is to grow the file system:

/etc/vx/bin/vxresize -b -g andy_li user_vol +999168 alloc=andy_li01

OK, after this operation, let's check the file system's size again:

#df -h /user/

Filesystem            Size  Used Avail Use% Mounted on

/dev/vx/dsk/andy_li/user_vol                      1.5G 18M  1.4G   2% /user

That's all. And vice versa: you can use minus (-) instead of plus (+) to shrink the file system.

Categories: Hardware, Storage Tags: