
Partitioned Tables and Indexes & Compressed Tables in Oracle Database

November 5th, 2011 No comments
1.Partitioned Tables and Indexes

You can partition tables and indexes. Partitioning helps to support very large tables and indexes by enabling you to divide the tables and indexes into smaller and more manageable pieces called partitions. SQL queries and DML statements do not have to be modified to access partitioned tables and indexes. Partitioning is transparent to the application.

After partitions are defined, certain operations become more efficient. For example, for some queries, the database can generate query results by accessing only a subset of partitions, rather than the entire table. This technique (called partition pruning) can provide order-of-magnitude gains in improved performance. In addition, data management operations can take place at the partition level, rather than on the entire table. This results in reduced times for operations such as data loads; index creation and rebuilding; and backup and recovery.

Each partition can be stored in its own tablespace, independent of other partitions. Because different tablespaces can be on different disks, this provides a table structure that can be better tuned for availability and performance. Storing partitions in different tablespaces on separate disks can also optimize available storage usage, because frequently accessed data can be placed on high-performance disks, and infrequently retrieved data can be placed on less expensive storage.

Partitioning is useful for many types of applications that manage large volumes of data. Online transaction processing (OLTP) systems often benefit from improvements in manageability and availability, while data warehousing systems benefit from increased performance and manageability.

As with tables, you can partition an index. In most situations, it is useful to partition an index when the associated table is partitioned, and to partition the index using the same partitioning scheme as the table. (For example, if the table is range-partitioned by sales date, then you create an index on sales date and partition the index using the same ranges as the table partitions.) This is known as a local partitioned index. However, you do not have to partition an index using the same partitioning scheme as its table. You can also create a nonpartitioned, or global, index on a partitioned table.

2.Compressed Tables

Table Compression is suitable for both OLTP applications and data warehousing applications. Compressed tables require less disk storage and result in improved query performance due to reduced I/O and buffer cache requirements. Compression is transparent to applications and incurs minimal overhead during bulk loading or regular DML operations such as INSERT, UPDATE or DELETE.

Extending tmpfs'ed /tmp on Solaris 10 (and Linux) without reboot

November 3rd, 2011 No comments

Thanks to Eugene.

If you need to extend a /tmp that is using tmpfs on a Solaris 10 global zone (it works with zones too but needs adjustments) and don't want to undertake a reboot, here's a tried and working solution.

PLEASE BE CAREFUL, ONE ERROR HERE WILL KILL THE LIVE KERNEL!

echo "$(echo $(echo ::fsinfo | mdb -k | grep /tmp | head -1 | awk '{print $1}')::print vfs_t vfs_data \| ::print -ta struct tmount tm_anonmax | mdb -k | awk '{print $1}')/Z 0x20000" | mdb -kw

Note the 0x20000. This number means the new size will be 1GB. It is calculated like this: as an example, 0x10000 in hex is 65536, or 64k. The size is set in pages, and each page is 8k, so the resulting allocation size is 64k * 8k = 512m. Likewise, 0x20000 is 1GB, 0x40000 is 2GB, etc.
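The page arithmetic above is easy to sanity-check in any shell (8 KB pages are assumed here, as in the post; verify with `pagesize` on your box):

```shell
# 0x20000 pages * 8 KB per page = 1 GB
pages=0x20000
echo "$(( pages * 8192 / 1024 / 1024 )) MB"   # prints "1024 MB"
```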

If the server has zones, you will see more than one entry in ::fsinfo, and you need to feed the exact struct address to mdb. This way you can change the /tmp size for individual zones, but this can only be done from the global zone.

The same approach can probably be applied to older Solaris releases, but it will definitely need adjustments. Oh, and in case you care, on Linux it's as simple as "mount -o remount,size=1G /tmp" :)

 PS:

You may try "mount -t tmpfs -o size=7g shmfs /dev/shm" or "mount -t tmpfs -o size=7g tmpfs /dev/shm" on Linux platform.

Categories: IT Architecture, Kernel, Systems, Unix Tags:

why BST/DST(British Summer Time, Daylight Saving Time) achieves the goal of saving energy

October 31st, 2011 No comments

Greenwich Mean Time

Tonight (30th October 2011), we'll welcome GMT (Greenwich Mean Time) and wave goodbye to BST (British Summer Time, Daylight Saving Time) at 01:59:59.

Why does BST/DST (British Summer Time, Daylight Saving Time) achieve the goal of saving energy? As you may think, I'm still going to work 8 hours a day and 365 days a year (maybe too much? :D). OK, here's what I think.

As we can all experience, the sun rises earlier in summer than in autumn. So when you wake up in the morning, you don't need to turn on the light, and that does save energy. For another, extreme, example: one man sleeps all day and plays games all night, while another sleeps all night and works all day; which one do you think is more environmentally friendly? This is actually why Benjamin Franklin, the man on the $100 bill, suggested DST (Daylight Saving Time).

Benjamin Franklin

Categories: Misc Tags:

List of build automation software

October 29th, 2011 No comments

Things like make/ant/hudson etc. See this url for details: http://en.wikipedia.org/wiki/List_of_build_automation_software

Categories: IT Architecture Tags:

vcs architecture overview

October 28th, 2011 No comments
vcs architecture overview

Possible issue with latest version (6.8) of Oracle Explorer(vulnerability)

October 24th, 2011 No comments

I have been made aware of a potential issue with the latest version of Explorer (6.8). The issue is caused by Explorer calling a command (pcitool) that can cause a system to crash.
Check the version currently on your OS:
# cat /etc/opt/SUNWexplo/default/explorer|grep EXP_DEF_VERSION

EXP_DEF_VERSION="6.8"

Alternatively, if you want to get it in a script:

VER=$(nawk  -F= '/EXP_DEF_VERSION/ {print $NF}' /etc/opt/SUNWexplo/default/explorer |sed 's/\"//g')
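If nawk isn't around, a sed version works too. Here it's demonstrated against a mock copy of the file (the sample contents below are an assumption, mirroring the output shown above):

```shell
# Build a mock explorer defaults file, then extract the version string:
printf 'EXP_DEF_VERSION="6.8"\n' > /tmp/explorer.sample
VER=$(sed -n 's/^EXP_DEF_VERSION="\(.*\)"$/\1/p' /tmp/explorer.sample)
echo "$VER"   # prints 6.8
```

Point the sed command at /etc/opt/SUNWexplo/default/explorer on a real system.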

There are a number of workarounds...
-----------------------------------------------------------------
1) downgrade explorer to 6.7

2) comment out line 674 ('get_cmd "/usr/sbin/pcitool -v" sysconfig/pcitool-v') by adding '#' to the beginning of it

3) make /usr/sbin/pcitool not executable on these systems (either remove it or change permissions)

PS:

For solaris explorer related concepts/download etc, please refer to http://blogs.oracle.com/PJ/entry/solaris_10_explorer_data_collection

relationship between dmx srdf bcv

October 24th, 2011 No comments

The R1 BCV is related to the R1 DEV; the R1 is SRDF'd to the R2; and the R2 is related to the R2 BCV. There is no direct relationship between the R1 DEV and the R2 BCV.
The symdgs, as you know, are contained in a local host-based config database. They group together the devs you want to run commands on at the same time, so you can fail them over as a group, or sync/split the BCVs as a group, without having to list all the devs.

DMX_SRDF_SAN_VCS

DMX_SRDF_SAN_VCS

R1BCV_R1DEV_R2BCV_R2DEV

R1BCV_R1DEV_R2BCV_R2DEV

Because you only talk to the local array, you specify the DEVs in your symdg, via symld, as the ones that are local to you. When symdgs are first created they are type R1 or R2, depending on whether you're going to add R1 or R2 devices. This updates automatically when you fail over. The devices you add to your symdg are always the ones on your local array. The RDF relationship does not need to be specified in the symdg; it is inherent to the device, as it is a 1-to-1 relationship, so when you run a symrdf query it will find the paired device from the array.
BCVs are different, because you can have more than one BCV attached to a device.
The -rdf flag in the symbcv command says BCV device <DEV> is remote, i.e. is a BCV device on the remote array.
When you're running your commands, you have to picture yourself (well, you don't have to, but I do) standing locally on the host you're running the commands on, and think about which devs appear local from there and which ones are remote.

Categories: Hardware, Storage Tags:

Three types of 301 rewrite/redirect for apache httpd server

October 4th, 2011 No comments

Type 1:Exact URL without any query parameters:
www.doxer.org/portal/site/doxerorg/mydoxer/anyone
Type 2:Exact URL with any query parameters
www.doxer.org/portal/site/doxerorg/mydoxer/anyone?n=
Type 3:Exact URL plus query parameters and/or sub pages
www.doxer.org/portal/site/doxerorg/mydoxer/anyone / EVERYTHING AFTER THIS INCLUDED

For Type 1:
RewriteRule ^/portal/site/doxerorg/mydoxer/anyone$ http://www.doxer.org/shop/pc/anyone? [L,R=301,NC]
For Type 2:
RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^/portal/site/doxerorg/mydoxer/mydoxer/anyone/register$ http://www.doxer.org/shop/pc/anyone? [R=301,L,NC]
For Type 3:
RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^/portal/site/doxerorg/mydoxer/article http://www.doxer.org/mydoxer/latestnews? [R=301,L,NC]

Note:
1.In the destination URL, add '?' to the end if you don't want the query string to be auto-appended to the destination URL.
2.The [R] flag specifies a redirect instead of the usual rewrite, [L] makes this the last rewrite rule to apply, and [NC] makes the match case-insensitive.

Categories: IT Architecture Tags:

Error extras/ata_id/ata_id.c:42:23: fatal error: linux/bsg.h: No such file or directory when compile LFS udev-166

September 21st, 2011 1 comment

For "Linux From Scratch - Version 6.8", part "6.60. Udev-166": after configuring udev, we need to compile the package using make, but I then met an error message like this:

"extras/ata_id/ata_id.c:42:23: fatal error: linux/bsg.h: No such file or directory"

After checking line 42 of ata_id.c under extras/ata_id in the udev-166 source, I could see:

"#include <linux/bsg.h>"

As there's no bsg.h under $LFS/usr/include/linux, I was sure this error was caused by a missing C header file. Checking with:
root:/sources/udev-166# /lib/libc.so.6
I can see that Glibc was 2.13, and GCC was 4.5.2:

"
GNU C Library stable release version 2.13, by Roland McGrath et al.
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 4.5.2.
Compiled on a Linux 2.6.22 system on 2011-09-04.
Available extensions:
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
BIND-8.2.3-T5B
libc ABIs: UNIQUE IFUNC
For bug reporting instructions, please see:
<http://www.gnu.org/software/libc/bugs.html>.
"

After some searching on google, I concluded that this was caused by the GCC version. I was not going to rebuild gcc for this, so I tried getting the header file and putting it at $LFS/usr/include/linux/bsg.h. You can go to http://lxr.free-electrons.com/source/include/linux/bsg.h?v=2.6.25 to download the header file.

After copying, pasting, and chmod'ing, I ran make again, and it succeeded.

Categories: IT Architecture, Kernel, Linux, Systems Tags:

Relationship between san’s HA DA FA RA WWN ZONE

September 15th, 2011 2 comments

Here's the image(Thanks to Peter G):

 

EMC Symmetrix connectivity to a host in a switched fabric logical and physical architecture

Here's another image of SAN architecture of EMC symmetrix arrays:


Or you may want to download the doc one:
Logicalandphysicalsanconnectivity.doc (1)

Categories: Hardware, Storage Tags:

Want your ldap password never expired? Here goes the howto

September 6th, 2011 2 comments

First, let's check when your password will expire using ldapsearch:
root on testserver:/tmp # ldapsearch -D cn="Directory Manager" -h ldap.testserver.com -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime
Enter bind password:
version: 1
dn: uid=liandy,ou=people,dc=testserver,dc=com
passwordexpirationtime: 20111005230540Z

ldapsearch -D "cn='Directory Manager'" -h ldap.testserver.com -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime #this should work also
Now, let's create a file named passwd.dn with content:
dn:uid=liandy,ou=people,dc=testserver,dc=com
changetype:modify
replace:passwordexpirationtime
passwordexpirationtime:20120612135450Z
And the last step is to change the expiration time using ldapmodify:
root on testserver:/tmp # ldapmodify -D cn="Directory Manager" -h ldap.testserver.com -f passwd.dn
Enter bind password:
modifying entry uid=liandy,ou=people,dc=testserver,dc=com
That's all the steps needed to change the ldap password expiration time. To verify it has taken effect, run ldapsearch to show the expiration time just as in the first step:
root on testserver:/tmp # ldapsearch -D cn="Directory Manager" -h ldap.testserver.com -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime
Enter bind password:
version: 1
dn: uid=liandy,ou=people,dc=testserver,dc=com
passwordexpirationtime: 20120612135450Z
So you can see the ldap expiration time has been extended to 20120612.
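The passwordexpirationtime values above are LDAP GeneralizedTime strings (YYYYMMDDHHMMSSZ, in UTC). Rather than typing one by hand, you can generate it; a sketch assuming GNU date is available:

```shell
# Print a passwordexpirationtime one year from now, in GeneralizedTime format:
date -u -d "+1 year" +%Y%m%d%H%M%SZ
```

Paste the result into the passwd.dn file as the new passwordexpirationtime value.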
NB:
If you want to change password for liandy on ldap.testserver.com, do the following:
Create a file named passwd2.dn with content:
In passwd2.dn:
dn:uid=liandy,ou=people,dc=testserver,dc=com
changetype:modify
replace:userPassword
userPassword:EnterYourPassword

Then run ldapmodify to modify the password:
ldapmodify -D cn="Directory Manager" -h ldap.testserver.com -f passwd2.dn

If you want to modify directory manager's password, here goes the step:
Create a file passwd3.dn with content:
dn: cn=config
changetype: modify
replace: nsslapd-rootpw
nsslapd-rootpw: EnterYourPassword

Then run ldapmodify to change the password:
ldapmodify -D "cn=directory manager" -h ldap.testserver.com -f passwd3.dn

If you forget the password for the directory manager, you first need to find dse.ldif under ldap/slapd-Portal1/config, then encrypt your password, then modify nsslapd-rootpw.

Categories: IT Architecture, Linux, Systems, Unix Tags:

hp openview monitoring solution – enable disable policies

August 18th, 2011 2 comments

This document is part of the “ESM – How To” guides and gives instructions on how to disable and enable monitoring policies on servers managed by OpenView for Unix (OVOU), OpenView for Windows (OVOW) and Opsview, for blackout or maintenance purposes.

The process for disabling/enabling monitoring may differ depending on what solution is used to monitor the servers involved in the DR test.

This document lists the procedures to disable and enable monitoring in the different monitoring solutions.

1.Monitoring changes to servers monitored in OVOU(Unix/Linux)
1.1. Disabling monitoring templates
Disable the monitoring templates on the monitored server.
Log on to the server and execute the following command as the root user:
/opt/OV/bin/opctemplate -d -all

The above command disables all monitoring policies on the server.

Now disable OpenView agent heartbeat polling from the OpenView servers (server side).
Execute the following command to disable OVO agent heartbeat polling:
/opt/OV/bin/OpC/opchbp -stop <name of server>

1.2. Enabling monitoring templates
Enable the monitoring templates on the monitored server.
Log on to the server and execute the following command as the root user:
/opt/OV/bin/opctemplate -e -all

The above command enables all monitoring policies on the server.

Now enable OpenView agent heartbeat polling from the OpenView servers (server side).
Execute the following command to enable OVO agent heartbeat polling:
/opt/OV/bin/OpC/opchbp -start <name of server>

2.Disabling monitoring for servers monitored in OVOW(Windows)
Firstly, access the Web Console of OV:

http://yourserver/OVOWeb/

2.1 Disabling policy(s)
Select the Policies icon on the left-hand panel, then expand the Nodes by selecting the plus next to the nodes in the centre panel. Once you have located the server on which you want to disable policies, select it, and its policy inventory is shown in the right-hand panel. Select the policies to be disabled using the check boxes next to the policy names, or, to disable all the policies, select the check box next to the name, which selects all of them. Then select the Disable tab and the policies will be disabled; the display will show Active and pending until all the policies have been disabled. Use the Refresh tab to update the screen.

2.2 Enabling policy(s)
Please refer to 2.1 Disabling policy(s)

3.Monitoring changes for servers monitored in Opsview(Nagios)
Log on to opsview through the web interface, search for and locate the host you want to operate on, click the grey arrow on the right side of the node, and choose “Schedule downtime”. You can then choose the start and end times of the downtime for this server.

obu firmware patching for sun T2000/T5120/T5220 servers OBP patching

August 17th, 2011 No comments

Don't know why there are some formatting errors with this article; you can download the PDF version here: obu firmware patching for sun T2000-T5120-T5220 servers OBP patching

You can download Sun_System_Firmware-6_7_11-Sun_Fire_T2000 from support.oracle.com or here(For SunFireT2000)

Categories: Hardware, Servers Tags:

hpux san storage configure howto steps for veritas vxvm

August 15th, 2011 No comments

1/ Run ioscan -fnkC disk and dump the output to a file

2/ Run ioscan with no options and dump the output to a file

3/ Run ioscan -fnkC disk and check for disk devices that are not claimed

4/ Run insf -ev to allocate the new entries to /dev/dsk/xxxxx

5/ vxdisksetup / vxdiskadm to set up the disks

6/ vxdctl enable

7/ vxvm - add disks to dg etc.

Categories: Hardware, Storage Tags:

Emcgrab download url

July 27th, 2011 1 comment

Just go to ftp://ftp.emc.com/pub/emcgrab/Unix/ to select yours and download(solaris, linux, aix etc.). The latest version is emcgrab-4.4.4.

PS:

emcgrab is also known as the Remediation HEAT report. The EMC grab report generates a file that EMC or the storage team might use for diagnostics, for monitoring of storage usage by the client, and for investigating possible issues; it also dumps logs/config into this file. This procedure has proved to be safe, but it needs to be done out of hours if that's possible.

correctable ecc event detected by cpu0/1/3/4

July 20th, 2011 No comments

If you receive these kinds of alerts on Solaris, it means your server has a memory DIMM issue. Please check with hrdconf/madmin (HP-UX) or prtconf (Sun Solaris) to see the error message.

For more information about ECC memory, you can refer to the following article: http://en.wikipedia.org/wiki/ECC_memory

convert squid access.log time to human-readable format

July 13th, 2011 No comments

If you're looking at squid's access.log, you'll find the first column is the time, for example:
1310313302.640 829 58.215.75.51 TCP_MISS/200 104 CONNECT smtp.126.com:25 - DIRECT/123.125.50.112 -
1310313303.484 1845 58.215.75.51 TCP_MISS/200 104 CONNECT smtp.126.com:25 - DIRECT/123.125.50.111 -
This is not very human-readable. We can use perl to convert the squid access.log time to a human-readable format with just a one-line command:

cat /var/log/squid/access.log |perl -nwe's/^(\d+)/localtime($1)/e; print'

After this, it'll look like this:
Sun Jul 10 09:55:03 2011.484 1845 58.215.75.51 TCP_MISS/200 104 CONNECT smtp.126.com:25 - DIRECT/123.125.50.111 -
Sun Jul 10 09:55:07 2011.146 355 88.80.10.1 TCP_MISS/200 1176 GET http://spam-chaos.com/pp/set-cookie.php - DIRECT/88.80.10.1 text/html
Hey, more human-readable, right?
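If you only need to decode a single timestamp, GNU date is enough on its own; the fractional part of squid's timestamp is milliseconds and can simply be stripped:

```shell
# Render one squid epoch timestamp as a human-readable date (GNU date assumed):
ts=1310313302.640
date -u -d "@${ts%.*}"   # ${ts%.*} drops the ".640" milliseconds
```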

Resolved – ld.so.1: httpd: fatal: libaprutil-0.so.0: open failed: No such file or directory

July 9th, 2011 No comments

When I tried to start up IBM http server on my Solaris, I met this error message:
ld.so.1: httpd: fatal: libaprutil-0.so.0: open failed: No such file or directory
Killed
This error was caused by the library libaprutil-0.so.0 not being found via the current library path environment variable. We should add /apps/IBMHTTPD/Installs/IHS61-01/lib to LD_LIBRARY_PATH so that libaprutil-0.so.0 can be found by ld.so.
Here goes the resolution:
#export LD_LIBRARY_PATH=/apps/IBMHTTPD/Installs/IHS61-01/lib:$LD_LIBRARY_PATH #set environment variable
#/apps/IBMHTTPD/Installs/IHS61-01/bin/httpd -d /apps/IBMHTTPD/Installs/IHS61-01 -d /apps/IBMHTTPD/Installs/IHS61-01 -f /apps/IBMHTTPD/Installs/IHS61-01/conf/ihs.conf -k restart #restart IHS
#/usr/ucb/ps auxww|grep -i ibmhttpd #check the result

PS:

Here's more about ld.so: (from book <Optimizing Linux Performance: A Hands-On Guide to Linux Performance Tools>)

When a dynamically linked application is executed, the Linux loader, ld.so, runs first. ld.so loads all the application’s libraries and connects symbols that the application uses with the functions the libraries provide. Because different libraries were originally linked at different and possibly overlapping places in memory, the linker needs to sort through all the symbols and make sure that each lives at a different place in memory. When a symbol is moved from one virtual address to another, this is called a relocation. It takes time for the loader to do this, and it is much better if it does not need to be done at all. The prelink application aims to do that by rearranging the system libraries of the entire system so that they do not overlap. An application with a high number of relocations may not have been prelinked.

The Linux loader usually runs without any intervention from the user, and by just executing a dynamic program, it is run automatically. Although the execution of the loader is hidden from the user, it still takes time to run and can potentially slow down an application’s startup time. When you ask for loader statistics, the loader shows the amount of work it is doing and enables you to figure out whether it is a bottleneck.

The ld command is invisibly run for every Linux application that uses shared libraries. By setting the appropriate environment variables, we can ask it to dump information about its execution. The following invocation influences ld execution:
env LD_DEBUG=statistics,help LD_DEBUG_OUTPUT=filename <command>
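A concrete run, using just the statistics class against /bin/true (glibc assumed; the output file gets the PID appended to its name):

```shell
# Dump loader statistics for a trivial dynamic binary:
rm -f /tmp/ld_stats.*
env LD_DEBUG=statistics LD_DEBUG_OUTPUT=/tmp/ld_stats /bin/true
cat /tmp/ld_stats.*   # shows relocation counts and startup time spent in ld.so
```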

How to check BerkeleyDB version info

June 22nd, 2011 No comments

How to check BerkeleyDB version info? Just run this command:

#grep DB_VERSION_STRING /usr/include/db4/db.h

#define DB_VERSION_STRING       "Sleepycat Software: Berkeley DB 4.3.29: (September 19, 2009)"

Then your BerkeleyDB version is 4.3.29.

If there's no /usr/include/db4 directory, check whether there's a /usr/include/db3 or even /usr/include/db2 directory on your server, and run the respective command with db4 substituted by db3 or db2.

mkfile/dd to create a file with specific size

June 21st, 2011 3 comments

Now assume you want to create a file with 10M space:
Under Solaris:
root@beisoltest02 / # mkfile 10m disk1.img
root@beisoltest02 / # ls -lh disk1.img
-rw------T   1 root     root         10M Jun 22 00:41 disk1.img

Under Linux:
[root@beivcs02 downloads]#  dd if=/dev/zero of=disk1.img bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.01096 seconds, 957 MB/s
[root@beivcs02 downloads]# ls -lh disk1.img
-rw-r--r-- 1 root root 10M Jun 22 00:37 disk1.img

PS:

There are many other options you can use to control dd behavior, such as cache. You can refer to here and here.
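On Linux there are also quicker ways than writing zeros through dd; a sketch assuming coreutils' truncate is available (the file name is arbitrary):

```shell
# Create a 10M sparse file: the size is set but no data blocks are written
truncate -s 10M disk2.img
ls -lh disk2.img   # reports 10M
du -h disk2.img    # actual allocation is (close to) zero
```

If you need the blocks actually reserved (e.g. for swap files), fallocate -l 10M does that without writing zeros either.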

Categories: IT Architecture, Linux, Systems Tags:

Nagios Check_nrpe/check_disk with error message DISK CRITICAL – /apps/yourxxx is not accessible: Permission denied

June 21st, 2011 1 comment

This is sometimes because the user that nagios runs as has no read permission on the file systems check_disk is going to check.

For example, if you received alert:

DISK CRITICAL - /apps/vcc/logs/way is not accessible: Permission denied

You can then log on to your server and, as root, run:

/usr/local/nagios/libexec/check_disk -p /apps/vcc/logs/way

You may see:

DISK OK - free space: /apps/vcc/logs/way 823 MB (21% inode=90%);| /=2938MB;;3966;0;3966

But when you run this under user nagios, you may see DISK CRITICAL again.

Resolution:

Grant read permission on the filesystem/directory that had the problem to the user nagios runs as.

 

DNS flush on linux/windows/mac

June 17th, 2011 No comments
Here're ways to flush dns cache on linux/windows/mac:

    On Linux:
    rndc flush
    or
    /etc/init.d/nscd restart

    On Windows:
    ipconfig /flushdns

    On Mac:
    sudo dscacheutil -flushcache

    You can install DNS cache plug-in to automatically flush DNS cache for you if you have Firefox installed.

     

    Categories: IT Architecture, Linux, Systems Tags:

    How to resize/shrink lvm home/root partition/filesystem dynamically

    June 8th, 2011 2 comments

    When you're trying to extend root file system under lvm environment, here are the steps:

    1.Extend the root volume:

    #lvextend -L500G /dev/VolGroup00/apps-yourapp

    2.Grow file system:

    #resize2fs /dev/VolGroup00/apps-yourapp 500G

    Shrinking can’t be done that easily and requires an umount. To shrink the FS, do the following:

    umount <logical-volume-device>
    e2fsck -f <logical-volume-device>
    resize2fs <logical-volume-device> <new-size-minus-50MB>
    lvreduce -L <new-size> <logical-volume-device>

    This procedure is for normal(not root) file systems. But what should we do when we want to shrink root/home partition/file system?

    You need to go into Linux rescue mode using a bootable device (such as the first CD of your Linux distro): type linux rescue, and when an alert pops up asking whether to mount the system to /mnt/sysimage, select Skip.

    When you're in the shell of rescue mode, do the following steps:

    1.lvm vgchange -a y

    2.e2fsck -f /dev/VolGroup00/LogVol00

    3.resize2fs -f /dev/VolGroup00/LogVol00 500G

    4.lvm lvreduce -L500G /dev/VolGroup00/LogVol00

    Then reboot your system. After it starts up, run df -h to check the result, and use vgs to see VSize and VFree.

    Enjoy.

     

     

    Interesting tty/pty/pts experiment

    June 8th, 2011 No comments

    Now, open two tabs on the same server using xshell, and in each tab (session) run tty:

    root@testserver# tty

    /dev/pts/7

    root@testserver# tty

    /dev/pts/25

    First experiment - echo a string between ttys

    From /dev/pts/7, run:

    root@testserver# echo hi>/dev/pts/25

    Then, from the other tab(session), you're sure to see:

    root@testserver# hi

    To be continued...

    Grow/shrink vxfs file system(or volume) dynamically using vxresize

    June 5th, 2011 No comments

    I've already used vxassist to create a disk group, make a file system, and mount it on the OS, and that partition is now in use. How can I grow/shrink the size of the vxvm file system dynamically?

    Three steps:

    1.Which disk group does the file system use?

    In my scenario, I have /user, which is the mount point of volume user_vol, and that volume belongs to the andy_li disk group. As I created them, I have a clear picture of this. But how can you tell which disk group a file system/volume belongs to?

    #df -h /user

    /dev/vx/dsk/andy_li/user_vol                      1.0G   18M  944M   2% /user

    Now you can see, /user file system belongs to andy_li disk group.

    2. Now let's check how much space is left in the disk group that we can use for growing /user:

    #vxdg -g andy_li free

    GROUP        DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS

    andy_li      andy_li01    sdb          sdb          3121152   999168 -

    999168 blocks; VxVM lengths are in 512-byte blocks, so that's about 500MB.
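The blocks-to-megabytes conversion is simple shell arithmetic, assuming VxVM's usual 512-byte block size:

```shell
# 999168 blocks * 512 bytes, expressed in MB:
echo "$(( 999168 * 512 / 1024 / 1024 )) MB"   # prints "487 MB", i.e. roughly 500MB
```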

    3. The last and most important thing is to grow the file system:

    /etc/vx/bin/vxresize -b -g andy_li user_vol +999168 alloc=andy_li01

    Ok, after this operation, let's check the file system's size again:

    #df -h /user/

    Filesystem            Size  Used Avail Use% Mounted on

    /dev/vx/dsk/andy_li/user_vol                      1.5G 18M  1.4G   2% /user

    That's all. And vice versa: you can use minus (-) instead of plus (+) to shrink the file system.

    Categories: Hardware, Storage Tags:

    Symantec Netbackup 7.1 reference manual(commands) download

    May 30th, 2011 No comments

    Extend filesystem in vxvm which connects to SAN fibre channel storage

    May 28th, 2011 No comments

    Firstly, please refer to http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/scanning-storage-interconnects.html for some pre-checking, like memory usage, sync etc.
    Then, we should scan for new disks connected to the HBAs one by one (issue_lip; scenario: fabric on Linux with Emulex HBAs), and we should check that the dmpnodes are ALL in ENABLED state before moving on to the next HBA. This is because scanning for new disks is expected to temporarily disable paths on that controller, so move on only when the paths are confirmed enabled again. During the whole procedure, tail -f /var/log/messages will help.

    1) Checked the messages file for any existing issues. Identified and eliminated concerns about I/O error messages which had been occurring for some time:
    May 29 04:11:05 testserver01 kernel: end_request: I/O error, dev sdft, sector 0^M
    May 29 04:11:05 testserver01 kernel: Buffer I/O error on device sdft, logical block 0^M
    May 29 04:11:05 testserver01 kernel: end_request: I/O error, dev sdft, sector 0^M
    /var/log/messages.1:May 27 22:48:04 testserver01 kernel: end_request: I/O error, dev sdjt, sector 8
    /var/log/messages.1:May 27 22:48:04 testserver01 kernel: Buffer I/O error on device sdjt3, logical block 1
    2) Saved some output for comparison later:
    syminq -pdevfile > /var/tmp/syminq-pdevfile-prior
    We expected the device as hyper 2C27, so I looked for this device in the output and, as expected, did not find it.
    vxdisk -o  alldgs list > /var/tmp/vxdisk-oalldgs.list
    3) Checked the current state of the VM devices' DMP subpaths:
    for disk in `vxdisk -q list | awk '{print $1}'`
    do
    echo $disk
    vxdmpadm getsubpaths dmpnodename=${disk}
    done | tee -a /var/tmp/getsubpaths.out
    Checked the output to make sure all but the root disk (expected) had two enabled paths:
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    cciss/c0d0   ENABLED(A)   -          c360       OTHER_DISKS  OTHER_DISKS
    sda
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sda          ENABLED(A)   -          c0         EMC          EMC0             -
    sddc         ENABLED(A)   -          c1         EMC          EMC0             -
    sdae
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sdeh         ENABLED(A)   -          c1         EMC          EMC0             -
    sdp          ENABLED(A)   -          c0         EMC          EMC0             -
    sdaf
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sdej         ENABLED(A)   -          c1         EMC          EMC0             -
    sdq          ENABLED(A)   -          c0         EMC          EMC0             -
    sdai
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sdai         ENABLED(A)   -          c0         EMC          EMC0             -
    sdfr         ENABLED(A)   -          c1         EMC          EMC0             -
    ... etc
    Ran other commands like:
    vxdg list (to ensure all diskgroups were enabled)
    df -k (no existing filesystem problems)
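Rather than eyeballing the saved getsubpaths output, the two-paths check can be scripted. This is a hedged sketch that assumes the file keeps the exact layout shown above (dmpnode name on its own line, then header, separator, and subpath rows); the here-document below is a trimmed sample, and on a real host you would feed it /var/tmp/getsubpaths.out instead:

```shell
# Count ENABLED subpaths per dmpnode and flag any device with fewer than two.
suspect=$(awk '
    /^================/ { next }                          # separator lines
    /^NAME[ \t]/        { next }                          # column headers
    NF == 1             { if (dev != "") print dev, n; dev = $1; n = 0; next }
    $2 ~ /^ENABLED/     { n++ }                           # count enabled rows
    END                 { if (dev != "") print dev, n }
' <<'EOF' | awk '$2 < 2 { print $1 }'
sda
NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
================================================================================
sda          ENABLED(A)   -          c0         EMC          EMC0             -
sddc         ENABLED(A)   -          c1         EMC          EMC0             -
sdae
NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
================================================================================
sdeh         ENABLED(A)   -          c1         EMC          EMC0             -
EOF
)
echo "devices with <2 enabled paths: ${suspect:-none}"
```

In the sample, sdae only shows one enabled path, so it gets flagged; against a healthy host the list should be empty.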

    NOTE:

    You can get your <dmpnodename> by running:
    # vxdisk path | grep emcC5D3
    Note that it is listed as DANAME (not SUBPATH).
    4)Tried to scan the first SCSI bus
    root@testserver01# pwd
    /sys/class/scsi_host/host0
    root@testserver01# echo '- - -' > scan
    Waited to see if the scan would detect any new devices, monitoring the messages log for any entries relating to the scan.
    Checked output of syminq and checked the subpaths for each dmpnode as above. All remained ENABLED and no change.
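On a host with more than one SCSI host adapter, the same wildcard scan can be looped over every adapter. A hedged sketch (it needs root on a real system; hosts without a scan file are skipped, so it is safe to dry-run):

```shell
# Rescan every SCSI host bus, not just host0. '- - -' is the wildcard for
# channel, target, and LUN. Unwritable/missing scan files are just reported.
for host in /sys/class/scsi_host/host*; do
    [ -e "$host/scan" ] || continue
    if echo "- - -" > "$host/scan" 2>/dev/null; then
        echo "rescanned ${host##*/}"
    else
        echo "skipped ${host##*/} (not writable)"
    fi
done
scan_status=done
echo "scsi rescan pass: $scan_status"
```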
    5)Moved on to issuing a force LIP to the fibre path
    a. Issuing the force LIP
    root@testserver01# pwd
    /sys/class/fc_host/host0
    root@testserver01# echo "1" > issue_lip
    The command returned, and I monitored the messages log, waiting for the disabled-path messages to appear (as expected):
    May 31 02:44:49 testserver01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x80 belonging to the dmpnode 201/0x540
    May 31 02:44:49 testserver01 kernel: VxVM vxdmp V-5-0-112 disabled path 67/0xa0 belonging to the dmpnode 201/0x5f0
    May 31 02:44:49 testserver01 kernel: VxVM vxdmp V-5-0-112 disabled path 67/0xb0 belonging to the dmpnode 201/0x6c0
    May 31 02:44:49 testserver01 kernel: VxVM vxdmp V-5-0-112 disabled path 128/0x60 belonging to the dmpnode 201/0x160
    ... etc
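A quick way to keep score of these events is to grep the messages log for the V-5-0-112 message ID shown above. A sketch, with a here-document standing in for /var/log/messages:

```shell
# Count the DMP "disabled path" events; enabled-path (V-5-0-148) lines are
# deliberately not matched, so only the LIP-induced failures are counted.
disabled_count=$(grep -c 'V-5-0-112 disabled path' <<'EOF'
May 31 02:44:49 testserver01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x80 belonging to the dmpnode 201/0x540
May 31 02:44:49 testserver01 kernel: VxVM vxdmp V-5-0-112 disabled path 67/0xa0 belonging to the dmpnode 201/0x5f0
May 31 02:49:43 testserver01 kernel: VxVM vxdmp V-5-0-148 enabled path 129/0x90 belonging to the dmpnode 201/0x10
EOF
)
echo "disabled-path events so far: $disabled_count"
```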
    I checked the output of vxdmpadm getsubpaths for each device to confirm the paths had gone offline:
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sddw         ENABLED(A)   -          c0         EMC          EMC0             -
    sdiv         ENABLED(A)   -          c1         EMC          EMC0             -
    sddm
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sdbg         DISABLED     -          c0         EMC          EMC0             -
    sdgq         ENABLED(A)   -          c1         EMC          EMC0             -
    sddn
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sddz         ENABLED(A)   -          c0         EMC          EMC0             -
    sdix         ENABLED(A)   -          c1         EMC          EMC0             -
    sddo
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
    ================================================================================
    sdbh         DISABLED     -          c0         EMC          EMC0             -
    sdgr         ENABLED(A)   -          c1         EMC          EMC0             -
    sddp

    Not all the paths were affected at once - a few minutes' wait confirmed they all went down as expected. The secondary path remained ENABLED, as expected.
    I waited a little longer to estimate how long all the paths took to go down - around 4 minutes.
    b. Rescanning the fibre channel
    Before moving to the other path, I had Volume Manager rescan the device bus to trigger DMP to wake up the DISABLED paths.
    root@testserver01# vxdisk scandisks fabric
    I waited until all the primary paths became ENABLED again, checking for the DMP enabled messages in the messages log:
    May 31 02:49:43 testserver01 kernel: VxVM vxdmp V-5-0-148 enabled path 129/0x90 belonging to the dmpnode 201/0x10
    May 31 02:49:43 testserver01 kernel: VxVM vxdmp V-5-0-148 enabled path 129/0x50 belonging to the dmpnode 201/0x20
    May 31 02:49:43 testserver01 kernel: VxVM vxdmp V-5-0-148 enabled path 128/0x40 belonging to the dmpnode 201/0x30
    May 31 02:49:43 testserver01 kernel: VxVM vxdmp V-5-0-148 enabled path 129/0x30 belonging to the dmpnode 201/0x40
    .... etc
    I also checked the vxdmpadm getsubpaths output until all primary paths returned to the ENABLED state.
    I did the same for the second host controller at /sys/class/fc_host/host1.
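The manual re-checking in step 5b can be turned into a bounded poll. A sketch under the assumption that vxdmpadm getsubpaths with no arguments lists all subpaths (true on recent VxVM versions); when vxdmpadm is not installed, a stub is defined so the sketch can be dry-run anywhere:

```shell
# Poll until no DISABLED subpaths remain, giving up after 5 minutes.
if ! command -v vxdmpadm >/dev/null 2>&1; then
    # stub for dry-running on a host without VxVM (hypothetical sample row)
    vxdmpadm() { echo "sdbg ENABLED(A) - c0 EMC EMC0 -"; }
fi
deadline=$(( $(date +%s) + 300 ))
status=pending
while vxdmpadm getsubpaths 2>/dev/null | grep -q DISABLED; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
        status=timeout
        break
    fi
    sleep 10
done
[ "$status" = pending ] && status=clear
echo "subpath state: $status"
```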
    6)Checked for new disk devices:
    Tried syminq and/or symcfg disco

    After this, you can extend the VxVM filesystem to the size you want, and re-import devices after that. Please refer to http://www.doxer.org/extending-filesystems-on-lvm-vxvmvxfs-how-to/ for more details.

    panic cpu thread page_unlock is not locked issue when using centos xen to create solaris 10

    May 25th, 2011 No comments

    Don't panic.

    You can allocate more memory to the Solaris virtual machine (e.g. 1024 MB) and try again.

    In the Sun Forums thread, they say that 609 MB is the lowest you can go. Give it a little more memory if you can spare it.

    Use xming, xshell, putty, tightvnc to display linux gui on windows desktop (x11 forwarding when behind firewall)

    May 24th, 2011 10 comments

    Q1:How do I run X11 applications through Xming when there's no firewall?

    Step 1 - Configure Xming

    Let's assume that you want to run xclock on the solaris/linux server 192.168.0.3, and want the GUI displayed on your PC, whose IP is 192.168.0.4.

    First, download Xming and install it on your Windows PC.

    You can go to http://sourceforge.net/projects/xming/files/ to download it.

    After this, you need to add 192.168.0.3 (linux/solaris) to the allowed-server list on your Windows PC. Edit X0.hosts, which is located in the Xming installation directory (for example, C:\Program Files\Xming\X0.hosts), and add a new entry to it: 192.168.0.3, the IP address of the linux/solaris host that you want to run the X11 utility from.

    Then, restart Xming (C:\Program Files\Xming\xming.exe) on your Windows PC.

    Step 2 - Connect to remote host, configure it, and run X11 application

    Log in to the linux/solaris server 192.168.0.3. Set the environment variable DISPLAY to the IP address of your Windows PC, with :0 appended:

    #export DISPLAY=192.168.0.4:0
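For reference, the :0 suffix is the X display number, and display number n corresponds to TCP port 6000+n on the machine running the X server (the Xming PC here). A quick sketch of that mapping, using the example address above:

```shell
DISPLAY=192.168.0.4:0          # as exported above
host=${DISPLAY%:*}             # part before the colon: the Xming PC
dpy=${DISPLAY##*:}             # display (and optional .screen) after the colon
port=$(( 6000 + ${dpy%%.*} ))  # X display n listens on TCP port 6000+n
echo "X clients connect to $host, TCP port $port"
```

So any firewall between the two machines must allow the PC to accept inbound connections on port 6000, which is exactly why Q2 below switches to SSH tunneling.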

    Then you must allow X11 forwarding in the sshd configuration file. That is, set X11Forwarding to yes in /etc/ssh/sshd_config and restart the sshd daemon.
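The sshd_config edit can be scripted with GNU sed. A hedged sketch, demonstrated against a scratch copy rather than the live file (point it at /etc/ssh/sshd_config as root to do it for real, then restart sshd):

```shell
CFG=$(mktemp)                       # scratch stand-in for /etc/ssh/sshd_config
printf '%s\n' '#X11Forwarding no' 'X11DisplayOffset 10' > "$CFG"
# flip the (possibly commented-out) X11Forwarding directive to yes
sed -i 's/^#\{0,1\}X11Forwarding.*/X11Forwarding yes/' "$CFG"   # GNU sed -i
x11_line=$(grep '^X11Forwarding' "$CFG")
echo "$x11_line"
rm -f "$CFG"
```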

    And on the solaris/linux server (192.168.0.3), run an X11 program, such as:

    /usr/bin/xclock #or /usr/openwin/bin/xclock on solaris

    You will then see a clock gui pop up in your windows pc.

    PS: You may need to install xorg-x11-xauth on the remote host if you get an error starting xclock.

    Q2:How do I run X11 applications from remote host when that host is behind firewall?

    If the remote host is behind a firewall, the method above will not work, as the communication will be blocked unless a firewall exception is implemented. To run X11 applications from a remote host behind a firewall, follow the steps below:

    Step 1 - Configure Xming

    This step is the same as step 1 in Q1, but I'll paste it here for your convenience:

    Let's assume that you want to run xclock on the solaris/linux server 192.168.0.3, and want the GUI displayed on your PC, whose IP is 192.168.0.4.

    First, download Xming and install it on your Windows PC.

    You can go to http://sourceforge.net/projects/xming/files/ to download it.

    After this, you need to add 192.168.0.3 (linux/solaris) to the allowed-server list on your Windows PC. Edit X0.hosts, which is located in the Xming installation directory (for example, C:\Program Files\Xming\X0.hosts), and add a new entry to it: 192.168.0.3, which is the IP address of the linux/solaris host that you want to run the X11 utility from.

    Then, restart Xming (C:\Program Files\Xming\xming.exe) on your Windows PC.

    Step 2 - Configure X11 forwarding on putty/xshell

    For Xshell:

    After entering the remote hostname and login username in Xshell, go to the Tunneling tab of the Advanced SSH Options dialog box, check "Forward X11 Connections to:", select "X DISPLAY:", and enter "localhost:0.0" next to it.

    xshell_x11_forwarding

    For Putty:

    After entering the remote hostname and login username in putty, unfold "Connection" on the left pane, unfold "SSH", and then select "X11". Check "Enable X11 forwarding" and enter "localhost:0.0" next to "X display location".

    putty_x11_forwarding

    Step 3 - Connect to remote host, configure it, and run X11 application

    Log in to the linux/solaris server 192.168.0.3. Set the environment variable DISPLAY to localhost:0:

    #export DISPLAY=localhost:0 #not 192.168.0.4:0 any more!

    Then you must allow X11 forwarding in the sshd configuration file. That is, set X11Forwarding to yes in /etc/ssh/sshd_config and restart the sshd daemon.

    And on the solaris/linux server (192.168.0.3), run an X11 program, such as:

    /usr/bin/xclock #or /usr/openwin/bin/xclock on solaris

    You will then see a clock gui pop up in your windows pc.

    PS: You may need to install xorg-x11-xauth on the remote host if you get an error starting xclock.

    Q3:How do I connect to remote host through vnc client(such as tightvnc)?

    In general, you first need to install vnc-server on the remote host and configure it. Then install the tightvnc client on your PC and connect to the remote host.

    I'll show the details of installing vnc-server on the remote host below:

    yum install gnome*

    yum grouplist
    yum groupinstall "X Window System" -y
    yum groupinstall "GNOME Desktop Environment" -y
    yum groupinstall "Graphical Internet" -y
    yum groupinstall "Graphics" -y
    yum install vnc-server
    echo 'DESKTOP="GNOME"' > /etc/sysconfig/desktop
    sed -i.bak '/VNCSERVERS=/d' /etc/sysconfig/vncservers
    echo "VNCSERVERS=\"1:root\"" >> /etc/sysconfig/vncservers

    mkdir -p /root/.vnc
    vi /root/.vnc/xstartup

    #!/bin/sh

    # Uncomment the following two lines for normal desktop:
    # unset SESSION_MANAGER
    # exec /etc/X11/xinit/xinitrc

    [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
    [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
    xsetroot -solid grey
    vncconfig -iconic &
    #xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
    #twm &
    gnome-terminal &
    gnome-session &

    vncpasswd ~/.vnc/passwd #set password
    chmod 755 ~/.vnc ; chmod 600 ~/.vnc/passwd ; chmod 755 ~/.vnc/xstartup

    chkconfig --level 345 vncserver on
    chkconfig --list | grep vncserver
    service vncserver start

    Q4:What if I want one linux box acting as X server, another linux box as X client, and I'm not sitting behind the Linux X server (meaning I have to connect to the Linux X server through VNC)?

    This may sound complex, but it's actually simple.

    First, install vnc-server on the linux X server following the steps in "Q3: How do I connect to remote host through vnc client(such as tightvnc)?" above.

    Second, install the tightvnc viewer on the windows box you sit behind and connect to the linux X server through it (xxx.xxx.xxx.xxx:1, for example). Run xhost + <linux X client> to enable the client's access to the X server.

    Then, on the linux X client, export DISPLAY to xxx.xxx.xxx.xxx:1, which points at the linux X server. Run an X program such as xclock, and you'll see the clock displayed in the X session in your windows tightvnc viewer.

    PS:

    1. You can change vncserver's resolution by editing /usr/bin/vncserver: change the default $geometry = "1024x768"; to any value you like, for example $geometry = "1600x900";. You can also control each user's VNC resolution by adding a line like VNCSERVERARGS[1]="-geometry 1600x900" in /etc/sysconfig/vncservers.

    2.For vnc server on ubuntu, you can refer to http://www.doxer.org/ubuntu-server-gnome-desktop-and-vncserver-configuration/

    Analysis of output by solaris format -> verify

    May 21st, 2011 No comments

    Here's the output of the format -> verify command on my Solaris 10 system:

    format> verify

    Primary label contents:

    Volume name = < >
    ascii name =
    pcyl = 2609
    ncyl = 2607
    acyl = 2
    bcyl = 0
    nhead = 255
    nsect = 63
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 1306 10.00GB (1306/0/0) 20980890
    1 var wm 1307 - 2351 8.01GB (1045/0/0) 16787925
    2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
    3 stand wm 2352 - 2606 1.95GB (255/0/0) 4096575
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0

    Now, let's analyze it:

    • Part

    Under Solaris x86, a disk has ten slices, numbered 0-9; slices 8 and 9 are reserved by Solaris (slice 8 holds the boot information).

    • Tag

    This is used to indicate the purpose of the slice. Possible values are:

    unassigned, boot, root, swap, usr, backup, stand, home, public, and private (the last two are used by Sun StorEdge).

    • Flag

    wm - this slice is writable and mountable.

    wu - this slice is writable and unmountable.

    rm - this slice is read-only and mountable.

    ru - this slice is read-only and unmountable.

    • Cylinders

    This part shows the start and end cylinder number of the slice.

    • Size

    The size of the slice.

    • Blocks

    This shows the slice extent as a (cylinders/heads/sectors) triple, followed by the total number of 512-byte blocks in the slice.
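The Size and Blocks columns follow from the geometry fields printed in the label above (nhead = 255, nsect = 63): one cylinder holds nhead * nsect = 16065 blocks of 512 bytes. A quick sanity check of slice 0:

```shell
nhead=255; nsect=63                          # geometry from the label above
cyls=1306                                    # slice 0 spans cylinders 1 - 1306
blocks=$(( cyls * nhead * nsect ))           # 20980890, matching the Blocks column
size_gb=$(awk -v b="$blocks" 'BEGIN { printf "%.2f", b * 512 / (1024 ^ 3) }')
echo "slice 0: $blocks blocks = ${size_gb}GB"
```

This reproduces the 10.00GB / 20980890 figures shown for the root slice.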

    Now, let's create a slice and mount the filesystem:

    root@test / # format
    Searching for disks...done

    AVAILABLE DISK SELECTIONS:
    0. c1t0d0
    /pci@0,0/pci15ad,1976@10/sd@0,0
    1. c1t1d0
    /pci@0,0/pci15ad,1976@10/sd@1,0
    Specify disk (enter its number): 1 #select this disk
    selecting c1t1d0
    [disk formatted]

    FORMAT MENU:
    disk - select a disk
    type - select (define) a disk type
    partition - select (define) a partition table
    current - describe the current disk
    format - format and analyze the disk
    fdisk - run the fdisk program
    repair - repair a defective sector
    label - write label to the disk
    analyze - surface analysis
    defect - defect list management
    backup - search for backup labels
    verify - read and display labels
    save - save new disk/partition definitions
    inquiry - show vendor, product and revision
    volname - set 8-character volume name
    !<cmd> - execute <cmd>, then return
    quit
    format> partition #select partition to check and create new slice

    PARTITION MENU:
    0 - change `0' partition
    1 - change `1' partition
    2 - change `2' partition
    3 - change `3' partition
    4 - change `4' partition
    5 - change `5' partition
    6 - change `6' partition
    7 - change `7' partition
    select - select a predefined table
    modify - modify a predefined partition table
    name - name the current table
    print - display the current table
    label - write partition map and label to the disk
    !<cmd> - execute <cmd>, then return
    quit
    partition> print #check slice topology
    Current partition table (original):
    Total disk cylinders available: 2607 + 2 (reserved cylinders)

    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 1306 10.00GB (1306/0/0) 20980890
    1 var wm 1307 - 2351 8.01GB (1045/0/0) 16787925
    2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0

    partition> 3 #select an unassigned slice. It will be /dev/rdsk/c1t1d0s3 after saving to format.dat
    Part Tag Flag Cylinders Size Blocks
    3 unassigned wm 0 0 (0/0/0) 0

    Enter partition id tag[unassigned]: stand
    Enter partition permission flags[wm]:
    Enter new starting cyl[1]: 2352
    Enter partition size[0b, 0c, 2352e, 0.00mb, 0.00gb]: $
    partition> label #write label to disk
    Ready to label disk, continue? y

    partition> name #name the current table
    Enter table name (remember quotes): hah

    partition> quit

    FORMAT MENU:
    disk - select a disk
    type - select (define) a disk type
    partition - select (define) a partition table
    current - describe the current disk
    format - format and analyze the disk
    fdisk - run the fdisk program
    repair - repair a defective sector
    label - write label to the disk
    analyze - surface analysis
    defect - defect list management
    backup - search for backup labels
    verify - read and display labels
    save - save new disk/partition definitions
    inquiry - show vendor, product and revision
    volname - set 8-character volume name
    !<cmd> - execute <cmd>, then return
    quit
    format> save #save new disk/partition definitions
    Saving new disk and partition definitions
    Enter file name["./format.dat"]:
    format> quit

    root@test / # newfs /dev/rdsk/c1t1d0s3 #create filesystem on newly created slice
    newfs: construct a new file system /dev/rdsk/c1t1d0s3: (y/n)? y
    Warning: 1474 sector(s) in last cylinder unallocated
    /dev/rdsk/c1t1d0s3: 4096574 sectors in 667 cylinders of 48 tracks, 128 sectors
    2000.3MB in 42 cyl groups (16 c/g, 48.00MB/g, 11648 i/g)
    super-block backups (for fsck -F ufs -o b=#) at:
    32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
    3149856, 3248288, 3346720, 3445152, 3543584, 3642016, 3740448, 3838880,
    3937312, 4035744
    root@test / # fstyp /dev/dsk/c1t1d0s3 #check the filesystem type
    ufs
    root@test / # mkdir /hah
    root@test / # mount /dev/dsk/c1t1d0s3 /hah #mount filesystem
    root@test / # cd /hah/
    root@test hah # touch aa #create a file to have a test
    root@test hah # ls #finished, congratulations!
    aa lost+found