Archive

Author Archive

Linux hostname domainname dnsdomainname nisdomainname ypdomainname

December 20th, 2011 No comments

Here’s an excerpt from the online man page of “domainname”:

NAME
hostname – show or set the system’s host name
domainname – show or set the system’s NIS/YP domain name
dnsdomainname – show the system’s DNS domain name
nisdomainname – show or set system’s NIS/YP domain name
ypdomainname – show or set the system’s NIS/YP domain name
hostname will print the name of the system as returned by the gethostname(2) function.

domainname, nisdomainname, ypdomainname will print the name of the system as returned by the getdomainname(2) function. This is also known as the YP/NIS domain name of the system.

dnsdomainname will print the domain part of the FQDN (Fully Qualified Domain Name). The complete FQDN of the system is returned with hostname --fqdn.

Sometimes you may run into a weird situation: you can use LDAP authentication to log on to a client, but you cannot sudo to root. In that case, run domainname to check whether it is set to (none). If it is, set the domain name using the domainname command.
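A quick check along those lines can be scripted; a minimal sketch, assuming a host where the domainname command is available:

```shell
# Warn when the NIS/YP domain name is unset, which is the symptom
# described above (LDAP logins work, but sudo to root fails).
dn=$(domainname 2>/dev/null)
if [ -z "$dn" ] || [ "$dn" = "(none)" ]; then
  echo "NIS domain name not set - run: domainname <your-domain>"
else
  echo "NIS domain name: $dn"
fi
```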

Categories: Kernel, Linux, Unix Tags:

perl modules install and uninstall and list – using cpan

November 24th, 2011 No comments

If you want to install perl module SOAP::Lite using cpan for example, here’s the command line:

perl -MCPAN -e 'install SOAP::Lite'

To test whether the module has been installed or not, run this:

perl -MSOAP::Lite -e 'print "Module installed.\n";'
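Because -M loads the module before -e runs, a missing module makes perl exit non-zero, so the check can be wrapped to report either way; a sketch (SOAP::Lite is just the example from above and may well not be installed):

```shell
# Probe for a module without dying on failure.
if perl -MSOAP::Lite -e 1 2>/dev/null; then
  echo "SOAP::Lite installed"
else
  echo "SOAP::Lite missing"
fi
```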

Also, what about uninstallation?

Under Windows:
If you have the ActiveState distro, go to the shell/command prompt and enter PPM. When PPM opens, type
“remove [package name]”; this will uninstall that particular package for you. Type “help remove” in PPM for more details.
Alternatively, if you know what files came with the package, just delete them. Be aware that some modules rely on other modules being installed, so make sure there are no dependencies before you delete packages.

Under Linux:
Delete the module file, for example:
/usr/local/lib/perl5/site_perl/5.8.X/Module/Module.pm
OR
/usr/local/lib/perl5/site_perl/5.8.X/Module.pm
depending on the module. (The path might be different on your system; check with perl -e 'print join("\n", @INC);')
Also see:

http://www.cpan.org/misc/cpan-faq.html

Generally there is little reason to remove a module, which is probably why CPAN doesn’t provide this function.

Oracle views – Static Data Dictionary Views & Dynamic Performance Views

November 5th, 2011 1 comment

Views are customized presentations of data in one or more tables or other views. You can think of them as stored queries. Views do not actually contain data, but instead derive their data from the tables upon which they are based. These tables are referred to as the base tables of the view.

Similar to tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on a view actually affect the base tables of the view. Views can provide an additional level of security by restricting access to a predetermined set of rows and columns of a table. They can also hide data complexity and store complex queries.

Many important views are in the SYS schema. There are two types: static data dictionary views and dynamic performance views. Complete descriptions of the views in the SYS schema are in Oracle Database Reference.

Static Data Dictionary Views

The data dictionary views are called static views because they change infrequently, only when a change is made to the data dictionary. Examples of data dictionary changes include creating a new table or granting a privilege to a user.

Many data dictionary tables have three corresponding views:

  • A DBA_ view displays all relevant information in the entire database. DBA_ views are intended only for administrators. An example of a DBA_ view is DBA_TABLESPACES, which contains one row for each tablespace in the database.
  • An ALL_ view displays all the information accessible to the current user, including information from the schema of the current user, and information from objects in other schemas, if the current user has access to those objects through privileges or roles. An example of an ALL_ view is ALL_TABLES, which contains one row for every table for which the user has object privileges.
  • A USER_ view displays all the information from the schema of the current user. No special privileges are required to query these views. An example of a USER_ view is USER_TABLES, which contains one row for every table owned by the user.

The columns in the DBA_, ALL_, and USER_ views are usually nearly identical.

Dynamic Performance Views

Dynamic performance views monitor ongoing database activity. They are available only to administrators. The names of dynamic performance views start with the characters V$. For this reason, these views are often referred to as V$ views.

An example of a V$ view is V$SGA, which returns the current sizes of various System Global Area (SGA) memory components.

Categories: Databases Tags:

Partitioned Tables and Indexes & Compressed Tables of oracle database

November 5th, 2011 No comments
1.Partitioned Tables and Indexes

You can partition tables and indexes. Partitioning helps to support very large tables and indexes by enabling you to divide the tables and indexes into smaller and more manageable pieces called partitions. SQL queries and DML statements do not have to be modified to access partitioned tables and indexes. Partitioning is transparent to the application.

After partitions are defined, certain operations become more efficient. For example, for some queries, the database can generate query results by accessing only a subset of partitions, rather than the entire table. This technique (called partition pruning) can provide order-of-magnitude gains in improved performance. In addition, data management operations can take place at the partition level, rather than on the entire table. This results in reduced times for operations such as data loads; index creation and rebuilding; and backup and recovery.

Each partition can be stored in its own tablespace, independent of other partitions. Because different tablespaces can be on different disks, this provides a table structure that can be better tuned for availability and performance. Storing partitions in different tablespaces on separate disks can also optimize available storage usage, because frequently accessed data can be placed on high-performance disks, and infrequently retrieved data can be placed on less expensive storage.

Partitioning is useful for many types of applications that manage large volumes of data. Online transaction processing (OLTP) systems often benefit from improvements in manageability and availability, while data warehousing systems benefit from increased performance and manageability.

As with tables, you can partition an index. In most situations, it is useful to partition an index when the associated table is partitioned, and to partition the index using the same partitioning scheme as the table. (For example, if the table is range-partitioned by sales date, then you create an index on sales date and partition the index using the same ranges as the table partitions.) This is known as a local partitioned index. However, you do not have to partition an index using the same partitioning scheme as its table. You can also create a nonpartitioned, or global, index on a partitioned table.

2.Compressed Tables

Table Compression is suitable for both OLTP applications and data warehousing applications. Compressed tables require less disk storage and result in improved query performance due to reduced I/O and buffer cache requirements. Compression is transparent to applications and incurs minimal overhead during bulk loading or regular DML operations such as INSERT, UPDATE or DELETE.

Extending tmpfs’ed /tmp on Solaris 10(and linux) without reboot

November 3rd, 2011 No comments

Thanks to Eugene.

If you need to extend /tmp that is using tmpfs on a Solaris 10 global zone (this works with zones too, but needs adjustments) and don’t want to undertake a reboot, here’s a tried and working solution.

PLEASE BE CAREFUL, ONE ERROR HERE WILL KILL THE LIVE KERNEL!

echo "$(echo $(echo ::fsinfo | mdb -k | grep /tmp | head -1 | awk '{print $1}')::print vfs_t vfs_data \| ::print -ta struct tmount tm_anonmax | mdb -k | awk '{print $1}')/Z 0x20000" | mdb -kw

Note the 0x20000. This number means the new size will be 1GB. It is calculated like this: as an example, 0x10000 in hex is 65536, or 64K. The size is set in pages, and each page is 8KB, so the resulting allocation is 64K pages * 8KB = 512MB. 0x20000 is 1GB, 0x40000 is 2GB, and so on.
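The arithmetic is easy to sanity-check with shell arithmetic before touching mdb; a sketch, assuming the 8KB page size mentioned above:

```shell
# tm_anonmax is a page count; convert it to bytes and GB.
pages=$((0x20000))        # the value poked into the kernel above
bytes=$((pages * 8192))   # 8KB per page
echo "$pages pages = $bytes bytes = $((bytes / 1073741824))GB"
# prints: 131072 pages = 1073741824 bytes = 1GB
```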

If the server has zones, you will see more than one entry in ::fsinfo, and you need to feed the exact struct address to mdb. This way you can change the /tmp size for individual zones, but this can only be done from the global zone.

The same approach can probably be applied to older Solaris releases, but it will definitely need adjustments. Oh, and in case you care, on Linux it’s as simple as "mount -o remount,size=1G /tmp" :)

 

Categories: Kernel, Unix Tags:

Why BST/DST (British Summer Time, Daylight Saving Time) achieves the goal of saving energy

October 31st, 2011 No comments

Greenwich Mean Time

Tonight (30th October 2011), we’ll welcome GMT (Greenwich Mean Time) and wave goodbye to BST (British Summer Time, Daylight Saving Time) at 01:59:59 on 30th October 2011.

Why does BST/DST (British Summer Time, Daylight Saving Time) achieve the goal of saving energy? After all, I’m still going to work 8 hours a day, 365 days a year (maybe too much? :D). OK, here’s what I think.

As we can all experience, the sun rises earlier in summer than in autumn. So when you wake up in the morning, you do not need to turn on the light, and that does save energy. As another extreme example: one man sleeps all day and plays games all night, while another sleeps all night and works all day; which one do you think is more environmentally friendly? Actually, this is why Benjamin Franklin, the man on the $100 bill, suggested DST (Daylight Saving Time).

Benjamin Franklin

Categories: Life Tags:

List of build automation software

October 29th, 2011 No comments

Things like make/ant/hudson etc. See this url for details: http://en.wikipedia.org/wiki/List_of_build_automation_software

Categories: IT Architecture Tags:

vcs architecture overview

October 28th, 2011 No comments
vcs architecture overview

Possible issue with latest version (6.8) of Oracle Explorer(vulnerability)

October 24th, 2011 No comments

I have been made aware of a potential issue with the latest version of Explorer (6.8). The issue is that Explorer calls a command (pcitool) that can cause a system to crash.
Check the version currently on your OS:
# cat /etc/opt/SUNWexplo/default/explorer|grep EXP_DEF_VERSION

EXP_DEF_VERSION="6.8"

Alternatively, if you want to get it in a script:

VER=$(nawk -F= '/EXP_DEF_VERSION/ {print $NF}' /etc/opt/SUNWexplo/default/explorer | sed 's/"//g')
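nawk is Solaris-specific, but the same extraction works with standard awk and can be tried on a sample line (the file itself exists only on a host with Explorer installed):

```shell
# Pull the version number out of an EXP_DEF_VERSION line.
line='EXP_DEF_VERSION="6.8"'
VER=$(echo "$line" | awk -F= '/EXP_DEF_VERSION/ {print $NF}' | sed 's/"//g')
echo "$VER"   # prints: 6.8
```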

There are a number of workarounds…
—————————————————————–
1) downgrade explorer to 6.7

2) comment out line 674 ('get_cmd "/usr/sbin/pcitool -v" sysconfig/pcitool-v') by adding '#' to the beginning of it

3) make /usr/sbin/pcitool not executable on these systems (either remove it or change permissions)

PS:

For solaris explorer related concepts/download etc, please refer to http://blogs.oracle.com/PJ/entry/solaris_10_explorer_data_collection

relationship between dmx srdf bcv

October 24th, 2011 No comments

The R1 BCV is related to the R1 DEV; the R1 is SRDF’d to the R2; and the R2 is related to the R2 BCV. There is no direct relationship between the R1 DEV and the R2 BCV.
The symdgs, as you know, are contained in a local host-based config database. They group together the devs you want to run commands on at the same time, so you can fail them over as a group or sync/split the BCVs as a group without having to list all the devs.

DMX_SRDF_SAN_VCS

R1BCV_R1DEV_R2BCV_R2DEV

Because you only talk to the local array, you specify the DEVs in your symdg as the ones that are local to you, via symld. When symdgs are first created they are type R1 or R2, depending on whether you’re going to add R1 or R2 devices. This will automatically update when you fail over. The devices you add to your symdg are always the ones on your local array. The RDF relationship does not need to be specified in the symdg; it is inherent to the device, since it is a one-to-one relationship, so when you run a symrdf query it will find the paired device from the array.
BCVs are different, because you can have more than one BCV attached to a device.
The -rdf flag in the symbcv command says the BCV device <DEV> is remote, i.e. it is a BCV device on the remote array.
When you’re running your commands, picture yourself (well, you don’t have to, but I do) standing locally on the host you’re running the commands on, and think about which devs appear local from there and which ones are remote.

Categories: Storage Tags:

Three types of 301 rewrite/redirect for apache httpd server

October 4th, 2011 No comments

Type 1:Exact URL without any query parameters:
www.doxer.org/portal/site/doxerorg/mydoxer/anyone
Type 2:Exact URL with any query parameters
www.doxer.org/portal/site/doxerorg/mydoxer/anyone?n=
Type 3:Exact URL plus query parameters and/or sub pages
www.doxer.org/portal/site/doxerorg/mydoxer/anyone plus everything after it

For Type 1:
RewriteRule ^/portal/site/doxerorg/mydoxer/anyone$ http://www.doxer.org/shop/pc/anyone? [L,R=301,NC]
For Type 2:
RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^/portal/site/doxerorg/mydoxer/mydoxer/anyone/register$ http://www.doxer.org/shop/pc/anyone? [R=301,L,NC]
For Type 3:
RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^/portal/site/doxerorg/mydoxer/article http://www.doxer.org/mydoxer/latestnews? [R=301,L,NC]

Note:
1.In the destination URL, add ‘?’ to the end if you don’t want the query string to be auto-appended to the destination URL
2.[R] flag specifies a redirect, instead of the usual rewrite, [L] makes this the last rewrite rule to apply, [NC] means case insensitive

Categories: IT Architecture Tags:

Error extras/ata_id/ata_id.c:42:23: fatal error: linux/bsg.h: No such file or directory when compile LFS udev-166

September 21st, 2011 1 comment

For “Linux From Scratch – Version 6.8”, in part “6.60. Udev-166”, after configuring udev we need to compile the package using make, but then I got an error message like this:

“extras/ata_id/ata_id.c:42:23: fatal error: linux/bsg.h: No such file or directory”

Checking line 42 of ata_id.c under extras/ata_id of the udev-166 source, I can see:

“#include <linux/bsg.h>”

As there’s no bsg.h under $LFS/usr/include/linux, I was sure this error was caused by a missing C header file. Checking with:
root:/sources/udev-166# /lib/libc.so.6
I can see that Glibc was 2.13 and GCC was 4.5.2:


GNU C Library stable release version 2.13, by Roland McGrath et al.
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 4.5.2.
Compiled on a Linux 2.6.22 system on 2011-09-04.
Available extensions:
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
BIND-8.2.3-T5B
libc ABIs: UNIQUE IFUNC
For bug reporting instructions, please see:
<http://www.gnu.org/software/libc/bugs.html>.

After some searching on Google, I concluded that this was caused by the GCC version. I was not going to rebuild gcc for this, so I fetched the header file and put it under $LFS/usr/include/linux/bsg.h. You can go to http://lxr.free-electrons.com/source/include/linux/bsg.h?v=2.6.25 to download the header file.
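Before fetching anything, it is worth confirming the header really is absent with a plain file test; a sketch (the $LFS path is from the article, the /usr/include fallback is an assumption for non-LFS systems):

```shell
# Look for linux/bsg.h in the include directories we care about.
found=""
for dir in "${LFS:-}/usr/include" /usr/include; do
  if [ -z "$found" ] && [ -f "$dir/linux/bsg.h" ]; then
    found="$dir/linux/bsg.h"
  fi
done
if [ -n "$found" ]; then
  echo "found: $found"
else
  echo "linux/bsg.h missing - fetch it and install it manually"
fi
```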

After copy & paste & chmod, I ran make again, and it succeeded.

Categories: Kernel, Linux Tags:

Relationship between san’s HA DA FA RA WWN ZONE

September 15th, 2011 2 comments

Here’s the image (thanks to Peter):

 

EMC Symmetrix connectivity to a host in a switched fabric logical and physical architecture

Here’s another image of SAN architecture of EMC symmetrix arrays:

Or you may want to download the doc one:
Logicalandphysicalsanconnectivity.doc

Categories: Hardware, Storage Tags:

Want your ldap password never expired? Here goes the howto

September 6th, 2011 2 comments

First, let’s check when your password will expire using ldapsearch:
root on testserver:/tmp # ldapsearch -D cn="Directory Manager" -h ldap.testserver.com -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime
Enter bind password:
version: 1
dn: uid=liandy,ou=people,dc=testserver,dc=com
passwordexpirationtime: 20111005230540Z

ldapsearch -D "cn=Directory Manager" -h ldap.testserver.com -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime #this should also work
Now, let’s create a file named passwd.dn with content:
dn: uid=liandy,ou=people,dc=testserver,dc=com
changetype: modify
replace: passwordexpirationtime
passwordexpirationtime: 20120612135450Z
And the last step is to change the expiration time using ldapmodify:
root on testserver:/tmp # ldapmodify -D cn="Directory Manager" -h ldap.testserver.com -f passwd.dn
Enter bind password:
modifying entry uid=liandy,ou=people,dc=testserver,dc=com
Those are all the steps needed to change the LDAP password expiration time. To verify it has taken effect, run ldapsearch to show the expiration time, just as in the first step:
root on testserver:/tmp # ldapsearch -D cn="Directory Manager" -h ldap.testserver.com -b ou=people,dc=testserver,dc=com uid=liandy passwordexpirationtime
Enter bind password:
version: 1
dn: uid=liandy,ou=people,dc=testserver,dc=com
passwordexpirationtime: 20120612135450Z
So you can see the LDAP expiration time has been extended to 20120612.
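The passwordexpirationtime value is an LDAP GeneralizedTime string, YYYYMMDDHHMMSSZ in UTC. A sketch generating one a year ahead for the LDIF file, assuming GNU date (the -d relative-date syntax is a GNU extension):

```shell
# Build a GeneralizedTime one year from now.
exp=$(date -u -d "+1 year" +%Y%m%d%H%M%SZ)
echo "passwordexpirationtime: $exp"
```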
NB:
If you want to change password for liandy on ldap.testserver.com, do the following:
Create a file named passwd2.dn with content:
dn: uid=liandy,ou=people,dc=testserver,dc=com
changetype: modify
replace: userPassword
userPassword: EnterYourPassword

Then run ldapmodify to modify the password:
ldapmodify -D cn="Directory Manager" -h ldap.testserver.com -f passwd2.dn

If you want to modify directory manager’s password, here goes the step:
Create a file passwd3.dn with content:
dn: cn=config
changetype: modify
replace: nsslapd-rootpw
nsslapd-rootpw: EnterYourPassword

Then run ldapmodify to change the password:
ldapmodify -D "cn=directory manager" -h ldap.testserver.com -f passwd3.dn

If you forget the password for the directory manager, you first need to find dse.ldif under ldap/slapd-Portal1/config, then encrypt your password and modify nsslapd-rootpw.

Categories: Linux, Unix Tags:

hp openview monitoring solution – enable disable policies

August 18th, 2011 2 comments

This document is part of the “ESM – How To” guides and gives instructions on how to disable and enable monitoring policies on servers managed by OpenView for Unix (OVOU), OpenView for Windows (OVOW) and Opsview, for blackout or maintenance purposes.

The process of disabling/enabling monitoring may differ depending on what solution is used to monitor the servers involved in the DR test.

This document lists procedures to disable and enable monitoring in the different monitoring solutions.

1. Monitoring changes to servers monitored in OVOU (Unix/Linux)
1.1. Disabling monitoring templates
Disable the monitoring templates on the monitored server.
Log on to the server and execute the following command as the root user:
/opt/OV/bin/opctemplate -d -all

The above command will disable all monitoring policies on the server.

Now disable OpenView agent heartbeat polling from the OpenView servers (server side):
Execute the following command to disable OVO agent heartbeat polling:
/opt/OV/bin/OpC/opchbp -stop <name of server>

1.2. Enabling monitoring templates
Enable the monitoring templates on the monitored server.
Log on to the server and execute the following command as the root user:
/opt/OV/bin/opctemplate -e -all

The above command will enable all monitoring policies on the server.

Now enable OpenView agent heartbeat polling from the OpenView servers (server side):
Execute the following command to enable OVO agent heartbeat polling:
/opt/OV/bin/OpC/opchbp -start <name of server>

2. Disabling monitoring for servers monitored in OVOW (Windows)
First, access the OV web console:

http://yourserver/OVOWeb/

2.1 Disabling policies
Select the Policies icon in the left-hand panel, then expand the nodes by clicking the plus sign next to them in the center panel. Once you have located the server on which you want to disable policies, select it, and the policy inventory is shown in the right-hand panel. Select the policies to be disabled using the check boxes next to the policy names, or select the check box next to the Name header to select all policies. Then select the Disable tab and the policies will be disabled; the display will show them as active and pending until all the policies have been disabled. Use the Refresh tab to update the screen.

2.2 Enabling policies
The procedure is the same as in 2.1 Disabling policies, using the Enable tab instead.

3. Monitoring changes for servers monitored in Opsview (Nagios)
Log on to Opsview through the web interface, search for and locate the host you want to operate on, click the grey arrow on the right side of the node and choose “Schedule downtime”. You can then choose the start and end times of the downtime for this server.

Categories: HA & HPC Tags:

obu firmware patching for sun T2000/T5120/T5220 servers OBP patching

August 17th, 2011 No comments

I don’t know why there are some formatting errors in this article; you can download the PDF version here: obu firmware patching for sun T2000-T5120-T5220 servers OBP patching

You can download Sun_System_Firmware-6_7_11-Sun_Fire_T2000 from support.oracle.com or here (for the Sun Fire T2000).

Categories: Servers Tags:

hpux san storage configure howto steps for veritas vxvm

August 15th, 2011 No comments

1/ Run ioscan -fnkC disk and dump the output to a file

2/ Run ioscan with no options and dump the output to a file

3/ Run ioscan -fnkC disk and check for disk devices that are not claimed

4/ Run insf -ev to allocate the new entries under /dev/dsk/xxxxx

5/ Use vxdisksetup / vxdiskadm to set up the disks

6/ Run vxdctl enable

7/ In VxVM, add the disks to a dg, etc.

Categories: Hardware, Storage Tags:

check solaris 10/09 version info – update 6/7/8/9

August 4th, 2011 No comments

Using uname -a you can get the basic information currently available from the solaris system. For example, on my server:
root@beisoltest02 / # uname -a
SunOS beisoltest02 5.10 Generic_144489-12 i86pc i386 i86pc
However, sometimes you need to check the “update version” of Solaris. For example, per the Oracle documentation, your machine should have Solaris 10 update 6 or higher if you want to install Oracle 11g on your Solaris host. So how can we check the “update version” of Solaris?

Step 1:

#cat /etc/release
Solaris 10 10/08 s10x_u6wos_07b X86 #it’s update 6!
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008
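The update number is encoded in the s10x_uNwos token, so it can be pulled out with sed; a sketch run against the sample line above:

```shell
# Extract the update number from a /etc/release line.
rel='Solaris 10 10/08 s10x_u6wos_07b X86'
update=$(echo "$rel" | sed -n 's/.*_u\([0-9][0-9]*\)wos.*/\1/p')
echo "update $update"   # prints: update 6
```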
Step 2:(more detailed info)

Compare with the Solaris version history below:

Notable features of Solaris currently include DTrace, Doors, Service Management Facility, Solaris Containers, Solaris Multiplexed I/O, Solaris Volume Manager, ZFS, and Solaris Trusted Extensions.

Updates to Solaris versions are periodically released, such as Solaris 10 10/09.

In ascending order, the following versions of Solaris have been released:

In ascending order (Solaris version; SunOS version; release date; end of support;[39] major new features):

1.x (SunOS 4.1.x; 1991–1994; support ended September 2003): SunOS 4 rebranded as Solaris 1 for marketing purposes. See the SunOS article for more information.
2.0 (SunOS 5.0; June 1992; support ended January 1999): Preliminary release (primarily available to developers only), support for only the sun4c architecture. First appearance of NIS+.[40]
2.1 (SunOS 5.1; December 1992 on SPARC, May 1993 on x86; support ended April 1999): Support for the sun4 and sun4m architectures added; first Solaris x86 release. First Solaris 2 release to support SMP.
2.2 (SunOS 5.2; May 1993; support ended May 1999): SPARC-only release. First to support the sun4d architecture. First to support multithreading libraries (UI threads API in libthread).[41]
2.3 (SunOS 5.3; November 1993; support ended June 2002): SPARC-only release. OpenWindows 3.3 switches from NeWS to Display PostScript and drops SunView support. Support added for the autofs and CacheFS filesystems.
2.4 (SunOS 5.4; November 1994; support ended September 2003): First unified SPARC/x86 release. Includes OSF/Motif runtime support.
2.5 (SunOS 5.5; November 1995; support ended December 2003): First to support UltraSPARC and include CDE, NFSv3 and NFS/TCP. Dropped sun4 (VMEbus) support. POSIX.1c-1995 pthreads added. Doors added but undocumented.[42]
2.5.1 (SunOS 5.5.1; May 1996; support ended September 2005): Only release to support the PowerPC platform; Ultra Enterprise support added; user and group IDs (uid_t, gid_t) expanded to 32 bits;[43] also included processor sets[44] and early resource management technologies.
2.6 (SunOS 5.6; July 1997; support ended July 2006): Includes Kerberos 5, PAM, TrueType fonts, WebNFS, large file support, enhanced procfs. SPARCserver 600MP series support dropped.[45]
7 (SunOS 5.7; November 1998; support ended August 2008): The first 64-bit UltraSPARC release. Added native support for file system metadata logging (UFS logging). Dropped MCA support on the x86 platform. Last update was Solaris 7 11/99.[46]
8 (SunOS 5.8; February 2000; supported until March 2012): Includes Multipath I/O, Solaris Volume Manager,[47] IPMP, first support for IPv6 and IPsec (manual keying only), and the mdb modular debugger. Introduced Role-Based Access Control (RBAC); sun4c support removed. Last update is Solaris 8 2/04.[48]
9 (SunOS 5.9; May 28, 2002 on SPARC, January 10, 2003 on x86; supported until October 2014): iPlanet Directory Server, Resource Manager, extended file attributes, IKE IPsec keying, and Linux compatibility added; OpenWindows dropped, sun4d support removed. Most current update is Solaris 9 9/05.
10 (SunOS 5.10; January 31, 2005; still supported): Includes x86-64 (AMD64/Intel 64) support, DTrace (Dynamic Tracing), Solaris Containers, Service Management Facility (SMF) which replaces init.d scripts, NFSv4, and a least-privilege security model. Support for sun4m and UltraSPARC I processors removed. Support for EISA-based PCs removed. Adds the Java Desktop System (based on GNOME) as the default desktop.[49]

  • Solaris 10 1/06 (known internally as “U1”) added the GRUB bootloader for x86 systems, iSCSI initiator support and the fcinfo command-line tool.
  • Solaris 10 6/06 (“U2”) added the ZFS filesystem.
  • Solaris 10 11/06 (“U3”) added Solaris Trusted Extensions and Logical Domains.
  • Solaris 10 8/07 (“U4”) added Samba Active Directory support,[50] IP Instances (part of the OpenSolaris Network Virtualization and Resource Control project), iSCSI target support and Solaris Containers for Linux Applications (based on branded zones), plus an enhanced version of the resource capping daemon (rcapd).
  • Solaris 10 5/08 (“U5”) added CPU capping for Solaris Containers, performance improvements, SpeedStep support for Intel processors and PowerNow! support for AMD processors.[51][52]
  • Solaris 10 10/08 (“U6”) added boot from ZFS and can use ZFS as its root file system. Solaris 10 10/08 also includes virtualization enhancements, including the ability for a Solaris Container to automatically update its environment when moved from one system to another, Logical Domains support for dynamically reconfigurable disk and network I/O, and paravirtualization support when Solaris 10 is used as a guest OS in Xen-based environments such as Sun xVM Server.[53]
  • Solaris 10 5/09 (“U7”) added performance and power management support for Intel Nehalem processors, container cloning using ZFS cloned file systems, and performance enhancements for ZFS on solid-state drives.
  • Solaris 10 10/09 (“U8”) added user- and group-level ZFS quotas, ZFS cache devices, nss_ldap shadowAccount support, and improvements to patching performance.[54]
  • Solaris 10 9/10 (“U9”) added physical-to-zone migration, ZFS triple-parity RAID-Z and Oracle Solaris Auto Registration.[55]

11 Express 2010.11 (SunOS 5.11; November 15, 2010): Adds a new packaging system (IPS, the Image Packaging System) and associated tools, Solaris 10 Containers, network virtualization and QoS, virtual consoles, ZFS encryption and deduplication, and an updated GNOME. Removes Xsun and CDE.[56]

A more comprehensive summary of some Solaris versions is also available.[58] Solaris releases are also described in the Solaris 2 FAQ.[59]

Emcgrab download url

July 27th, 2011 1 comment

Just go to ftp://ftp.emc.com/pub/emcgrab/Unix/ to select yours and download it (Solaris, Linux, AIX, etc.). The latest version is emcgrab-4.4.4.

PS:

emcgrab is also known as the Remediation HEAT report. The EMC grab report generates a file that EMC or the storage team might use for diagnostics and for monitoring storage usage by the client and possible issues; it also dumps logs/config into this file. This procedure has proved to be safe, but it needs to be done out of hours if possible.

Categories: Servers Tags:

correctable ecc event detected by cpu0/1/3/4

July 20th, 2011 No comments

If you receive these kinds of alerts, it means your server has a memory DIMM issue. Please check with hrdconf/madmin (HP-UX) or prtconf (Sun Solaris) to see the error message.

For more information about ECC memory, you can refer to the following article: http://en.wikipedia.org/wiki/ECC_memory

Categories: Kernel, Unix Tags:

convert squid access.log time to human-readable format

July 13th, 2011 No comments

If you look at the squid access.log, you’ll find that its first column is the time, for example:
1310313302.640 829 58.215.75.51 TCP_MISS/200 104 CONNECT smtp.126.com:25 - DIRECT/123.125.50.112 -
1310313303.484 1845 58.215.75.51 TCP_MISS/200 104 CONNECT smtp.126.com:25 - DIRECT/123.125.50.111 -
This is not very human readable. Now we can use perl to convert squid access.log time to human-readable format, just one line command:

cat /var/log/squid/access.log | perl -nwe 's/^(\d+)/localtime($1)/e; print'

After this, it’ll look like this:
Sun Jul 10 09:55:03 2011.484 1845 58.215.75.51 TCP_MISS/200 104 CONNECT smtp.126.com:25 - DIRECT/123.125.50.111 -
Sun Jul 10 09:55:07 2011.146 355 88.80.10.1 TCP_MISS/200 1176 GET http://spam-chaos.com/pp/set-cookie.php - DIRECT/88.80.10.1 text/html
Hey, more human-readable, right?

Categories: Linux Tags:

Resolved – ld.so.1: httpd: fatal: libaprutil-0.so.0: open failed: No such file or directory

July 9th, 2011 No comments

When I tried to start up the IBM HTTP Server on my Solaris box, I got this error message:
ld.so.1: httpd: fatal: libaprutil-0.so.0: open failed: No such file or directory
Killed
This error occurs because the library libaprutil-0.so.0 cannot be found via the current library path environment variable. We should add /apps/IBMHTTPD/Installs/IHS61-01/lib to LD_LIBRARY_PATH so that libaprutil-0.so.0 can be found by the runtime linker.
Here goes the resolution:
#export LD_LIBRARY_PATH=/apps/IBMHTTPD/Installs/IHS61-01/lib:$LD_LIBRARY_PATH #set environment variable
#/apps/IBMHTTPD/Installs/IHS61-01/bin/httpd -d /apps/IBMHTTPD/Installs/IHS61-01 -f /apps/IBMHTTPD/Installs/IHS61-01/conf/ihs.conf -k restart #restart IHS
#/usr/ucb/ps auxww|grep -i ibmhttpd #check the result
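One detail worth noting: prepending with a ${VAR:+...} guard avoids a dangling colon (an empty entry, which the runtime linker treats as the current directory) when LD_LIBRARY_PATH was previously unset. A sketch using the path from the article:

```shell
# Prepend the IHS lib dir; the ${VAR:+...} guard only adds the
# ":" separator when LD_LIBRARY_PATH already has a value.
newdir=/apps/IBMHTTPD/Installs/IHS61-01/lib
LD_LIBRARY_PATH="$newdir${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```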

PS:

Here’s more about ld.so (from the book Optimizing Linux Performance: A Hands-On Guide to Linux Performance Tools):

When a dynamically linked application is executed, the Linux loader, ld.so, runs first. ld.so loads all the application’s libraries and connects symbols that the application uses with the functions the libraries provide. Because different libraries were originally linked at different and possibly overlapping places in memory, the linker needs to sort through all the symbols and make sure that each lives at a different place in memory. When a symbol is moved from one virtual address to another, this is called a relocation. It takes time for the loader to do this, and it is much better if it does not need to be done at all. The prelink application aims to do that by rearranging the system libraries of the entire system so that they do not overlap. An application with a high number of relocations may not have been prelinked.

The Linux loader usually runs without any intervention from the user, and by just executing a dynamic program, it is run automatically. Although the execution of the loader is hidden from the user, it still takes time to run and can potentially slow down an application’s startup time. When you ask for loader statistics, the loader shows the amount of work it is doing and enables you to figure out whether it is a bottleneck.

The ld.so loader is invisibly run for every Linux application that uses shared libraries. By setting the appropriate environment variables, we can ask it to dump information about its execution. The following invocation influences its execution:
env LD_DEBUG=statistics,help LD_DEBUG_OUTPUT=filename <command>
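For example, on a glibc system you can first list the categories the loader accepts, then collect relocation statistics for a trivial command (output goes to stderr, or to per-PID files when LD_DEBUG_OUTPUT is set):

```shell
# Show the LD_DEBUG categories glibc's loader understands ("statistics" among them).
LD_DEBUG=help /bin/true 2>&1

# Collect relocation statistics for one command; with LD_DEBUG_OUTPUT set,
# the report lands in /tmp/ldstats.<pid> instead of stderr.
env LD_DEBUG=statistics LD_DEBUG_OUTPUT=/tmp/ldstats /bin/true
```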

Backup by oracle on client xxx using policy ORACLE_PROD_DAILY_DB

July 6th, 2011 No comments

Sometimes you get an error that a NetBackup job hasn’t completed successfully, and what’s weird is that no daily backup logs were created. Actually, at that same time, the system log will show that xinetd is rejecting the connection:

xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4

Problem

GENERAL ERROR: Network errors during Oracle RMAN backups when using xinetd resulting in status 6, status 23, status 25, status 41, or status 58

Error

ORA-27028: skgfqcre: sbtbackup returned error

Solution

Overview: The xinetd process receives inbound socket connections from remote hosts and may reject the connections without starting the expected NetBackup process if there have recently been too many connections from the remote host.

Troubleshooting:

The problem may have many different NetBackup symptoms depending on the connecting process, target process, and connection method.  But in all cases, the destination process will not be started and will not create any debug log entries at the time of the expected connection.  At that same time, the system log will show that xinetd is rejecting the connection, e.g.

xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4

Log Files:

This bpbrm log shows a successful connection to bpcd followed by a forwarding socket request, both reach the client host and receive a file descriptor (fd).  However, xinetd closed the forwarding socket immediately as indicated by the errno 232.

08:20:38.890 [5871] <2> logconnections: BPCD CONNECT FROM 1.2.3.4.54105 TO 2.3.4.5.13724
...snip...
08:20:38.891 [5871] <2> vnet_connect_to_vnetd_extra: vnet_vnetd.c.179: msg: VNETD CONNECT FROM 1.2.3.4.54106 TO 2.3.4.5.13724 fd = 10
08:20:38.892 [5871] <2> vnet_pop_byte: vnet.c.184: errno: 232 0x000000e8
08:20:38.892 [5871] <2> vnet_pop_byte: vnet.c.185: Function failed: 43 0x0000002b
08:20:38.893 [5871] <2> vnet_pop_string: vnet.c.266: Function failed: 43 0x0000002b
08:20:38.893 [5871] <2> vnet_pop_signed: vnet.c.310: Function failed: 43 0x0000002b
08:20:38.893 [5871] <2> version_connect: vnet_vnetd.c.1812: Function failed: 43 0x0000002b
...snipped three retries from bpbrm to vnetd, all of them closed by xinetd...
08:20:41.943 [5871] <2> bpcr_vnetd_connect_forward_socket_begin: nb_vnetd_connect(2.3.4.5) failed: 25
08:20:41.943 [5871] <2> local_bpcr_connect: bpcr_vnetd_connect_forward_socket_begin failed: 25
08:20:41.943 [5871] <2> ConnectToBPCD: bpcd_connect_and_verify(myhost, myhost) failed: 25
08:20:41.944 [5871] <16> bpbrm start_bpcd_stat: cannot connect to myhost, Connection reset by peer (232)

The bpcd log shows a failure waiting for bpbrm to send the forwarding socket information.

08:20:38.912 [24709] <2> logconnections: BPCD ACCEPT FROM 1.2.3.4.54105 TO 2.3.4.5.13724
...snip...
08:20:39.002 [24709] <2> bpcd main: output socket port number = 1
08:20:41.942 [24709] <2> get_long: (2) premature end of file (byte 1)
08:20:41.942 [24709] <2> get_vnetd_forward_socket: get_string ipc_string failed: 5
08:20:41.942 [24709] <16> bpcd main: get_vnetd_forward_socket failed: 23

The client vnetd log does not show any activity at the time of the forwarding socket requests.

08:20:38.889 [24709] <2> launch_command: vnetd.c.2125: path: /usr/openv/netbackup/bin/bpcd
...no entries during this time span...
08:20:42.227 [24672] <2> vnet_send_network_socket: vnet_vnetd.c.1535: hash_str2: 7f9786a02bd04ad3f585c06892bd74c1

The system messages log shows that xinetd rejected the 4 vnetd connections.

Jul 16 08:20:38 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4
Jul 16 08:20:39 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4
Jul 16 08:20:40 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4
Jul 16 08:20:41 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4

Another example: a bpbrm connection attempt to bpcd to perform a comm file update fails.

<16> bpbrm send_info_via_progress_file: cannot connect to <clientname>, Operation now in progress (150)

Resolution:

This situation can be resolved by adjusting the xinetd configuration to permit sufficient per_source connections for NetBackup operations.  On some operating system platforms, the default per_source value can be as low as 1-10.  A NetBackup master/media server may rapidly make more than this many connections to the client host, depending on the type of backup and connection method; up to 4 sockets per concurrent Automatic backup job and 9 sockets per concurrent Application backup job.

Review the xinetd configuration documentation and make appropriate changes for the platform and expected number of simultaneous NetBackup operations.  After making the changes, send a SIGHUP to the xinetd process to force a read of the updated configuration.

If appropriate, the default xinetd settings for all services can be changed in the /etc/xinetd.conf file, e.g.

defaults
{
per_source = <new connection limit or UNLIMITED>
}

Alternatively, the xinetd configuration settings for specific services can be changed in service-specific files, e.g. /etc/xinetd.d/vnetd could contain the following setting.

service vnetd
{
per_source = <new connection limit or UNLIMITED>
}

The NetBackup services that are most likely to be affected are vnetd and bpcd.

Plan to minimally set per_source to:
(number of channels or concurrent backup streams) × 9 (using the higher level, required for Application backups) = the per_source minimum setting.

Example: Given a client running Oracle RMAN Application type backups (requiring up to 9 concurrent connections), with 8 channels, the minimum setting would be 72 for per_source.
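That arithmetic is trivial to script when you manage many clients; a quick check using this article's example figures (the channel count is whatever your RMAN configuration uses, 8 here):

```shell
# Minimum per_source for an RMAN Application backup client:
# sockets_per_job is 9 for Application backups (vs. 4 for Automatic backups).
channels=8
sockets_per_job=9
echo $(( channels * sockets_per_job ))   # minimum per_source setting
```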

Legacy ID

300810

From: http://www.symantec.com/business/support/index?page=content&id=TECH58196

Categories: Databases Tags:

How to check BerkeleyDB version info

June 22nd, 2011 No comments

How to check BerkeleyDB version info? Just run this command:

#grep DB_VERSION_STRING /usr/include/db4/db.h

#define DB_VERSION_STRING       "Sleepycat Software: Berkeley DB 4.3.29: (September 19, 2009)"

Then your BerkeleyDB version is 4.3.29.

If there’s no /usr/include/db4 directory, you may check to see whether there’s /usr/include/db3 or even /usr/include/db2 directory in your server. And run the respective command with db4 substituted by db3 or db2.
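The probing over db4/db3/db2 can be wrapped in a tiny shell function; a sketch (the function name is mine, not a standard tool):

```shell
# Scan candidate include directories for db.h and print its version string.
find_db_version() {
  for d in "$@"; do
    if [ -f "$d/db.h" ]; then
      grep DB_VERSION_STRING "$d/db.h"
      return 0
    fi
  done
  echo "no db.h found" >&2
  return 1
}

# Typical invocation:
# find_db_version /usr/include/db4 /usr/include/db3 /usr/include/db2
```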

mkfile/dd to create a file with specific size

June 21st, 2011 3 comments

Now assume you want to create a file with 10M space:
Under Solaris:
root@beisoltest02 / # mkfile 10m disk1.img
root@beisoltest02 / # ls -lh disk1.img
-rw------T   1 root     root         10M Jun 22 00:41 disk1.img

Under Linux:
[root@beivcs02 downloads]#  dd if=/dev/zero of=disk1.img bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.01096 seconds, 957 MB/s
[root@beivcs02 downloads]# ls -lh disk1.img
-rw-r--r-- 1 root root 10M Jun 22 00:37 disk1.img
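A related trick, in case you want the size without writing all the data: dd can seek past the end and write nothing, producing a sparse file (ls reports 10M while du reports almost zero):

```shell
# Create a sparse 10 MB file: seek 10 blocks of 1 MB, write 0 blocks.
dd if=/dev/zero of=sparse.img bs=1024k seek=10 count=0
ls -lh sparse.img   # apparent size: 10M
du -h sparse.img    # allocated blocks: ~0
```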

Categories: Linux Tags:

Nagios Check_nrpe/check_disk with error message DISK CRITICAL – /apps/yourxxx is not accessible: Permission denied

June 21st, 2011 1 comment

This is sometimes because the user that nagios runs as has no read permission on the file systems check_disk is going to check.

For example, if you received alert:

DISK CRITICAL – /apps/vcc/logs/way is not accessible: Permission denied

You can then log on to your server and, as root, run:

/usr/local/nagios/libexec/check_disk -p /apps/vcc/logs/way

You may see:

DISK OK – free space: /apps/vcc/logs/way 823 MB (21% inode=90%);| /=2938MB;;3966;0;3966

But when you run this under user nagios, you may see DISK CRITICAL again.

Resolution:

Grant the nagios user read permission on the file system/directory that has the problem.

 

DNS flush on linux/windows/mac

June 17th, 2011 No comments
Here’re ways to flush DNS cache on linux/windows/mac:

On Linux:

rndc flush

or

/etc/init.d/nscd restart

On Windows:

ipconfig /flushdns

On Mac:

sudo dscacheutil -flushcache

If you use Firefox, you can also install a DNS-cache plug-in to flush the DNS cache for you automatically.

     

    Categories: Linux Tags:

    How to resize/shrink lvm home/root partition/filesystem dynamically

    June 8th, 2011 2 comments

When you’re trying to extend a file system in an LVM environment, here are the steps:

    1.Extend the root volume:

    #lvextend -L500G /dev/VolGroup00/apps-yourapp

    2.Grow file system:

    #resize2fs /dev/VolGroup00/apps-yourapp 500G

    Shrinking can’t be done that easily and requires an umount. To shrink the FS, do the following:

    umount <logical-volume-device>
    e2fsck -f <logical-volume-device>
    resize2fs <logical-volume-device> <new-size-minus-50MB>
    lvreduce -L <new-size> <logical-volume-device>
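A concrete instantiation of the steps above, assuming a hypothetical volume /dev/VolGroup00/LogVol01 being shrunk to 100G. These commands are destructive if the sizes are wrong, so don't run them blindly; the final resize2fs (a common extra step) grows the file system back to fill the reduced LV exactly:

```shell
umount /dev/VolGroup00/LogVol01
e2fsck -f /dev/VolGroup00/LogVol01
resize2fs /dev/VolGroup00/LogVol01 95G     # shrink the FS below the target first
lvreduce -L 100G /dev/VolGroup00/LogVol01  # then reduce the LV to the target size
resize2fs /dev/VolGroup00/LogVol01         # grow the FS back to fill the LV exactly
```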

This procedure is for normal (non-root) file systems. But what should we do when we want to shrink the root or home partition/file system?

You need to boot into Linux rescue mode from a bootable device (such as the first CD of your Linux distro): type linux rescue at the boot prompt, and when an alert pops up asking whether to mount your system under /mnt/sysimage, select Skip.

    When you’re in the shell of rescue mode, do the following steps:

    1.lvm vgchange -a y

    2.e2fsck -f /dev/VolGroup00/LogVol00

    3.resize2fs -f /dev/VolGroup00/LogVol00 500G

    4.lvm lvreduce -L500G /dev/VolGroup00/LogVol00

Then reboot your system. After it starts up, run df -h to check the result, and use vgs to see VSize and VFree.

    Enjoy.

     

     

    Categories: Storage Tags:

    Interesting tty/pty/pts experiment

    June 8th, 2011 No comments

Now, open two tabs (sessions) to the same server using Xshell, and run tty in each:

    root@testserver# tty

    /dev/pts/7

    root@testserver# tty

    /dev/pts/25

First experiment – echo a string between ttys

    From /dev/pts/7, run:

    root@testserver# echo hi>/dev/pts/25

Then, from the other tab (session), you’re sure to see:

root@testserver# hi

    To be continued…

    Categories: Linux Tags: ,

    how to mount a file system within RVG volume

    June 8th, 2011 No comments

This is a summary/excerpt of the original article http://www.symantec.com/business/support/index?page=content&id=TECH15207&key=15278&actp=LIST.

    Here goes the sample configuration:

[Figure: mounting a file system within VVR]

If you encounter an error message like the following when you try to mount a file system within VVR:

    # mount -F vxfs /dev/vx/dsk/srvmdg/repvol1 /repvol1
    vxfs mount: read of super-block on /dev/vx/dsk/srvmdg/repvol1 failed: No such device or address
    or
    # mount -F vxfs -o ro /dev/vx/dsk/srvmdg/repvol1 /repvol1
    vxfs mount: read of super-block on /dev/vx/dsk/srvmdg/repvol1 failed: No such device or address

    Then, what you should do is:

    To mount a file system on its corresponding RVG volume, it must meet the following criteria:
    - The RVG must be ENABLED
    - The RVG volume must be ENABLED
    - The file system must be mounted as read-only if on the secondary
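To check those criteria, the VxVM utilities can be used; a sketch with this article's disk group and volume names (command availability depends on your Storage Foundation install):

```shell
# Show the state of the RVG and its volumes; both should be ENABLED.
vxprint -g srvmdg -ht

# On the secondary, the file system must then be mounted read-only:
mount -F vxfs -o ro /dev/vx/dsk/srvmdg/repvol1 /repvol1
```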
    Categories: Storage Tags: ,