Archive

Author Archive

resolved – yum Error performing checksum Trying other mirror and finally No more mirrors to try

August 27th, 2015 No comments

Today while installing a package on Linux, the below error appeared:

[root@testhost yum.repos.d]# yum list --disablerepo=* --enablerepo=yumpaas
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
yumpaas | 2.9 kB 00:00
yumpaas/primary_db | 30 kB 00:00
http://yumrepo.example.com/paas_oel5/repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2: [Errno -3] Error performing checksum
Trying other mirror.
yumpaas/primary_db | 30 kB 00:00
http://yumrepo.example.com/paas_oel5/repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2: [Errno -3] Error performing checksum
Trying other mirror.
Error: failure: repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2 from yumpaas: [Errno 256] No more mirrors to try.

The repo "yumpaas" is hosted on an OEL 6 VM, where createrepo uses SHA-256 checksums by default. However, on OEL 5 VMs (the VMs running yum), yum only understands SHA-1 by default. So I worked around this by installing python-hashlib (from the external EPEL repo), which extends yum's ability to handle SHA-256:

[root@testhost yum.repos.d]# yum install python-hashlib

And after this, the problematic repo can be used. But to resolve the issue permanently, without needing the workaround on OEL 5 VMs, we should recreate the repo with SHA-1 as the checksum algorithm (createrepo -s sha1).
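
For example, on the host serving the repo (the directory below is hypothetical; point createrepo at the directory that holds the packages and repodata), then refresh the metadata on the clients:

createrepo -s sha1 /var/www/html/paas_oel5
yum clean metadata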

resolved – Unable to locally verify the issuer’s authority

August 18th, 2015 No comments

Today we found the below error in a weblogic server log:

[2015-08-16 14:38:21,866] [WARN] [N/A] [testhost.example.com] [Thrown exception when ...... javax.net.ssl.SSLHandshakeException: General SSLEngine problem
......
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
......
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:323)

Then we ran a test on the Linux server:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:15:56-- https://testurl.example.com/
Resolving testurl.example.com... 192.168.77.230
Connecting to testurl.example.com|192.168.77.230|:443... connected.
ERROR: cannot verify testurl.example.com's certificate, issued by `/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4':
Unable to locally verify the issuer's authority.
To connect to testurl.example.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.

And tested using openssl s_client:

[root@testhost ~]# openssl s_client -connect testurl.example.com:443 -showcerts
CONNECTED(00000003)
depth=1 /C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
verify error:num=2:unable to get issuer certificate
issuer= /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify return:0
---
Certificate chain
0 s:/C=US/ST=California/L=Redwood Shores/O=Test Corporation/OU=FOR TESTING PURPOSES ONLY/CN=*.testurl.example.com
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
-----BEGIN CERTIFICATE-----
MIIFOTCCBCGgAwIBAgIQB1z+1jMRPPv/TrBA2qh5YzANBgkqhkiG9w0BAQsFADB+
MQswCQYDVQQGEwJVUzEdMBsGA
......
-----END CERTIFICATE-----
---
Server certificate
subject=/C=US/ST=California/L=Redwood Shores/O=Test Corporation/OU=FOR TESTING PURPOSES ONLY/CN=*.testurl.example.com
issuer=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
---
No client certificate CA names sent
---
SSL handshake has read 1510 bytes and written 447 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : AES128-SHA
Session-ID: 2B5E228E4705C12D7EE7B5C608AE5E3781C5626C1A99A8D60D21DB2350D4F4DE
Session-ID-ctx:
Master-Key: F06632FD67E22EFB1134CDC0EEE8B8B44972747D804D5C7969095FCB2A692E90F111DC4FC6081F4D94C7561BB556DA16
Key-Arg : None
Krb5 Principal: None
Start Time: 1439793731
Timeout : 300 (sec)
Verify return code: 2 (unable to get issuer certificate)
---

From the above, the issue appeared to be caused by a missing CA certificate, as the SSL certificate was relatively new and signed with SHA-2. So I decided to download the CA certificates from Symantec, both "Symantec Class 3 Secure Server CA - G4" and "VeriSign Class 3 Public Primary Certification Authority - G5".

[root@testhost ~]# wget --no-check-certificate 'http://symantec.tbs-certificats.com/SymantecSSG4.crt'

[root@testhost ~]# openssl version -a|grep OPENSSLDIR

OPENSSLDIR: "/etc/pki/tls"

[root@testhost ~]# cp /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak

[root@testhost ~]# openssl x509 -text -in "SymantecSSG4.crt" >> /etc/pki/tls/certs/ca-bundle.crt

Then I tested wget again, but got the same error:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:38:59-- https://testurl.example.com/
Resolving myproxy.example.com... 192.168.19.20
Connecting to myproxy.example.com|192.168.19.20|:80... connected.
ERROR: cannot verify testurl.example.com's certificate, issued by `/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4':
unable to get issuer certificate
To connect to testurl.example.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.

Then I installed the other CA indicated in the error log (the G4 certificate above may not have been needed):

[root@testhost1 ~]# wget --no-check-certificate 'https://www.symantec.com/content/en/us/enterprise/verisign/roots/VeriSign-Class%203-Public-Primary-Certification-Authority-G5.pem'

[root@testhost1 ~]# openssl x509 -text -in "VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem" >> /etc/pki/tls/certs/ca-bundle.crt

This time, wget passed the HTTPS verification:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:40:03-- https://testurl.example.com/
Resolving myproxy.example.com... 192.168.19.20
Connecting to myproxy.example.com|192.168.19.20|:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: 11028 (11K) [text/html]
Saving to: `index.html'

100%[======================================>] 11,028 --.-K/s in 0.02s

2015-08-17 10:40:03 (526 KB/s) - `index.html' saved [11028/11028]
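
You can also confirm the repaired chain without involving wget, reusing the openssl s_client test from earlier with the -CAfile option; with both CAs appended to the bundle, the output should end with "Verify return code: 0 (ok)":

[root@testhost ~]# openssl s_client -connect testurl.example.com:443 -CAfile /etc/pki/tls/certs/ca-bundle.crt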

PS:
You can check more info about adding trusted root certificates to the server here - Mac OS X / Windows / Linux (Ubuntu, Debian) / Linux (CentOS 6) / Linux (CentOS 5)

resolved – passwd: User not known to the underlying authentication module

August 12th, 2015 No comments

Today we met the following error when we tried to change a user's password:

[root@test ~]# echo 2cool|passwd --stdin test
Changing password for user test.
passwd: User not known to the underlying authentication module

After some searching, we found it was caused by a missing /etc/shadow file:

[root@test ~]# ls -l /etc/shadow
ls: /etc/shadow: No such file or directory

To generate the /etc/shadow file, use the pwconv command (it creates /etc/shadow from /etc/passwd, moving the password hashes out of the world-readable file):

[root@test ~]# pwconv

[root@test ~]# ls -l /etc/shadow
-r-------- 1 root root 1254 Aug 11 12:13 /etc/shadow

After this, we could reset the password without issue:

[root@test ~]# echo mypass|passwd --stdin test
Changing password for user test.
passwd: all authentication tokens updated successfully.
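
As a quick sanity check, the user's password hash should now live in /etc/shadow:

[root@test ~]# grep -c '^test:' /etc/shadow
1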

Categories: IT Architecture, Linux, Systems, Unix Tags:

resolved – ORA-27303: additional information: Invalid protocol requested (2) or protocol not loaded

August 6th, 2015 No comments

Today when I tried to start up CRS after patching (crsctl start crs), the following error occurred in /u01/app/11.2.0.4/grid/log/test/alerttest.log:

2015-07-31 11:57:54.702:
[/u01/app/11.2.0.4/grid/bin/oraagent.bin(18654)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/scratch/app/11.2.0.4/grid/log/slcai081/agent/ohasd/oraagent_oracle/oraagent_oracle.log"
2015-07-31 11:57:59.894:
[/u01/app/11.2.0.4/grid/bin/oraagent.bin(18654)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/scratch/app/11.2.0.4/grid/log/slcai081/agent/ohasd/oraagent_oracle/oraagent_oracle.log"
2015-07-31 11:58:05.121:
[/u01/app/11.2.0.4/grid/bin/oraagent.bin(18654)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/scratch/app/11.2.0.4/grid/log/slcai081/agent/ohasd/oraagent_oracle/oraagent_oracle.log"
2015-07-31 11:58:10.322:
[ohasd(17944)]CRS-2807:Resource 'ora.asm' failed to start automatically.
2015-07-31 11:58:10.326:
[ohasd(17944)]CRS-2807:Resource 'ora.crsd' failed to start automatically.

And running crsctl check crs, I saw that CRS/EM were not up:

[root@test ~]# /u01/app/11.2.0.4/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager

Later I tried to manually start ora.crsd, but it still failed:

[root@test ~]# /u01/app/11.2.0.4/grid/bin/crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.asm' on 'test'
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-27504: IPC error creating OSD context
ORA-00600: internal error code, arguments: [OSDEP_INTERNAL], [], [], [], [], [], [], [], [], [], [], []
ORA-27302: failure occurred at: sskgxplp
ORA-27303: additional information: Invalid protocol requested (2) or protocol not loaded.
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0.4/grid/log/test/agent/ohasd/oraagent_oracle//oraagent_oracle.log".
CRS-2674: Start of 'ora.asm' on 'test' failed
CRS-2679: Attempting to clean 'ora.asm' on 'test'
CRS-2681: Clean of 'ora.asm' on 'test' succeeded
CRS-4000: Command Start failed, or completed with errors.

From the output, it's complaining about "Invalid protocol requested". As this RAC is non-Exadata, the oracle binaries should be relinked with the ipc_g protocol rather than ipc_rds, which is for Exadata. The patching script contained these ipc_rds relink lines, which I changed:

grep ins_rdbms.mk apply_BP_and_OneOffs.sh
su $OWNER -c "cd $1/rdbms/lib && export ORACLE_HOME=$1 && /usr/bin/make -f ins_rdbms.mk ipc_rds lbac_off dv_off ioracle > /dev/null"
su $OWNER -c "cd $1/rdbms/lib && export ORACLE_HOME=$1 && /usr/bin/make -f ins_rdbms.mk ipc_rds ioracle > /dev/null"
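
A minimal sketch of the corrected lines, with ipc_g swapped in for ipc_rds ($OWNER and $1 come from the surrounding script):

su $OWNER -c "cd $1/rdbms/lib && export ORACLE_HOME=$1 && /usr/bin/make -f ins_rdbms.mk ipc_g lbac_off dv_off ioracle > /dev/null"
su $OWNER -c "cd $1/rdbms/lib && export ORACLE_HOME=$1 && /usr/bin/make -f ins_rdbms.mk ipc_g ioracle > /dev/null"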

After that, I rolled back all the patches, re-ran the patching, and afterwards everything was OK.

Categories: Databases, IT Architecture, Oracle DB Tags:

remove usb disk from LVM

July 29th, 2015 No comments

On some servers, a USB stick may become part of an LVM volume group. USB devices are more prone to failure, which causes big issues when they start failing. They also work at a different speed than the regular drives, which causes performance issues.

For example, on one server, you can see below:

[root@test ~]# vgs
  VG      #PV #LV #SN Attr   VSize VFree
  DomUVol   2   1   0 wz--n- 3.77T    0

[root@test~]# pvs
  PV         VG      Fmt  Attr PSize PFree
  /dev/sdb1  DomUVol lvm2 a--  3.76T    0 #all PE allocated
  /dev/sdc1  DomUVol lvm2 a--  3.59G    0 #this is usb device

[root@test~]# lvs
  LV      VG      Attr   LSize Origin Snap%  Move Log Copy%  Convert
  scratch DomUVol -wi-ao 3.77T

[root@test ~]# df -h /scratch
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/DomUVol-scratch
                      3.7T  257G  3.3T   8% /scratch

[root@test~]# pvdisplay /dev/sdc1
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               DomUVol
  PV Size               3.61 GB / not usable 14.61 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              115
  Free PE               0
  Allocated PE          115 #so physical extends are allocated on this usb device
  PV UUID               a8a0P5-AlCz-Cu5e-acC2-ldEQ-NPCn-kc4Du0

As you can see from the above, the USB device has PEs allocated, so to remove it (vgreduce) we first need to move its PEs to the other PV. But the other PV already has all of its space allocated (also confirmed by vgs above):

[root@test~]# pvdisplay /dev/sdb1
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               DomUVol
  PV Size               3.76 TB / not usable 30.22 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              123360
  Free PE               0
  Allocated PE          123360
  PV UUID               5IyCgh-JsiV-EnpO-XKj4-yxNq-pRjI-d7LKGy

Here are the steps for taking the USB device out of the VG. First shrink the logical volume by 5G to create free extents; note that the filesystem must be shrunk before the LV, and lvreduce --resizefs (-r) handles both in the correct order:

umount /scratch
#fsck -y /dev/mapper/DomUVol-scratch #optional; lvreduce --resizefs runs a filesystem check itself
lvreduce --resizefs --size -5G /dev/mapper/DomUVol-scratch #shrinks the filesystem first, then the LV; never reduce the LV below the filesystem size

[root@test ~]# vgs
  VG      #PV #LV #SN Attr   VSize VFree
  DomUVol   2   1   0 wz--n- 3.77T 5.00G

[root@test ~]# pvs
  PV         VG      Fmt  Attr PSize PFree
  /dev/sdb1  DomUVol lvm2 a--  3.76T 1.41G
  /dev/sdc1  DomUVol lvm2 a--  3.59G 3.59G #PEs on the usb device are all freed, if not, use pvmove /dev/sdc1

[root@test ~]# pvdisplay /dev/sdc1
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               DomUVol
  PV Size               3.61 GB / not usable 14.61 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              115
  Free PE               115
  Allocated PE          0
  PV UUID               a8a0P5-AlCz-Cu5e-acC2-ldEQ-NPCn-kc4Du0

[root@test ~]# vgreduce DomUVol /dev/sdc1
  Removed "/dev/sdc1" from volume group "DomUVol"

[root@test ~]# pvs
  PV         VG      Fmt  Attr PSize PFree
  /dev/sdb1  DomUVol lvm2 a--  3.76T 1.41G
  /dev/sdc1          lvm2 a--  3.61G 3.61G #VG column is empty for the usb device, this confirms the usb device is taken out of VG
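
If the stick will no longer be used for LVM at all, you can additionally wipe its LVM label at this point (double-check the device name before running this):

[root@test ~]# pvremove /dev/sdc1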

PS:

If you want to shrink an LVM volume (lvreduce) on /, you'll need to boot into Linux rescue mode. Select "Skip" when the system prompts for the option of mounting / to /mnt/sysimage. Run "lvm vgchange -a y" first, and then the other steps are more or less the same as above, except that you'll need to type "lvm" before every LVM command, such as "lvm lvs", "lvm pvs", "lvm lvreduce", etc.

resolved – yum install Error: Protected multilib versions

June 30th, 2015 No comments

Today when I tried to install firefox.i686 on Linux using yum, the following error occurred:

Error: Protected multilib versions: librsvg2-2.26.0-14.el6.i686 != librsvg2-2.26.0-5.el6_1.1.x86_64
Error: Protected multilib versions: devhelp-2.28.1-6.el6.i686 != devhelp-2.28.1-3.el6.x86_64
Error: Protected multilib versions: ImageMagick-6.5.4.7-7.el6_5.i686 != ImageMagick-6.5.4.7-6.el6_2.x86_64
Error: Protected multilib versions: vte-0.25.1-9.el6.i686 != vte-0.25.1-8.el6_4.x86_64
Error: Protected multilib versions: polkit-gnome-0.96-4.el6.i686 != polkit-gnome-0.96-3.el6.x86_64
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

To resolve this, just run yum update <package names> for the packages listed in the errors, and then the problem will go away.
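
For this particular case that would be something like the below (package list taken from the errors above), followed by retrying the original install:

yum update librsvg2 devhelp ImageMagick vte polkit-gnome
yum install firefox.i686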

Categories: IT Architecture, Linux, Systems, Unix Tags:

resolved – ORA-01102: cannot mount database in EXCLUSIVE mode

June 16th, 2015 No comments

Today when I tried to startup one RAC DB, it failed with ORA-01102:

[oracle@testvm ~]$ srvctl start database -d testdb -o "open"
PRCR-1079 : Failed to start resource ora.testdb.db
CRS-5017: The resource action "ora.testdb.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0.4/grid/log/testvm/agent/crsd/oraagent_oracle//oraagent_oracle.log".

CRS-2674: Start of 'ora.testdb.db' on 'testvm' failed
CRS-2632: There are no more servers to try to place resource 'ora.testdb.db' on that would satisfy its placement policy

SQL> startup
ORA-01102: cannot mount database in EXCLUSIVE mode

Later, I realized that the DB was still out of cluster mode:

SQL> show parameter cluster;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string

So I took the following steps to bring it back into cluster mode:

SQL> alter system set cluster_database=true scope=spfile;

System altered.

SQL> alter system set cluster_database_instances=2 scope=spfile;

System altered.
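
Since both changes were made with scope=spfile, they only take effect on the next startup, so the database needs to be started again, e.g. with the same srvctl syntax as before:

[oracle@testvm ~]$ srvctl start database -d testdb -o "open"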

After this, the DB started up normally.

Categories: Databases, IT Architecture, Oracle DB Tags:

generate a load on oracle database

June 5th, 2015 No comments

Sometimes you're testing an Oracle database and the DB load is low, but you want the load to go high to facilitate your testing. Here's a way:

DECLARE
 N NUMBER;
BEGIN
FOR I IN 1..100000 LOOP
 SELECT /*+ ORDERED USE_NL(C) FULL(C) FULL(S)*/ COUNT(*) INTO N
 FROM SH.SALES S, SH.CUSTOMERS C
 WHERE C.CUST_ID = S.CUST_ID AND CUST_FIRST_NAME='Sarah';
 DBMS_LOCK.SLEEP(1);
END LOOP;
END;
/
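
Note that DBMS_LOCK.SLEEP requires EXECUTE privilege on the DBMS_LOCK package, and it's the nested-loop/full-scan hints that make each iteration expensive. To drive the load even higher, run the block from several concurrent sqlplus sessions.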

You can press Ctrl+C to cancel it.

Categories: Databases, IT Architecture, Oracle DB Tags:

create a big table with one million lines on oracle database for testing

May 10th, 2015 No comments

#First, create tablespace along with datafile

SQL> create tablespace test datafile '/u01/app/oracle/product/11.2.0.4/dbhome_1/test/datafile1.dbf' size 512m;

#Then create table. We disable logging to avoid unnecessary redo data

SQL> create table bigtab tablespace test as select rownum id, a.* from all_objects a where 1=0;
SQL> alter table bigtab nologging;

#now populate the table named bigtab

DECLARE
  L_CNT NUMBER;
  L_ROWS NUMBER := 1000000;
BEGIN
  INSERT /*+ APPEND */ INTO BIGTAB SELECT ROWNUM, A.* FROM ALL_OBJECTS A;
  L_CNT := SQL%ROWCOUNT;
  COMMIT;
  WHILE (L_CNT < L_ROWS)
  LOOP
    INSERT /*+ APPEND */ INTO BIGTAB
    SELECT ROWNUM+L_CNT,
      OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID, OBJECT_TYPE, CREATED,
      LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
    FROM BIGTAB
      WHERE ROWNUM <= L_ROWS-L_CNT;
    L_CNT := L_CNT + SQL%ROWCOUNT;
    COMMIT;
   END LOOP;
END;
/

#check result

SQL> select count(*) from bigtab;

COUNT(*)
----------
1000000

#get the tracefile of the session

SQL> SELECT TRACEFILE FROM V$SESSION S, V$PROCESS P WHERE S.PADDR=P.ADDR AND S.SID=SYS_CONTEXT('USERENV','SID');

SQL> alter session set events '10046 trace name context forever, level 12';

#to get the maximum number of blocks that can be read

[oracle@testvm ~]$ grep scattered <trace file from above>

WAIT #139828744903832: nam='db file scattered read' ela= 1715 file#=10 block#=14192 blocks=8 obj#=56403 tim=1431233617613776
WAIT #139828744903832: nam='db file scattered read' ela= 6836 file#=10 block#=14268 blocks=8 obj#=56403 tim=1431233617620994
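
The blocks=8 in these scattered-read waits is the number of blocks fetched per multiblock read; you can compare it against the instance's setting (a quick check in the same session):

SQL> show parameter db_file_multiblock_read_count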

PS: 

More info is here.

Categories: Databases, Oracle DB Tags:

resolved – Checking for glibc-devel-2.12-1.7-i686; Not found. Failed

May 5th, 2015 No comments

Today when I tried to install Oracle EM Cloud Control 12c, the below error appeared during the prerequisite check:

[screenshot: prerequisite check failed for package glibc-devel-2.12-1.7-i686]

So from the above, we can see it's complaining about the missing package "glibc-devel-2.12-1.7-i686" ("Checking for glibc-devel-2.12-1.7-i686; Not found. Failed"). I found there were already glibc-related packages on the system:

[root@testvm ~]# rpm -qa|grep glibc
glibc-common-2.12-1.149.el6_6.7.x86_64
glibc-devel-2.12-1.149.el6_6.7.x86_64
glibc-headers-2.12-1.149.el6_6.7.x86_64
glibc-2.12-1.149.el6_6.7.x86_64

But they were all x86_64 versions, not the missing i686 one. So I decided to install the i686 ones:

[root@testvm]# yum install -y glibc.i686 glibc-devel.i686 glibc-static.i686
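
To confirm the 32-bit packages landed, a quick check:

[root@testvm ~]# rpm -qa | grep glibc | grep i686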

After this, I pressed "Rerun" and the check succeeded.

Categories: IT Architecture, Linux, Systems, Unix Tags:

install oracle instant client to use sqlplus on linux

April 30th, 2015 No comments

To use sqlplus to connect remotely to an Oracle database, you need the Oracle database client installed on your box. To do this, you can follow the steps below:

1. Download oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm and oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64.rpm from here.

2. Install oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm and oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64.rpm using rpm -i <package name>.

3. Set linux environment variables as below in ~/.bashrc:

export LANG=C
export HISTSIZE=100000
export HISTTIMEFORMAT="%h/%d - %H:%M:%S "
export ORACLE_HOME=/usr/lib/oracle/11.2/client64
export PATH=$PATH:/doxer/tools/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/oracle/11.2/client64/lib
export PS1='[\u@\h-doxer \W]\$ '

4. Connect using sqlplus, e.g. sqlplus sys/pass@scan-example.test.com:1521/service1 as sysdba

Categories: Databases, IT Architecture, Oracle DB Tags:

raid10 and raid01

April 21st, 2015 No comments

RAID 0 over RAID 1 (raid 1+0, raid 10, stripe of mirrors, better)

(RAID 1) A = Drive A1 + Drive A2 (Mirrored)
(RAID 1) B = Drive B1 + Drive B2 (Mirrored)
RAID 0 = (RAID 1) A + (RAID 1) B (Striped)

[diagram: stripe of mirrors (RAID 10)]


RAID 1 over RAID 0 (raid 0+1, raid 01, mirror of stripes)

(RAID 0) A = Drive A1 + Drive A2 (Striped)
(RAID 0) B = Drive B1 + Drive B2 (Striped)
RAID 1 = (RAID 0) A + (RAID 0) B (Mirrored)
[diagram: mirror of stripes (RAID 0+1)]

PS:

For write performance: raid0 > raid10 > raid5

Read performance is roughly the same across these RAID types, since reads can be served from multiple member disks in each layout.

Categories: Hardware, IT Architecture, Storage, Systems Tags:

printtbl8.sql – oracle sqlplus print output vertically

April 17th, 2015 No comments

Save the following as printtbl8.sql in $ORACLE_HOME/rdbms/admin/printtbl8.sql:

set serveroutput on
set linesize 200
declare
    l_theCursor    integer default dbms_sql.open_cursor;
    l_columnValue    varchar2(4000);
    l_status        integer;
    l_descTbl    dbms_sql.desc_tab;
    l_colCnt        number;
    procedure execute_immediate( p_sql in varchar2 )
    is
    BEGIN
        dbms_sql.parse(l_theCursor,p_sql,dbms_sql.native);
        l_status := dbms_sql.execute(l_theCursor);
    END;
begin
    execute_immediate( 'alter session set nls_date_format=
                        ''dd-mon-yyyy hh24:mi:ss'' ');
    dbms_sql.parse(    l_theCursor,
                    replace( '&1', '"', ''''),
                    dbms_sql.native );
    dbms_sql.describe_columns( l_theCursor,
                            l_colCnt, l_descTbl );
    for i in 1 .. l_colCnt loop
        dbms_sql.define_column( l_theCursor, i,
                                l_columnValue, 4000 );
    end loop;
    l_status := dbms_sql.execute(l_theCursor);
    while ( dbms_sql.fetch_rows(l_theCursor) > 0 ) loop
        for i in 1 .. l_colCnt loop
            dbms_sql.column_value( l_theCursor, i,
                                l_columnValue );
            dbms_output.put_line
                ( rpad( l_descTbl(i).col_name,
         35 ) || ': ' || l_columnValue );
        end loop;
        dbms_output.put_line( '-----------------' );
    end loop;
    execute_immediate( 'alter session set nls_date_format=
                        ''dd-MON-yy'' ');
exception
    when others then
        execute_immediate( 'alter session set
                        nls_date_format=''dd-MON-yy'' ');
        raise;
end;
/

Now you can have a test in oracle sqlplus:

SQL> @?/rdbms/admin/printtbl8.sql 'select name,LOG_MODE,OPEN_MODE from v$database'
old 17: replace( '&1', '"', ''''),
new 17: replace( 'select name,LOG_MODE,OPEN_MODE from v$database', '"', ''''),

NAME : TEST
LOG_MODE : ARCHIVELOG
OPEN_MODE : READ WRITE
-----------------

PL/SQL procedure successfully completed.

Cool, right?

Categories: Databases, IT Architecture, Oracle DB Tags:

resolved – file filelists.xml.gz [Errno 5] OSError: [Errno 2] No such file or directory [Errno 256] No more mirrors to try

April 8th, 2015 No comments

Today the below error appeared when running yum install for some packages on Linux:

file://localhost/tmp/common1/x86_64/redhat/50/base/ga/Server/repodata/filelists.xml.gz: [Errno 5] OSError: [Errno 2] No such file or directory: '/tmp/common1/x86_64/redhat/50/base/ga/Server/repodata/filelists.xml.gz'
Trying other mirror.
Error: failure: repodata/filelists.xml.gz from base: [Errno 256] No more mirrors to try.
You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest

After some checking (yum clean all, re-downloading the repo file to /etc/yum.repos.d, etc.), I finally found it was caused by the following entries in /etc/yum.conf:

[base]
name=Red Hat Linux - Base
baseurl=file://localhost/tmp/common1/x86_64/redhat/50/base/ga/Server

After I commented them out, yum install worked.
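
For reference, the commented-out block looks like the below; running yum clean all afterwards drops the stale metadata:

#[base]
#name=Red Hat Linux - Base
#baseurl=file://localhost/tmp/common1/x86_64/redhat/50/base/ga/Server

yum clean all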


Categories: IT Architecture, Linux, Systems, Unix Tags:

resolved – Starting MySQL.The server quit without updating PID file (/var/lib/mysql/testvm.pid).

April 3rd, 2015 No comments

Today when I tried to start mysql, it failed with the below error:

[root@testvm ~]# /etc/init.d/mysql start
Starting MySQL.The server quit without updating PID file (/var/lib/mysql/testvm.pid).

First I checked /var/lib/mysql/testvm.err, which had the below entries:

2015-04-03 00:11:39 2925 [Note] InnoDB: Using CPU crc32 instructions
/usr/sbin/mysqld: Can't create/write to file '/tmp/ibDvk6bb' (Errcode: 13 - Permission denied)
2015-04-03 00:11:39 7f28af6c6720 InnoDB: Error: unable to create temporary file; errno: 13
2015-04-03 00:11:39 2925 [ERROR] Plugin 'InnoDB' init function returned error.
2015-04-03 00:11:39 2925 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2015-04-03 00:11:39 2925 [ERROR] Unknown/unsupported storage engine: InnoDB
2015-04-03 00:11:39 2925 [ERROR] Aborting

I checked the /tmp permissions, and they were not correct:

[root@testvm ~]# ls -ld /tmp
drwx------ 19 root root 4096 Apr 3 07:15 /tmp

So I changed the permissions on /tmp to 777 with the sticky bit:

[root@testvm ~]# chmod 1777 /tmp

[root@testvm ~]# ls -ld /tmp
drwxrwxrwt 19 root root 4096 Apr 3 07:15 /tmp

However, when I tried to start mysql, it failed again, with the below errors in /var/lib/mysql/testvm.err:

2015-04-03 00:20:42 18724 [ERROR] InnoDB: auto-extending data file ./ibdata1 is of a different size 640 pages (rounded down to MB) than specified in the .cnf file: initial 768 pages, max 0 (relevant if non-zero) pages!
2015-04-03 00:20:42 18724 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
2015-04-03 00:20:42 18724 [ERROR] Plugin 'InnoDB' init function returned error.
2015-04-03 00:20:42 18724 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2015-04-03 00:20:42 18724 [ERROR] Unknown/unsupported storage engine: InnoDB
2015-04-03 00:20:42 18724 [ERROR] Aborting

So it was all about the InnoDB engine. As InnoDB was not required in our environment, I decided to disable it:

[root@testvm ~]# vi /etc/my.cnf
[mysqld]
innodb=OFF
ignore-builtin-innodb
skip-innodb
default-storage-engine=myisam
default-tmp-storage-engine=myisam

After that, mysql started successfully.
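
To confirm InnoDB really is off, a quick check (assuming local root access to mysql):

[root@testvm ~]# mysql -e "SHOW ENGINES;" | grep -i innodb

If InnoDB is disabled, the engine is either missing from the list or shows NO in its Support column.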

Categories: Databases, IT Architecture, MySQL DB Tags:

change NIC configuration to make new VLAN tag take effect

April 2nd, 2015 No comments

Sometimes you may want to add a new VLAN tag to an existing NIC, and after the addition you'll need to re-point the DNS names bound to the old tag at new IPs in the newly added VLAN. Once those two steps are done, you'll need to make changes on the hosts (take Linux for example) for the changes to take effect.

In this example, I'm going to move the old v118_FE to the new VLAN v117_FE.

ifconfig v118_FE down
ifconfig bond0.118 down
cd /etc/sysconfig/network-scripts
mv ifcfg-bond0.118 ifcfg-bond0.117
vi ifcfg-bond0.117
    DEVICE=bond0.117
    BOOTPROTO=none
    USERCTL=no
    ONBOOT=yes
    BRIDGE=v117_FE
    VLAN=yes
mv ifcfg-v118_FE ifcfg-v117_FE
vi ifcfg-v117_FE
    DEVICE=v117_FE
    BOOTPROTO=none
    USERCTL=no
    ONBOOT=yes
    STP=off
    TYPE=Bridge
    IPADDR=10.119.236.13
    NETMASK=255.255.248.0
    NETWORK=10.119.232.0
    BROADCAST=10.119.239.255
ifup v117_FE
ifup bond0.117
reboot
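
After the host comes back up, you can quickly confirm the new tag took effect (the 8021q module must be loaded for the /proc entry to exist):

cat /proc/net/vlan/config
ifconfig bond0.117
ifconfig v117_FE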

resolved – VPN Service not available, The VPN agent service is not responding. Please restart this application after a while.

March 30th, 2015 3 comments

Today when I tried to connect to VPN through the Cisco AnyConnect Secure Mobility Client, the following error dialog appeared:

[error dialog: VPN Service not available]

And after I clicked the "OK" button, the following dialog appeared:

[error dialog: The VPN agent service is not responding. Please restart this application after a while.]

Both dialogs were complaining about the VPN service being unavailable or not responding. So I ran "services.msc" from the Windows Run box and found the below:

[screenshot: Windows Services list showing the Cisco AnyConnect service]

When I checked, the service "Cisco AnyConnect Secure Mobility Agent" was stopped and its "Startup type" was "Manual". So I changed "Startup type" to "Automatic", clicked "Start", then "OK" to save.

After this, the Cisco AnyConnect Secure Mobility Client ran OK and I could connect to the VPN through it.

resolved – ORA-01013: user requested cancel of current operation

March 24th, 2015 No comments

ORA-01013: user requested cancel of current operation may occur on the following occasions:

  • If the events blocking a shutdown do not complete within one hour, the shutdown operation aborts with the following message: ORA-01013: user requested cancel of current operation.
  • This message is also displayed if you interrupt the shutdown process, for example by pressing CTRL-C.
Categories: Databases, IT Architecture, Oracle DB Tags:

resolved – ext3: No journal on filesystem on disk

March 23rd, 2015 No comments

Today I met the below error when trying to mount a disk:

[root@testvm ~]# mount /scratch
mount: wrong fs type, bad option, bad superblock on /dev/xvdb1,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

First I ran fsck -y /dev/xvdb1, but after it finished the issue was still there (sometimes fsck -y /dev/xvdb1 alone can resolve this, though). So, as the error suggested, I ran dmesg | tail:

[root@testvm scratch]# dmesg | tail
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
NFSD: starting 90-second grace period
ext3: No journal on filesystem on xvdb1
ext3: No journal on filesystem on xvdb1
ext3: No journal on filesystem on xvdb1
ext3: No journal on filesystem on xvdb1

So from here we can see that the root cause of the mount failure was "ext3: No journal on filesystem on xvdb1". Since fsck had not helped, I tried adding an ext3 journal to the disk:

[root@testvm qgomsdc1]# tune2fs -j /dev/xvdb1
tune2fs 1.39 (29-May-2006)
Creating journal inode:

done
This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

After this, the mount succeeded.

Categories: IT Architecture, Kernel, Linux, Systems, Unix Tags:

resolved – ORA-00020: maximum number of processes (1000) exceeded

March 18th, 2015 No comments

Today I encountered an ORA-12516 error when trying to access an Oracle database:

[root@client-doxer ~]# sqlplus tauser/password1@rac0102-r.example.com:1521/qainfac1

SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 17 07:31:04 2015

Copyright (c) 1982, 2011, Oracle. All rights reserved.

ERROR:
ORA-12516: TNS:listener could not find available handler with matching protocol
stack

Enter user-name:

Then I tried connecting using the VIP instead of the SCAN name, but it failed too:

[root@client-doxer ~]# sqlplus tauser/password1@rac0102-v.example.com:1521/qainfac1

SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 17 07:37:22 2015

Copyright (c) 1982, 2011, Oracle. All rights reserved.

ERROR:
ORA-12516: TNS:listener could not find available handler with matching protocol
stack

Enter user-name:

Then, on the database server, I checked the service qainfac1:

[root@rac01 crsd]# /u01/app/11.2.0.4/grid/bin/crsctl status res -t|grep -A5 ora.qainf1.db
ora.qainf1.db
1 ONLINE ONLINE rac01 Open
2 OFFLINE OFFLINE Instance Shutdown
ora.qainf1.qainfac1.svc
1 ONLINE ONLINE rac01
2 OFFLINE OFFLINE

So one instance was running fine. I tried a sqlplus connection from the local server:

[oracle@rac01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Mar 17 07:45:39 2015

Copyright (c) 1982, 2013, Oracle. All rights reserved.

ERROR:
ORA-00020: maximum number of processes (1000) exceeded

Enter user-name: ^C

That's it: "ORA-00020: maximum number of processes (1000) exceeded". So it became a question of adjusting the PROCESSES parameter. Since PROCESSES cannot be changed with ALTER SYSTEM unless a server parameter file was used to start the instance, and since the change only takes effect in subsequent instances, a bounce of the instance is needed to activate the new setting:

SQL> set lines 200
SQL> col NAME for a30
SQL> col VALUE for a40
SQL> select NAME,VALUE,ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE from v$parameter where name='processes';
NAME VALUE ISSES ISSYS_MOD ISINS
------------------------------ ---------------------------------------- ----- --------- -----
processes 1500 FALSE FALSE FALSE

SQL> show parameter processes;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes integer 1
db_writer_processes integer 3
gcs_server_processes integer 2
global_txn_processes integer 1
job_queue_processes integer 1000
log_archive_max_processes integer 4
processes integer 1000

In the alert log /u01/app/oracle/diag/rdbms/qainf1/qainf12/trace/alert_qainf12.log, I could see the below errors:

Unable to allocate flashback log of 51094 blocks from
current recovery area of size 214748364800 bytes.
Recovery Writer (RVWR) is stuck until more space
is available in the recovery area.
Unable to write Flashback database log data because the
recovery area is full, presence of a guaranteed
restore point and no reusable flashback logs.

Here's Fast Recovery Area info:

SQL> show parameter db_recovery_file_dest;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +DATA
db_recovery_file_dest_size big integer 200G

And here's ASM diskgroup info:

[oracle@rac01 ~]$ export ORACLE_SID=+ASM2
[oracle@rac01 ~]$ export ORACLE_HOME=/u01/app/11.2.0.4/grid
[oracle@rac01 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 18 02:27:13 2015

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set lines 200;
SQL> select name, total_mb, free_mb, total_mb-free_mb used_mb from v$asm_diskgroup;

NAME TOTAL_MB FREE_MB USED_MB
------------------------------ ---------- ---------- ----------
DATA 4681689 533392 4148297

I checked the restore points:

SQL> col NAME for a20
SQL> col time for a40
SQL> col SCN for 999999999999999
SQL> col STORAGE_SIZE for 999999999999999
SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE,STORAGE_SIZE FROM V$RESTORE_POINT;

NAME SCN TIME DATABASE_INCARNATION# GUA STORAGE_SIZE
-------------------- -------------- ---------------------------------------- --------------------- --- ------------
GRPT_BF_UPGR 14035000000000 03-MAR-15 04.16.12.000000000 PM 2 YES 214310000000

As the restore point was no longer needed, I dropped it to free the space:

SQL> drop restore point GRPT_BF_UPGR;

After this, 214G of space was released from the FRA, and I could start up the DB and set the processes parameter to 1500 (as root, kill some of the instance's processes found via "ps -ef|grep <sid>" if sqlplus won't work even on the local server):

SQL> alter system set processes=1500 scope=spfile;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database flashback on;
SQL> alter database open;
SQL> select LOG_MODE,flashback_on from v$database;
LOG_MODE FLASHBACK_ON
------------ ------------------
ARCHIVELOG NO

Categories: Databases, IT Architecture, Oracle DB Tags:

sendmail DSN: Data format error

March 5th, 2015 No comments

If you meet an error when sending mail using sendmail (or the Linux mail/mailx commands), you should check /var/log/maillog for details. For example:

Mar 5 02:39:10 testhost1 sendmail[15281]: t252dAZr015281: from=root, size=78, class=0, nrcpts=1, msgid=<201503050239.t252dAZr015281@testhost1.us.example.com>, relay=root@localhost
Mar 5 02:39:10 testhost1 sendmail[15282]: t252dA8Z015282: from=<root@testhost1.us.example.com>, size=393, class=0, nrcpts=1, msgid=<201503050239.t252dAZr015281@testhost1.us.example.com>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Mar 5 02:39:10 testhost1 sendmail[15281]: t252dAZr015281: to=user1@example.com, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30078, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (t252dA8Z015282 Message accepted for delivery)
Mar 5 02:39:10 testhost1 sendmail[15284]: t252dA8Z015282: to=<user1@example.com>, ctladdr=<root@testhost1.us.example.com> (0/0), delay=00:00:00, xdelay=00:00:00, mailer=esmtp, pri=120393, relay=smtpserver1.example.com. [192.151.231.4], dsn=5.6.0, stat=Data format error
Mar 5 02:39:10 testhost1 sendmail[15284]: t252dA8Z015282: t252dA8Z015284: DSN: Data format error
Mar 5 02:39:10 testhost1 sendmail[15284]: t252dA8Z015284: to=<root@testhost1.us.example.com>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=31660, dsn=2.0.0, stat=Sent

From here, you can see that after relaying, the mail finally failed with DSN code 5.6.0 (DSN is the Delivery Status Notification extension of SMTP). So you should look up the code details:

5 Permanent or Fatal error. This can be caused by a non existent email address, DNS problem, or your email was blocked by the receiving server.
X.6.0 - Other or undefined media error

X.6.0 Other or undefined media error Not given Something about the content of a message caused it to be considered undeliverable and the problem cannot be well expressed with any of the other provided detail codes.

PS:

For more info about DSN codes, you can check http://www.inmotionhosting.com/support/email/email-troubleshooting/smtp-and-esmtp-error-code-list or http://tools.ietf.org/rfc/rfc3463.txt or http://www.iana.org/assignments/smtp-enhanced-status-codes/smtp-enhanced-status-codes.xml for details.

Categories: IT Architecture, Linux, Systems Tags:

TCP Window Scaling – values about TCP buffer size

February 4th, 2015 No comments

TCP Window Scaling(TCP socket buffer size, TCP window size)

/proc/sys/net/ipv4/tcp_window_scaling
/proc/sys/net/ipv4/tcp_rmem - memory reserved for TCP rcv buffers. minimum, initial and maximum buffer size
/proc/sys/net/ipv4/tcp_wmem - memory reserved for TCP snd buffers
/proc/sys/net/core/rmem_max - maximum receive window
/proc/sys/net/core/wmem_max - maximum send window

The following values (which are the defaults for 2.6.17 with more than 1 GByte of memory) would be reasonable for all paths with a 4MB BDP or smaller:

echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf #autotuning enabled. The receiver buffer size (and TCP window size) is dynamically updated (autotuned) for each connection. (Sender side autotuning has been present and unconditionally enabled for many years now).
echo 108544 > /proc/sys/net/core/wmem_max
echo 108544 > /proc/sys/net/core/rmem_max
echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem
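
These echo commands last only until reboot; to persist the same values across reboots, the equivalent entries can go into /etc/sysctl.conf and be loaded with sysctl -p (a minimal sketch mirroring the values above):

net.ipv4.tcp_moderate_rcvbuf = 1
net.core.wmem_max = 108544
net.core.rmem_max = 108544
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304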

Advanced TCP features

cat /proc/sys/net/ipv4/tcp_timestamps
cat /proc/sys/net/ipv4/tcp_window_scaling
cat /proc/sys/net/ipv4/tcp_sack

Here is some background knowledge:

  • The throughput of a communication is limited by two windows: the congestion window and the receive window. The former tries not to exceed the capacity of the network (congestion control) and the latter tries not to exceed the capacity of the receiver to process data (flow control). The receiver may be overwhelmed by data if for example it is very busy (such as a Web server). Each TCP segment contains the current value of the receive window. If for example a sender receives an ack which acknowledges byte 4000 and specifies a receive window of 10000 (bytes), the sender will not send packets after byte 14000, even if the congestion window allows it.
  • TCP uses what is called the "congestion window", or CWND, to determine how many packets can be sent at one time. The larger the congestion window size, the higher the throughput. The TCP "slow start" and "congestion avoidance" algorithms determine the size of the congestion window. The maximum congestion window is related to the amount of buffer space that the kernel allocates for each socket. For each socket, there is a default value for the buffer size, which can be changed by the program using a system library call just before opening the socket. There is also a kernel enforced maximum buffer size. The buffer size can be adjusted for both the send and receive ends of the socket.
  • To get maximal throughput it is critical to use optimal TCP send and receive socket buffer sizes for the link you are using. If the buffers are too small, the TCP congestion window will never fully open up. If the receiver buffers are too large, TCP flow control breaks and the sender can overrun the receiver, which will cause the TCP window to shut down. This is likely to happen if the sending host is faster than the receiving host. Overly large windows on the sending side is not usually a problem as long as you have excess memory; note that every TCP socket has the potential to request this amount of memory even for short connections, making it easy to exhaust system resources.
  • More about TCP Buffer Sizing is here.
  • More about /proc/sys/net/ipv4/* Variables is here.

resolved – TNS:listener does not currently know of service requested in connect descriptor

February 3rd, 2015 No comments

Today we found the below errors about a datasource connection in a weblogic log:

TNS:listener does not currently know of service requested in connect descriptor

In our configuration, the data source was using the below URL:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=testrac-r.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=testservice)))

This was weird, as it had worked before. After some debugging, we found that the 3 IPs behind the SCAN name testrac-r.example.com behaved abnormally on the RAC nodes:

[root@rac1 ~]# /sbin/ifconfig|egrep -B1 '192.168.20.5|192.168.20.6|192.168.20.7'
v115_FE:3 Link encap:Ethernet HWaddr 00:21:28:F0:30:4C
inet addr:192.168.20.5 Bcast:10.245.87.255 Mask:255.255.248.0
--
v115_FE:4 Link encap:Ethernet HWaddr 00:21:28:F0:30:4C
inet addr:192.168.20.7 Bcast:10.245.87.255 Mask:255.255.248.0
--
v115_FE:5 Link encap:Ethernet HWaddr 00:21:28:F0:30:4C
inet addr:192.168.20.6 Bcast:10.245.87.255 Mask:255.255.248.0

[root@rac2 ~]# /sbin/ifconfig|egrep -B1 '192.168.20.5|192.168.20.6|192.168.20.7'
v115_FE:6 Link encap:Ethernet HWaddr 00:21:28:E8:3C:16
inet addr:192.168.20.7 Bcast:10.245.87.255 Mask:255.255.248.0
--
v115_FE:7 Link encap:Ethernet HWaddr 00:21:28:E8:3C:16
inet addr:192.168.20.6 Bcast:10.245.87.255 Mask:255.255.248.0

As shown above, 192.168.20.6 and 192.168.20.7 were up on both nodes, while each SCAN VIP should only be active on one node at a time. This indicated the SCAN setup was somehow broken, so we bounced the SCAN resources. After that, the issue was gone.
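
A bounce like that can be done with srvctl (a minimal sketch; run as the grid owner, and the resource names here are the standard 11.2 ones):

srvctl stop scan_listener
srvctl stop scan
srvctl start scan
srvctl start scan_listener
srvctl status scan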

Categories: Databases, IT Architecture, Oracle DB Tags:

Close Putty sessions without exit confirmation dialog

January 14th, 2015 No comments

You can set this in Putty via "Change Settings" -> "Window" -> "Behaviour": uncheck "Warn before closing window". Save the config in "Session", and from then on all windows can be closed without any exit confirmation dialog.


Categories: Misc Tags:

resolved – su: cannot set user id: Resource temporarily unavailable

January 12th, 2015 No comments

When I tried to log on as user "test", an error occurred:

su: cannot set user id: Resource temporarily unavailable

I checked limits.conf first:

[root@testvm ~]# cat /etc/security/limits.conf|egrep -v '^$|^#'
oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    50000000
oracle   hard   memlock    50000000
@svrtech    soft    memlock         500000
@svrtech    hard    memlock         500000
*   soft   nofile    131072
*   hard   nofile    131072
*   soft   nproc    131072
*   hard   nproc    131072
*   soft   core    unlimited
*   hard   core    unlimited
*   soft   memlock    50000000
*   hard   memlock    50000000

Then I compared the number of the user's processes/threads against the maximum number of processes, to see whether it was over the limit:

[root@c9qa131-slcn03vmf0293 ~]# ps -eLF | grep test | wc -l
1026

So it was not exceeding the limit. Then I checked open files:

[root@testvm ~]# lsof | grep aime | wc -l
6059

It was not exceeding 131072 either, so why was the error "su: cannot set user id: Resource temporarily unavailable" there? The culprit was actually the file /etc/security/limits.d/90-nproc.conf, which is applied after limits.conf and so overrides it:

[root@testvm ~]# cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

* soft nproc 1024
root soft nproc unlimited

After I changed 1024 to 131072, the issue went away immediately.
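
So the edited line in /etc/security/limits.d/90-nproc.conf now reads:

* soft nproc 131072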

Categories: IT Architecture, Kernel, Linux, Systems, Unix Tags:

resolved – Error: Unable to connect to xend: Connection reset by peer. Is xend running?

January 7th, 2015 No comments

Today I met an issue when trying to run xm commands on a Xen server:

[root@xenhost1 ~]# xm list
Error: Unable to connect to xend: Connection reset by peer. Is xend running?

I checked and found that xend was actually running:

[root@xenhost1 ~]# /etc/init.d/xend status
xend daemon running (pid 8329)

After some debugging, I found it was caused by libvirtd and xend being in a bad state. So I bounced them both:

[root@xenhost1 ~]# /etc/init.d/libvirtd restart
Stopping libvirtd daemon: [ OK ]
Starting libvirtd daemon: [ OK ]

[root@xenhost1 ~]# /etc/init.d/xend restart #this may not be needed 
restarting xend...
xend daemon running (pid 19684)

After that, the xm commands worked fine.

PS:

For more information about libvirt, you can check here.


Categories: Clouding, IT Architecture, Oracle Cloud Tags:

remove entries in perl array with specified value

December 30th, 2014 No comments

Assume you have the array @array_filtered:

my @array_filtered = ("your", "array", "here", 1, 3, 8, "here", 2, 5, 9, "sit", "here",3, 4, 7,"yes","now",8,1,7,6); #or my @array_filtered=qw(your array here 1 3 8 here 2 5 9 sit here 3 4 7 yes now 8 1 7 6) which uses Alternative Quotes(q, qq, qw, qx)

If you want to remove the entries that have the value "here" or "now", each along with its following 3 entries, you can use splice:

#!/usr/bin/perl
my @array_filtered = ("your", "array", "here", 1, 3, 8, "here", 2, 5, 9, "sit", "here",3, 4, 7,"yes","now",8,1,7,6);
my @search_for = ("here","now");
#return keys that have specified value, =~/!~ for regular expression, eq/ne for string, ==/!= for number. or use unless()/if(not()). use m{} instead of // if there's too much / in the expression and you're tired of using \/ to escape them.

$search_for_s=join('|',@search_for);
@index_all = grep { $array_filtered[$_] =~ /$search_for_s/ } 0..$#array_filtered;

for($i=0;$i<=$#index_all;$i++) {
    # splice shifts the remaining indexes, so re-locate the first match on each pass
    @index_all_one = grep { $array_filtered[$_] =~ /$search_for_s/ } 0..$#array_filtered;
    splice(@array_filtered,$index_all_one[0],4);
}

print "@array_filtered"."\n";

The output is "your array sit yes 6".

PS:

  • For more info about perl regular expression(such as operators<m, s, tr> and their modifiers, complex regular expression cheat sheet<.\s\S\d\D\w\W[aeiou][^aeiou](foo|bar), \G, $, $&, $`, $'> and more), you can refer to this article.
  • The following is about perl alternative quotes:

q// is generally the same thing as using single quotes - meaning it doesn't interpolate values inside the delimiters.
qq// is the same as double quoting a string. It interpolates.
qw// return a list of white space delimited words. @q = qw/this is a test/ is functionally the same as @q = ('this', 'is', 'a', 'test')
qx// is the same thing as using the backtick operators.

Categories: IT Architecture, Perl, Programming Tags:

resolved – cssh installation on linux server

December 29th, 2014 No comments

ClusterSSH can be used when you need to control a number of xterm windows via a single graphical console window, so that you can run commands interactively on multiple servers over an ssh connection. This guide shows the process of installing ClusterSSH on a Linux box from a tarball.

First, download the cssh tarball App-ClusterSSH-4.03_04.tar.gz from sourceforge. You may need to export proxy settings if your environment requires them:

export https_proxy=http://my-proxy.example.com:80/
export http_proxy=http://my-proxy.example.com:80/
export ftp_proxy=http://my-proxy.example.com:80/

With the proxy set, you can now get the package:

wget 'http://sourceforge.net/projects/clusterssh/files/latest/download'
tar zxvf App-ClusterSSH-4.03_04.tar.gz
cd App-ClusterSSH-4.03_04
cat README

Before installing, let's install some prerequisite packages:

yum install gcc libX11-devel gnome* -y
yum groupinstall "X Window System" -y
yum groupinstall "GNOME Desktop Environment" -y
yum groupinstall "Graphical Internet" -y
yum groupinstall "Graphics" -y

Now run "perl Build.PL" as indicated by README:

[root@centos-32bits App-ClusterSSH-4.03_04]# perl Build.PL
Can't locate Module/Build.pm in @INC (@INC contains: /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib/perl5/5.8.8/i386-linux-thread-multi /usr/lib/perl5/5.8.8 .) at Build.PL line 5.
BEGIN failed--compilation aborted at Build.PL line 5.

As it complained, you need to install the Module::Build module first. Let's use cpan to install it.

Run "cpan" and enter "follow" when the below prompt appears:

Policy on building prerequisites (follow, ask or ignore)? [ask] follow

If you have already run cpan before, you can configure the policy as below:

cpan> o conf prerequisites_policy follow
cpan> o conf commit

Now Let's install Module::Build:

cpan> install Module::Build

After the installation, let's run "perl Build.PL" again:

[root@centos-32bits App-ClusterSSH-4.03_04]# perl Build.PL
Checking prerequisites...
  requires:
    !  Exception::Class is not installed
    !  Tk is not installed
    !  Try::Tiny is not installed
    !  X11::Protocol is not installed
  build_requires:
    !  CPAN::Changes is not installed
    !  File::Slurp is not installed
    !  File::Which is not installed
    !  Readonly is not installed
    !  Test::Differences is not installed
    !  Test::DistManifest is not installed
    !  Test::PerlTidy is not installed
    !  Test::Pod is not installed
    !  Test::Pod::Coverage is not installed
    !  Test::Trap is not installed

ERRORS/WARNINGS FOUND IN PREREQUISITES.  You may wish to install the versions
of the modules indicated above before proceeding with this installation

Run 'Build installdeps' to install missing prerequisites.

Created MYMETA.yml and MYMETA.json
Creating new 'Build' script for 'App-ClusterSSH' version '4.03_04'

As the output says, run "./Build installdeps" to install the missing prerequisites. Make sure you're in a GUI environment (through vncserver, for example), as the build has a step that tests the GUI.

[root@centos-32bits App-ClusterSSH-4.03_04]# ./Build installdeps

......

Running Mkbootstrap for Tk::Xlib ()
chmod 644 "Xlib.bs"
"/usr/bin/perl" "/usr/lib/perl5/5.8.8/ExtUtils/xsubpp" -typemap "/usr/lib/perl5/5.8.8/ExtUtils/typemap" -typemap "/root/.cpan/build/Tk-804.032/Tk/typemap" Xlib.xs > Xlib.xsc && mv Xlib.xsc Xlib.c
make[1]: *** No rule to make target `pTk/tkInt.h', needed by `Xlib.o'. Stop.
make[1]: Leaving directory `/root/.cpan/build/Tk-804.032/Xlib'
make: *** [subdirs] Error 2
/usr/bin/make -- NOT OK
Running make test
Can't test without successful make
Running make install
make had returned bad status, install seems impossible

Errors again; this time it's complaining about something Tk-related. To resolve this, I manually installed the latest perl-tk module as below:

wget --no-check-certificate 'https://github.com/eserte/perl-tk/archive/master.zip'
unzip master
cd perl-tk-master
perl Makefile.PL
make
make install

After this, run "./Build installdeps" and "perl Build.PL" again, which both went through fine:

[root@centos-32bits App-ClusterSSH-4.03_04]# ./Build installdeps

[root@centos-32bits App-ClusterSSH-4.03_04]# perl Build.PL

And let's run ./Build now:

[root@centos-32bits App-ClusterSSH-4.03_04]# ./Build
Building App-ClusterSSH
Generating: ccon
Generating: crsh
Generating: cssh
Generating: ctel

And now "./Build install" which is the last step:

[root@centos-32bits App-ClusterSSH-4.03_04]# ./Build install

After installation, let's have a test:

[root@centos-32bits App-ClusterSSH-4.03_04]# echo 'svr testserver1 testserver2' > /etc/clusters

Now run 'cssh svr', and you'll get the charm!

[screenshot: cssh console controlling multiple server windows]

Categories: Clouding, IT Architecture, Linux, Systems, Unix Tags:

resolved – error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm

December 17th, 2014 No comments

Today when I tried using curl to fetch a URL, an error occurred like below:

[root@centos-doxer ~]# curl -i --user username:password -H "Content-Type: application/json" -X POST --data @/u01/shared/addcredential.json https://testserver.example.com/actions -v

* About to connect() to testserver.example.com port 443

*   Trying 10.242.11.201... connected

* Connected to testserver.example.com (10.242.11.201) port 443

* successfully set certificate verify locations:

*   CAfile: /etc/pki/tls/certs/ca-bundle.crt

  CApath: none

* SSLv2, Client hello (1):

SSLv3, TLS handshake, Server hello (2):

SSLv3, TLS handshake, CERT (11):

SSLv3, TLS alert, Server hello (2):

error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm

* Closing connection #0

After some searching, I found it was caused by the installed version of openssl (openssl-0.9.8e) not supporting the SHA-256 signature algorithm. To resolve this, there are a few options:

1. Add the -k parameter to make curl ignore the SSL error.

2. Update openssl to "OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008" or later; just try "yum update openssl".

# openssl version
OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

3. Upgrade openssl from source to at least openssl-0.9.8o (this may not be needed; try method 2 above first). Here's the way to upgrade openssl:

wget --no-check-certificate 'https://www.openssl.org/source/old/0.9.x/openssl-0.9.8o.tar.gz'
tar zxvf openssl-0.9.8o.tar.gz
cd openssl-0.9.8o
./config --prefix=/usr --openssldir=/usr/openssl
make
make test
make install

After this, run openssl version to confirm:

[root@centos-doxer openssl-0.9.8o]# /usr/bin/openssl version
OpenSSL 0.9.8o 01 Jun 2010
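
To verify the new binary actually handles SHA-256 digests, a quick check:

[root@centos-doxer openssl-0.9.8o]# echo test | /usr/bin/openssl dgst -sha256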

PS:

If you originally installed openssl from an rpm package, rpm will still report the old version even after the source install above, because the rpm database knows nothing about files installed from source. This is expected, so don't rely too much on rpm here:

[root@centos-doxer openssl-0.9.8o]# /usr/bin/openssl version
OpenSSL 0.9.8o 01 Jun 2010

Even after rebuilding the rpm DB (rpm --rebuilddb), rpm still reports the old version:

[root@centos-doxer openssl-0.9.8o]# rpm -qf /usr/bin/openssl
openssl-0.9.8e-26.el5_9.1
openssl-0.9.8e-26.el5_9.1

[root@centos-doxer openssl-0.9.8o]# rpm -qa|grep openssl
openssl-0.9.8e-26.el5_9.1
openssl-devel-0.9.8e-26.el5_9.1
openssl-0.9.8e-26.el5_9.1
openssl-devel-0.9.8e-26.el5_9.1


output analysis of linux last command

December 9th, 2014 No comments

Here's the output of "last|less" on my linux host:

root     pts/9        remote.example   Tue Dec  9 14:51   still logged in
testuser pts/2        :3               Tue Dec  9 14:49   still logged in
aime     pts/1        :2               Tue Dec  9 14:49   still logged in
root     pts/0        :1               Tue Dec  9 14:49   still logged in
testuser pts/13       remote.example   Tue Dec  9 10:48 - 10:52  (00:02)
reboot   system boot  2.6.23           Tue Dec  9 10:11          (04:39)
root     pts/11       10.182.120.179   Thu Dec  4 17:14 - 17:20  (00:06)
root     pts/11       10.182.120.179   Thu Dec  4 17:14 - 17:14  (00:00)
root     pts/10       10.182.120.179   Thu Dec  4 15:55 - 15:55  (00:00)
testuser pts/14       :3.0             Tue Dec  2 15:44 - 15:46  (00:01)
testuser pts/12       :3.0             Tue Dec  2 15:44 - 15:46  (00:01)
testuser pts/13       :3.0             Tue Dec  2 15:44 - 15:46  (00:01)
testuser pts/15       :3.0             Tue Dec  2 15:44 - 15:46  (00:01)
testuser pts/11       :3.0             Tue Dec  2 15:44 - 15:46  (00:01)
testuser pts/16       :3.0             Tue Dec  2 15:44 - 15:46  (00:01)
root     pts/10       10.182.120.179   Tue Dec  2 11:20 - 11:20  (00:00)
root     pts/7        10.182.120.179   Tue Dec  2 10:15 - down  (6+07:39)
root     pts/6        10.182.120.179   Tue Dec  2 10:15 - 17:55 (6+07:39)
root     pts/5        10.182.120.179   Tue Dec  2 10:15 - 17:55 (6+07:39)
root     pts/4        10.182.120.179   Tue Dec  2 10:15 - 17:55 (6+07:39)
root     pts/3        10.182.120.179   Tue Dec  2 10:15 - 17:55 (6+07:39)
root     pts/2        :1               Tue Dec  2 10:00 - down  (6+07:55)
aime     pts/1        :2               Tue Dec  2 10:00 - down  (6+07:55)
testuser pts/0        :3               Tue Dec  2 10:00 - down  (6+07:55)
reboot   system boot  2.6.23           Tue Dec  2 09:58         (6+07:56)

Here's some analysis:

  • User "reboot" is a pseudo-user for system reboots. The entries between two reboots are the users who logged on to the system between those reboots. For info about login shells (.bash_profile) and interactive non-login shells (.bashrc), you can refer to here.
  • Here are the column meanings:

Column 1: User logged on

Column 2: The tty name after logging on

Column 3: The remote IP or hostname from which the user logged on. The ":1", ":2", ":3" entries are X display numbers, here the displays that vncserver is rendering.

Column 4: The begin/end time of the session. "still logged in" means the user is still logged on; a value in parentheses is the total duration of the session. For the most recent "reboot" entry, the duration is the uptime so far; for an earlier "reboot" entry, it is the uptime between that boot and the next reboot. Note however that this time is not always accurate, for example after a system crash and an unusual restart sequence: last calculates it as the time between that boot and the next reboot/shutdown.
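
If you only care about the boot history, last can filter on the pseudo-user directly; the two entries below are the same ones from the full listing above:

last reboot
reboot   system boot  2.6.23           Tue Dec  9 10:11          (04:39)
reboot   system boot  2.6.23           Tue Dec  2 09:58         (6+07:56)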


Categories: IT Architecture, Linux, Systems, Unix Tags: