Resolved - ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device eth0 does not seem to be present, delaying initialization.

If you encounter the error "ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device eth0 does not seem to be present, delaying initialization" on Linux, one possibility is that udev cannot detect the Ethernet device. Follow the steps below to resolve this:

  • Get the MAC address of eth0 from ILOM

-> show /System/Networking/Ethernet_NICs/Ethernet_NIC_0

/System/Networking/Ethernet_NICs/Ethernet_NIC_0
Targets:

Properties:
health = OK
health_details = -
location = NET0 (Ethernet NIC 0)
manufacturer = INTEL
part_number = X540
serial_number = Not Available
mac_addresses = 00:10:e0:0d:b4:f0

  • Configure the udev rule, changing the MAC address according to the result above

[root@test network-scripts]# cat /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:10:e0:0d:b4:f0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

  • Configure the network

[root@test network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO=static
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
DOMAIN=example.com
IPADDR=192.168.20.20
NETMASK=255.255.248.0

[root@test network-scripts]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=test.example.com
GATEWAY=10.240.192.1

[root@test network-scripts]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.20.20 test.example.com test
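The rule file simply ties the name eth0 to the MAC that ILOM reported. Here's a small sketch (a hypothetical helper, using the MAC from above) that prints the rule line so you can redirect it into /etc/udev/rules.d/70-persistent-net.rules instead of typing it by hand; a reboot (or a udev rules reload) is still needed afterwards:

```shell
# Print a persistent-net udev rule for a given MAC (hypothetical helper).
# Redirect the output into /etc/udev/rules.d/70-persistent-net.rules.
mac="00:10:e0:0d:b4:f0"   # MAC address reported by ILOM for NET0
printf 'SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="%s", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"\n' "$mac"
```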

 

resolved - net/core/dev.c:1894 skb_gso_segment+0x298/0x370()

Today on one of our servers, there were a lot of errors in /var/log/messages like below:

Apr 14 21:50:25 test01 kernel: WARNING: at net/core/dev.c:1894 skb_gso_segment+0x298/0x370()
Apr 14 21:50:25 test01 kernel: Hardware name: SUN FIRE X4170 M3
Apr 14 21:50:25 test01 kernel: : caps=(0x60014803, 0x0) len=255 data_len=215 ip_summed=1
Apr 14 21:50:25 test01 kernel: Modules linked in: dm_nfs nfs fscache auth_rpcgss nfs_acl xen_blkback xen_netback xen_gntdev xen_evtchn lockd sunrpc 8021q garp bridge stp llc bonding be2iscsi iscsi_boot_sysfs ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp bnx2i cnic uio ipv6 cxgb3i libcxgbi cxgb3 mdio libiscsi_tcp dm_round_robin libiscsi dm_multipath scsi_transport_iscsi xenfs xen_privcmd dm_mirror video sbs sbshc acpi_memhotplug acpi_ipmi ipmi_msghandler parport_pc lp parport sr_mod cdrom ixgbe hwmon dca snd_seq_dummy snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device snd_pcm_oss snd_mixer_oss snd_pcm snd_timer snd soundcore snd_page_alloc iTCO_wdt iTCO_vendor_support pcspkr ghes i2c_i801 hed i2c_core dm_region_hash dm_log dm_mod usb_storage ahci libahci sg shpchp megaraid_sas sd_mod crc_t10dif ext3 jbd mbcache
Apr 14 21:50:25 test01 kernel: Pid: 0, comm: swapper Tainted: G W 2.6.39-400.264.4.el5uek #1
Apr 14 21:50:25 test01 kernel: Call Trace:
Apr 14 21:50:25 test01 kernel: <IRQ> [<ffffffff8143dab8>] ? skb_gso_segment+0x298/0x370
Apr 14 21:50:25 test01 kernel: [<ffffffff8106f300>] warn_slowpath_common+0x90/0xc0
Apr 14 21:50:25 test01 kernel: [<ffffffff8106f42e>] warn_slowpath_fmt+0x6e/0x70
Apr 14 21:50:25 test01 kernel: [<ffffffff810d73a7>] ? irq_to_desc+0x17/0x20
Apr 14 21:50:25 test01 kernel: [<ffffffff812faf0c>] ? notify_remote_via_irq+0x2c/0x40
Apr 14 21:50:25 test01 kernel: [<ffffffff8100a820>] ? xen_clocksource_read+0x20/0x30
Apr 14 21:50:25 test01 kernel: [<ffffffff812faf4c>] ? xen_send_IPI_one+0x2c/0x40
Apr 14 21:50:25 test01 kernel: [<ffffffff81011f10>] ? xen_smp_send_reschedule+0x10/0x20
Apr 14 21:50:25 test01 kernel: [<ffffffff81056e0b>] ? ttwu_queue_remote+0x4b/0x60
Apr 14 21:50:25 test01 kernel: [<ffffffff81509a7e>] ? _raw_spin_unlock_irqrestore+0x1e/0x30
Apr 14 21:50:25 test01 kernel: [<ffffffff8143dab8>] skb_gso_segment+0x298/0x370
Apr 14 21:50:25 test01 kernel: [<ffffffff8143dba6>] dev_gso_segment+0x16/0x50
Apr 14 21:50:25 test01 kernel: [<ffffffff8143dfb5>] dev_hard_start_xmit+0x3d5/0x530
Apr 14 21:50:25 test01 kernel: [<ffffffff8145a074>] sch_direct_xmit+0xc4/0x1d0
Apr 14 21:50:25 test01 kernel: [<ffffffff8143e811>] dev_queue_xmit+0x161/0x410
Apr 14 21:50:25 test01 kernel: [<ffffffff815099de>] ? _raw_spin_lock+0xe/0x20
Apr 14 21:50:25 test01 kernel: [<ffffffffa045820c>] br_dev_queue_push_xmit+0x6c/0xa0 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffff81076e77>] ? local_bh_enable+0x27/0xa0
Apr 14 21:50:25 test01 kernel: [<ffffffffa045e7ba>] br_nf_dev_queue_xmit+0x2a/0x90 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa045f668>] br_nf_post_routing+0x1f8/0x2e0 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffff81467428>] nf_iterate+0x78/0x90
Apr 14 21:50:25 test01 kernel: [<ffffffff8146777c>] nf_hook_slow+0x7c/0x130
Apr 14 21:50:25 test01 kernel: [<ffffffffa04581a0>] ? br_forward_finish+0x70/0x70 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa04581a0>] ? br_forward_finish+0x70/0x70 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa0458130>] ? br_flood_deliver+0x20/0x20 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa0458186>] br_forward_finish+0x56/0x70 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa045eba4>] br_nf_forward_finish+0xb4/0x180 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa045f36f>] br_nf_forward_ip+0x26f/0x370 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffff81467428>] nf_iterate+0x78/0x90
Apr 14 21:50:25 test01 kernel: [<ffffffff8146777c>] nf_hook_slow+0x7c/0x130
Apr 14 21:50:25 test01 kernel: [<ffffffffa0458130>] ? br_flood_deliver+0x20/0x20 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffff81467428>] ? nf_iterate+0x78/0x90
Apr 14 21:50:25 test01 kernel: [<ffffffffa0458130>] ? br_flood_deliver+0x20/0x20 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa04582c8>] __br_forward+0x88/0xc0 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa0458356>] br_forward+0x56/0x60 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa04591fc>] br_handle_frame_finish+0x1ac/0x240 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffffa045ee1b>] br_nf_pre_routing_finish+0x1ab/0x350 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffff8115bfe9>] ? kmem_cache_alloc_trace+0xc9/0x1a0
Apr 14 21:50:25 test01 kernel: [<ffffffffa045fc55>] br_nf_pre_routing+0x305/0x370 [bridge]
Apr 14 21:50:25 test01 kernel: [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
Apr 14 21:50:25 test01 kernel: [<ffffffff81467428>] nf_iterate+0x78/0x90
Apr 14 21:50:25 test01 kernel: [<ffffffff8146777c>] nf_hook_slow+0x7c/0x130

To fix this, we should disable LRO (large receive offload) first:

for i in eth0 eth1 eth2 eth3;do /sbin/ethtool -K $i lro off;done

And if the NICs are Intel 10G, then we should disable GRO (generic receive offload) too:

for i in eth0 eth1 eth2 eth3;do /sbin/ethtool -K $i gro off;done

Here's the command to disable both LRO and GRO:

for i in eth0 eth1 eth2 eth3;do /sbin/ethtool -K $i gro off;/sbin/ethtool -K $i lro off;done
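Rather than hard-coding the interface names, the same loop can be sketched against whatever ethN devices the host actually has (assumes ethtool lives at /sbin/ethtool; unsupported-feature errors are simply ignored):

```shell
# Sketch: disable GRO and LRO on every ethN device present on the host.
for path in /sys/class/net/eth*; do
    dev=${path##*/}                      # strip the directory, keep "eth0" etc.
    /sbin/ethtool -K "$dev" gro off 2>/dev/null
    /sbin/ethtool -K "$dev" lro off 2>/dev/null
done
```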

 

resolved - yum Error performing checksum Trying other mirror and finally No more mirrors to try

Today when I was installing a package on Linux, the error below appeared:

[root@testhost yum.repos.d]# yum list --disablerepo=* --enablerepo=yumpaas
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
yumpaas | 2.9 kB 00:00
yumpaas/primary_db | 30 kB 00:00
http://yumrepo.example.com/paas_oel5/repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2: [Errno -3] Error performing checksum
Trying other mirror.
yumpaas/primary_db | 30 kB 00:00
http://yumrepo.example.com/paas_oel5/repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2: [Errno -3] Error performing checksum
Trying other mirror.
Error: failure: repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2 from yumpaas: [Errno 256] No more mirrors to try.

The repo "yumpaas" is hosted on an OEL 6 VM, which by default uses sha2 for checksums. However, on OEL 5 VMs (the VM running yum here), yum uses sha1 by default. So I worked around this by installing python-hashlib, which extends yum's ability to handle sha2 (python-hashlib comes from the external EPEL repo).

[root@testhost yum.repos.d]# yum install python-hashlib

After this, the problematic repo could be used. But to resolve the issue permanently, without the workaround on every OEL 5 VM, we should recreate the repo with sha1 as the checksum algorithm (createrepo -s sha1).
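The mismatch is easy to see from the digests themselves: sha1 produces 40 hex characters, sha256 produces 64, and stock OEL 5 yum only knows how to compute the former. A quick illustration (the repo path in the comment is hypothetical):

```shell
# sha1 (40 hex chars) vs sha256 (64 hex chars) of the same input:
printf 'test' | sha1sum
printf 'test' | sha256sum
# To rebuild the repo so stock OEL 5 clients can parse it (run on the repo host):
# createrepo -s sha1 /path/to/paas_oel5
```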

resolved - Unable to locally verify the issuer's authority

Today we found the below error in a WebLogic server log:

[2015-08-16 14:38:21,866] [WARN] [N/A] [testhost.example.com] [Thrown exception when ...... javax.net.ssl.SSLHandshakeException: General SSLEngine problem
......
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
......
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:323)

And we ran a test on the Linux server:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:15:56-- https://testurl.example.com/
Resolving testurl.example.com... 192.168.77.230
Connecting to testurl.example.com|192.168.77.230|:443... connected.
ERROR: cannot verify testurl.example.com's certificate, issued by `/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4':
Unable to locally verify the issuer's authority.
To connect to testurl.example.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.

And tested using openssl s_client:

[root@testhost ~]# openssl s_client -connect testurl.example.com:443 -showcerts
CONNECTED(00000003)
depth=1 /C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
verify error:num=2:unable to get issuer certificate
issuer= /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify return:0
---
Certificate chain
0 s:/C=US/ST=California/L=Redwood Shores/O=Test Corporation/OU=FOR TESTING PURPOSES ONLY/CN=*.testurl.example.com
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
-----BEGIN CERTIFICATE-----
MIIFOTCCBCGgAwIBAgIQB1z+1jMRPPv/TrBA2qh5YzANBgkqhkiG9w0BAQsFADB+
MQswCQYDVQQGEwJVUzEdMBsGA
......
-----END CERTIFICATE-----
---
Server certificate
subject=/C=US/ST=California/L=Redwood Shores/O=Test Corporation/OU=FOR TESTING PURPOSES ONLY/CN=*.testurl.example.com
issuer=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
---
No client certificate CA names sent
---
SSL handshake has read 1510 bytes and written 447 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : AES128-SHA
Session-ID: 2B5E228E4705C12D7EE7B5C608AE5E3781C5626C1A99A8D60D21DB2350D4F4DE
Session-ID-ctx:
Master-Key: F06632FD67E22EFB1134CDC0EEE8B8B44972747D804D5C7969095FCB2A692E90F111DC4FC6081F4D94C7561BB556DA16
Key-Arg : None
Krb5 Principal: None
Start Time: 1439793731
Timeout : 300 (sec)
Verify return code: 2 (unable to get issuer certificate)
---

From the above, the issue should be caused by a missing CA file, as the SSL certificate is relatively new and signed with sha2. So I decided to download the CA files from Symantec, both "Symantec Class 3 Secure Server CA - G4" and "VeriSign Class 3 Public Primary Certification Authority - G5".

[root@testhost ~]# wget --no-check-certificate 'http://symantec.tbs-certificats.com/SymantecSSG4.crt'

[root@testhost ~]# openssl version -a|grep OPENSSLDIR

OPENSSLDIR: "/etc/pki/tls"

[root@testhost ~]# cp /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak

[root@testhost ~]# openssl x509 -text -in "SymantecSSG4.crt" >> /etc/pki/tls/certs/ca-bundle.crt

Then I tested wget again, but got the same error:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:38:59-- https://testurl.example.com/
Resolving myproxy.example.com... 192.168.19.20
Connecting to myproxy.example.com|192.168.19.20|:80... connected.
ERROR: cannot verify testurl.example.com's certificate, issued by `/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4':
unable to get issuer certificate
To connect to testurl.example.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.

Then I installed the other CA indicated in the error log (the G4 cert above may not have been needed):

[root@testhost1 ~]# wget --no-check-certificate 'https://www.symantec.com/content/en/us/enterprise/verisign/roots/VeriSign-Class%203-Public-Primary-Certification-Authority-G5.pem'

[root@testhost1 ~]# openssl x509 -text -in "VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem" >> /etc/pki/tls/certs/ca-bundle.crt

This time, wget passed https:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:40:03-- https://testurl.example.com/
Resolving myproxy.example.com... 192.168.19.20
Connecting to myproxy.example.com|192.168.19.20|:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: 11028 (11K) [text/html]
Saving to: `index.html'

100%[======================================>] 11,028 --.-K/s in 0.02s

2015-08-17 10:40:03 (526 KB/s) - `index.html' saved [11028/11028]
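The fix works because openssl walks the presented chain until it reaches a self-signed root present in /etc/pki/tls/certs/ca-bundle.crt. That mechanism can be demonstrated with a throwaway self-signed "root CA" (file names here are hypothetical, not from the incident above):

```shell
# Generate a throwaway self-signed root and verify it against itself --
# the same chain check wget/openssl perform against the system CA bundle.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Demo Root CA" -keyout demo-ca.key -out demo-ca.crt 2>/dev/null
openssl verify -CAfile demo-ca.crt demo-ca.crt
# prints "demo-ca.crt: OK"
```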

PS:
1. You can check more info here about adding trusted root certificates to the server - Mac OS X / Windows / Linux (Ubuntu, Debian) / Linux (CentOs 6) / Linux (CentOs 5)

2. To use a private CA, a public cert, and a private key with curl, use:

curl -v --cacert your-root-ca.crt --cert your-public-cert.crt --key your-private.key --pass mypass -u "username:password" https://url

In this command, your-public-cert.crt is the public cert that you have trusted, your-private.key is the private RSA key used to sign the request, and "username:password" should be replaced with the correct username and password.

Also, if you're using an intermediate cert, you can provide it in one command like so:

curl -v --cacert your-root-ca.crt --cert <(cat your-public-cert.crt intermediate.crt) --key your-private.key --pass mypass -u "username:password" https://url

change NIC configuration to make new VLAN tag take effect

Sometimes you may want to add a VLAN tag to an existing NIC. After the addition, you'll need to point the DNS names bound to the old tag at new IPs in the newly added VLAN. Once those two steps are done, you'll need to make changes on the hosts (taking Linux as the example) to put everything into effect.

In this example, I'm going to move the old v118_FE to the new VLAN v117_FE.

ifconfig v118_FE down
ifconfig bond0.118 down
cd /etc/sysconfig/network-scripts
mv ifcfg-bond0.118 ifcfg-bond0.117
vi ifcfg-bond0.117
    DEVICE=bond0.117
    BOOTPROTO=none
    USERCTL=no
    ONBOOT=yes
    BRIDGE=v117_FE
    VLAN=yes
mv ifcfg-v118_FE ifcfg-v117_FE
vi ifcfg-v117_FE
    DEVICE=v117_FE
    BOOTPROTO=none
    USERCTL=no
    ONBOOT=yes
    STP=off
    TYPE=Bridge
    IPADDR=10.119.236.13
    NETMASK=255.255.248.0
    NETWORK=10.119.232.0
    BROADCAST=10.119.239.255
ifup v117_FE
ifup bond0.117
reboot
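The two mv + vi steps above can also be sketched as a small loop that copies each config file to its new name while rewriting the VLAN id inside (hedged: this assumes "118" appears in those files only where it denotes the VLAN):

```shell
# Copy each ifcfg file to its new name, rewriting 118 -> 117 inside it.
# Caution: assumes "118" only ever appears as the VLAN id in these files.
cd /etc/sysconfig/network-scripts
for f in ifcfg-bond0.118 ifcfg-v118_FE; do
    new=${f//118/117}
    sed 's/118/117/g' "$f" > "$new"
done
```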

resolved - VPN Service not available, The VPN agent service is not responding. Please restart this application after a while.

Today when I tried to connect to VPN through Cisco AnyConnect Secure Mobility Client, the following error dialog appeared:

 

VPN Service not available

And after I clicked the "OK" button, another dialog appeared:

The VPN agent service is not responding

Both dialogs were complaining that the VPN service was not available or not responding. So I ran "services.msc" from the Windows Run box and checked the VPN-related service:

When I checked, the service "Cisco AnyConnect Secure Mobility Agent" was stopped, and its "Startup type" was "Manual". So I changed "Startup type" to "Automatic", clicked "Start", then "OK" to save.

After this, Cisco AnyConnect Secure Mobility Client ran fine and I could connect to the VPN through it.

TCP Window Scaling - values about TCP buffer size

TCP Window Scaling (TCP socket buffer size, TCP window size)

/proc/sys/net/ipv4/tcp_window_scaling # 1 enables window scaling
/proc/sys/net/ipv4/tcp_rmem - memory reserved for TCP rcv buffers. minimum, initial and maximum buffer size
/proc/sys/net/ipv4/tcp_wmem - memory reserved for TCP send buffers
/proc/sys/net/core/rmem_max - maximum receive window
/proc/sys/net/core/wmem_max - maximum send window

The following values (which are the defaults for 2.6.17 with more than 1 GByte of memory) would be reasonable for all paths with a 4MB BDP (bandwidth-delay product) or smaller:

echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf #autotuning enabled. The receiver buffer size (and TCP window size) is dynamically updated (autotuned) for each connection. (Sender side autotuning has been present and unconditionally enabled for many years now).
echo 108544 > /proc/sys/net/core/wmem_max
echo 108544 > /proc/sys/net/core/rmem_max
echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem
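The echo commands above only last until reboot. To persist them, the equivalent keys can go in /etc/sysctl.conf (same values as above, applied at boot or via sysctl -p):

```
# /etc/sysctl.conf -- persistent equivalents of the echo commands above
net.ipv4.tcp_moderate_rcvbuf = 1
net.core.wmem_max = 108544
net.core.rmem_max = 108544
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
```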

Advanced TCP features

cat /proc/sys/net/ipv4/tcp_timestamps # more is here (allows more accurate RTT measurements for deriving the retransmission timeout estimator; protects against old segments from previous incarnations of the TCP connection; allows detection of unnecessary retransmissions. But enabling it also lets others guess the uptime of a target system.)
cat /proc/sys/net/ipv4/tcp_sack

Here are some background knowledge:

  • The throughput of a communication is limited by two windows: the congestion window and the receive window(TCP congestion window is maintained by the sender, and TCP window size is maintained by the receiver). The former tries not to exceed the capacity of the network (congestion control) and the latter tries not to exceed the capacity of the receiver to process data (flow control). The receiver may be overwhelmed by data if for example it is very busy (such as a Web server). Each TCP segment contains the current value of the receive window. If for example a sender receives an ack which acknowledges byte 4000 and specifies a receive window of 10000 (bytes), the sender will not send packets after byte 14000, even if the congestion window allows it.
  • TCP uses what is called the "congestion window", or CWND, to determine how many packets can be sent at one time. The larger the congestion window size, the higher the throughput. The TCP "slow start" and "congestion avoidance" algorithms determine the size of the congestion window. The maximum congestion window is related to the amount of buffer space that the kernel allocates for each socket. For each socket, there is a default value for the buffer size, which can be changed by the program using a system library call just before opening the socket. There is also a kernel enforced maximum buffer size. The buffer size can be adjusted for both the send and receive ends of the socket.
  • To get maximal throughput it is critical to use optimal TCP send and receive socket buffer sizes for the link you are using. If the buffers are too small, the TCP congestion window will never fully open up. If the receiver buffers are too large, TCP flow control breaks and the sender can overrun the receiver, which will cause the TCP window to shut down. This is likely to happen if the sending host is faster than the receiving host. Overly large windows on the sending side is not usually a problem as long as you have excess memory; note that every TCP socket has the potential to request this amount of memory even for short connections, making it easy to exhaust system resources.
  • More about TCP Buffer Sizing is here.
  • More about /proc/sys/net/ipv4/* Variables is here.
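The "4MB BDP" figure mentioned earlier comes straight from bandwidth × round-trip time: the buffer must hold everything in flight to keep the pipe full. A quick sketch with hypothetical numbers:

```shell
# Bandwidth-delay product: the buffer size needed to keep a path full.
# Example: 1 Gbit/s link with a 32 ms RTT (hypothetical figures).
bits_per_sec=1000000000
rtt_ms=32
echo $(( bits_per_sec / 8 * rtt_ms / 1000 ))   # -> 4000000 bytes, i.e. ~4 MB
```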

resolved - openssl error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm

Today when I tried using curl to fetch a URL, the following error occurred:

[root@centos-doxer ~]# curl -i --user username:password -H "Content-Type: application/json" -X POST --data @/u01/shared/addcredential.json https://testserver.example.com/actions -v

* About to connect() to testserver.example.com port 443
*   Trying 10.242.11.201... connected
* Connected to testserver.example.com (10.242.11.201) port 443
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSLv2, Client hello (1):
SSLv3, TLS handshake, Server hello (2):
SSLv3, TLS handshake, CERT (11):
SSLv3, TLS alert, Server hello (2):
error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm
* Closing connection #0

After some searching, I found that it's caused by the current version of openssl (openssl-0.9.8e) not supporting the SHA256 signature algorithm. To resolve this, there are a few ways:

1. add -k parameter to curl to ignore the SSL error

2. update openssl to "OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008", just try "yum update openssl".

# openssl version
OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

3. upgrade openssl to at least openssl-0.9.8o (this may not be needed; try method 2 above first). Here's how to upgrade:

wget --no-check-certificate 'https://www.openssl.org/source/old/0.9.x/openssl-0.9.8o.tar.gz'
tar zxvf openssl-0.9.8o.tar.gz
cd openssl-0.9.8o
./config --prefix=/usr --openssldir=/usr/openssl
make
make test
make install

After this, run openssl version to confirm:

[root@centos-doxer openssl-0.9.8o]# /usr/bin/openssl version
OpenSSL 0.9.8o 01 Jun 2010
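To confirm that a certificate really is signed with a sha2 digest (the thing 0.9.8e chokes on), inspect its Signature Algorithm field. A self-contained sketch using a throwaway cert (file names are hypothetical):

```shell
# Generate a throwaway sha256-signed cert, then read back its digest algorithm.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -sha256 \
    -subj "/CN=demo" -keyout demo.key -out demo.crt 2>/dev/null
openssl x509 -in demo.crt -noout -text | grep 'Signature Algorithm' | head -1
# the output should show sha256WithRSAEncryption
```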

PS:

  • If openssl was originally installed from an rpm package, rpm will still report the old version even after you install the new one from source. This is expected, so don't rely too much on rpm:

[root@centos-doxer openssl-0.9.8o]# /usr/bin/openssl version
OpenSSL 0.9.8o 01 Jun 2010

Even after rebuilding the rpm DB (rpm --rebuilddb), rpm still reports the old version:

[root@centos-doxer openssl-0.9.8o]# rpm -qf /usr/bin/openssl
openssl-0.9.8e-26.el5_9.1
openssl-0.9.8e-26.el5_9.1

[root@centos-doxer openssl-0.9.8o]# rpm -qa|grep openssl
openssl-0.9.8e-26.el5_9.1
openssl-devel-0.9.8e-26.el5_9.1
openssl-0.9.8e-26.el5_9.1
openssl-devel-0.9.8e-26.el5_9.1

  • If you met the error "Unknown SSL protocol error in connection to xxx" (curl) or "write:errno=104" (openssl) on OEL5, it's because the curl/openssl shipped with OEL5 do not support TLSv1.2 (run "curl --help|grep -i tlsv1.2" to confirm). You need to manually compile & install curl/openssl to fix this. More info here. (You can softlink /usr/local/bin/curl to /usr/bin/curl, but can leave openssl as it is.)
    • OpenSSL to /usr/local/ssl (won't overwrite the default openssl)
      • wget --no-check-certificate 'https://www.openssl.org/source/openssl-1.0.2l.tar.gz'
        tar zxvf openssl-1.0.2l.tar.gz
        cd openssl-1.0.2l
        export PATH=/usr/bin:/u01/Allshared/JAVA7/jdk1.7.0_80/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin #to avoid /usr/local readonly error (autofs,perl "/usr/local/bin/perl5: error while loading shared libraries: libdb.so.3: cannot open shared object file: No such file or directory", then export PATH to exclude /usr/local/bin/)
        ./config -fpic shared && make && make install
        echo "/usr/local/ssl/lib" >> /etc/ld.so.conf
        ldconfig
        ls -l /usr/bin/openssl*
        openssl version
      • You will need to specify the openssl lib path if you compile other packages (such as python):
        • ./configure --prefix=/usr/local/ --enable-shared LDFLAGS="-L/usr/local/ssl/lib" #if you are using non-standard Lib path
  • Curl to /usr/local/curl/bin/curl
    • wget --no-check-certificate 'http://curl.haxx.se/download/curl-7.53.1.tar.gz'
      tar zxvf curl-7.53.1.tar.gz
cd curl-7.53.1
      ./configure --prefix=/usr/local/curl --with-ssl=/usr/local/ssl LDFLAGS="-L/usr/local/ssl/lib" --disable-ldap
      ls -l /usr/bin/curl*
      mv /usr/bin/curl /usr/bin/curl.old; ln -s /usr/local/curl/bin/curl /usr/bin/curl
      curl -V

resolved - high value of RX overruns in ifconfig

Today we tried to ssh to one host and it soon got stuck. We observed that the RX overruns count in the ifconfig output was high:

[root@test /]# ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:10:E0:0D:AD:5E
inet6 addr: fe80::210:e0ff:fe0d:ad5e/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:140234052 errors:0 dropped:0 overruns:12665 frame:0
TX packets:47259596 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:34204561358 (31.8 GiB) TX bytes:21380246716 (19.9 GiB)

Receiver overruns usually occur when packets arrive faster than the kernel can service the previous interrupt. In our case, though, we were also seeing increasing inbound errors on interface Eth105/1/7 of device bcd-c1z1-swi-5k07a/b; a shut/no-shut made no difference. After some more debugging, we found one bad SFP/cable (an uplink port, e.g. a WAN port; more info here). After replacing it, the server came back to normal.
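To tell whether the counter is still climbing (a live problem rather than a historical one), you can pull out just the overruns figure and sample it a few seconds apart. A small sketch for the old-style ifconfig output format shown above:

```shell
# Extract only the RX overruns number from ifconfig output (old-style format).
ifconfig bond0 | awk -F'overruns:' '/RX packets/ {print $2 + 0}'
# Run it twice a few seconds apart: a growing number means drops are ongoing.
```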

example-swi-5k07a# sh int Eth105/1/7 | i err
4065 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 output error 0 collision 0 deferred 0 late collision

example-swi-5k07a# sh int Eth105/1/7 | i err
4099 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 output error 0 collision 0 deferred 0 late collision

example-swi-5k07a# sh int Eth105/1/7 counters errors

--------------------------------------------------------------------------------
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
--------------------------------------------------------------------------------
Eth105/1/7 3740 483 0 4223 0 0

--------------------------------------------------------------------------------
Port Single-Col Multi-Col Late-Col Exces-Col Carri-Sen Runts
--------------------------------------------------------------------------------
Eth105/1/7 0 0 0 0 0 3740

--------------------------------------------------------------------------------
Port Giants SQETest-Err Deferred-Tx IntMacTx-Er IntMacRx-Er Symbol-Err
--------------------------------------------------------------------------------
Eth105/1/7 0 -- 0 0 0 0

example-swi-5k07a# sh int Eth105/1/7 counters errors

--------------------------------------------------------------------------------
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
--------------------------------------------------------------------------------
Eth105/1/7 4386 551 0 4937 0 0

--------------------------------------------------------------------------------
Port Single-Col Multi-Col Late-Col Exces-Col Carri-Sen Runts
--------------------------------------------------------------------------------
Eth105/1/7 0 0 0 0 0 4386

--------------------------------------------------------------------------------
Port Giants SQETest-Err Deferred-Tx IntMacTx-Er IntMacRx-Er Symbol-Err
--------------------------------------------------------------------------------
Eth105/1/7 0 -- 0 0 0 0

PS:

During debugging, we also found that on the server side the interface eth0 was at half duplex and 100Mb/s:

-bash-3.2# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Half
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000003 (3)
Link detected: yes

However, it should be full duplex at 1000Mb/s. So we also changed the speed/duplex to auto on the switch, and after that the OS side showed the expected values:

-bash-3.2# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000003 (3)
Link detected: yes
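If the host-side setting needs to survive reboots, RHEL-family ifcfg files accept an ETHTOOL_OPTS line that ifup applies when the interface comes up (a sketch; the values are illustrative):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
ETHTOOL_OPTS="autoneg on"
# or, to force the values instead of negotiating:
# ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
```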

resolved - auditd STDERR: Error deleting rule Error sending enable request (Operation not permitted)

Today when I tried to restart auditd, the following error message appeared:

[2014-09-18T19:26:41+00:00] ERROR: service[auditd] (cookbook-devops-kernelaudit::default line 14) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /sbin/service auditd restart ----
STDOUT: Stopping auditd: [  OK  ]
Starting auditd: [FAILED]
STDERR: Error deleting rule (Operation not permitted)
Error sending enable request (Operation not permitted)
---- End output of /sbin/service auditd restart ----
Ran /sbin/service auditd restart returned 1

After some reading of the auditd manpage, I realized that when the audit "enabled" flag is set to 2 (locked), any attempt to change the configuration is audited and denied. That may well be the reason for "STDERR: Error deleting rule (Operation not permitted)" and "Error sending enable request (Operation not permitted)". Here's the relevant part of the auditctl man page:

-e [0..2] Set enabled flag. When 0 is passed, this can be used to temporarily disable auditing. When 1 is passed as an argument, it will enable auditing. To lock the audit configuration so that it can't be changed, pass a 2 as the argument. Locking the configuration is intended to be the last command in audit.rules for anyone wishing this feature to be active. Any attempt to change the configuration in this mode will be audited and denied. The configuration can only be changed by rebooting the machine.

You can run auditctl -s to check the current setting:

[root@centos-doxer ~]# auditctl -s
AUDIT_STATUS: enabled=1 flag=1 pid=3154 rate_limit=0 backlog_limit=320 lost=0 backlog=0

And you can run auditctl -e <0|1|2> to change this attribute on the fly, or add -e <0|1|2> in /etc/audit/audit.rules. Note that once the configuration is locked (2), a reboot is required for any change to take effect.
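In practice that means finding the -e line in the rules file and relaxing it from 2 to 1, then rebooting (a sketch; assumes the stock /etc/audit/audit.rules path):

```shell
# Show the enable flag currently set by the rules file; "-e 2" means locked.
grep -n '^-e' /etc/audit/audit.rules
# Relax it to plain "enabled" (1) so the config can change after the next reboot:
# sed -i 's/^-e 2/-e 1/' /etc/audit/audit.rules && reboot
```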

PS:

Here's more about linux audit.