differences between Server Connection Time, Server Response Time, Page Load Time and Page Download Time

May 31st, 2012

Here's an excerpt from Google Analytics:

Avg. Server Connection Time (sec): 0.12 The average amount of time (in seconds) spent establishing a TCP connection for this page.

Avg. Server Response Time (sec): 0.80 The average amount of time (in seconds) your server takes to respond to a user request, including the network time from user’s location to your server.

Avg. Page Load Time (sec): 7.85 Avg. Page Load Time is the average amount of time (in seconds) it takes for pages from the sample set to load, from initiation of the pageview (e.g. click a page link) to load completion in the browser. If you see zero (0) as a value, please refer to the Site Speed article.

Avg. Page Download Time (sec): 2.08 The average amount of time (in seconds) to download this page.
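You can approximate the first three of these phases yourself with curl's timing variables (a rough sketch; www.example.com is a placeholder URL, and since curl fetches only the single HTML document and runs no JavaScript, there is no curl equivalent of Avg. Page Load Time):

```shell
# time_connect                       ~ Server Connection Time (TCP handshake)
# time_starttransfer - time_connect  ~ Server Response Time (wait for first byte)
# time_total - time_starttransfer    ~ Page Download Time (body transfer)
curl -o /dev/null -s -w \
  'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
  http://www.example.com/
```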

For example, my site is like this:

[chart: Server Response Time]
1. You can read more about how to use Google Analytics Site Speed here: http://support.google.com/analytics/bin/answer.py?hl=en-us&topic=1120718&answer=1205784

2. If you want to break down your site's loading time by resource type (js/css/html/cgi/php), Firebug is your friend. You can refer to the following two links on how to use Firebug:



Categories: IT Architecture Tags:


May 30th, 2012

Here are some differences between SCSI, iSCSI, FCP, FCoE, FCIP, NFS, CIFS, DAS, NAS and SAN (excerpted from the Internet):

Most storage networks use the SCSI protocol for communication between servers and disk drive devices. A mapping layer to other protocols is used to form a network: Fibre Channel Protocol (FCP), the most prominent one, is a mapping of SCSI over Fibre Channel; Fibre Channel over Ethernet (FCoE); iSCSI, mapping of SCSI over TCP/IP.


A storage area network (SAN) is a dedicated network that provides access to consolidated, block level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system.

Historically, data centers first created "islands" of SCSI disk arrays as direct-attached storage (DAS), each dedicated to an application, and visible as a number of "virtual hard drives" (i.e. LUNs). Operating systems maintain their own file systems on their own dedicated, non-shared LUNs, as though they were local to themselves. If multiple systems were simply to attempt to share a LUN, they would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires advanced solutions, such as SAN file systems or clustered computing.

Despite such issues, SANs help to increase storage capacity utilization, since multiple servers consolidate their private storage space onto the disk arrays. Sharing storage usually simplifies storage administration and adds flexibility, since cables and storage devices do not have to be physically moved to shift storage from one server to another. SANs also tend to enable more effective disaster recovery processes: a SAN could span a distant location containing a secondary storage array, which enables storage replication implemented by disk array controllers, by server software, or by specialized SAN devices.
Since IP WANs are often the least costly method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to allow SAN extension over IP networks. The traditional physical SCSI layer could only support a few meters of distance - not nearly enough to ensure business continuance in a disaster.

More about FCIP is here: http://en.wikipedia.org/wiki/Fibre_Channel_over_IP (it still uses the FC protocol, tunneled over IP)

A competing technology to FCIP is known as iFCP. It uses routing instead of tunneling to enable connectivity of Fibre Channel networks over IP.

IP SAN uses TCP as a transport mechanism for storage over Ethernet, and iSCSI encapsulates SCSI commands into TCP packets, thus enabling the transport of I/O block data over IP networks.
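On Linux, the initiator side of iSCSI typically looks roughly like the following, using iscsiadm from the open-iscsi package (the portal IP and target IQN below are made-up examples):

```shell
# Discover the targets (and hence their LUNs) exported by an iSCSI portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.100

# Log in to a discovered target; the kernel then presents its LUNs
# as ordinary block devices (/dev/sdX), just like local disks
iscsiadm -m node -T iqn.2012-05.org.doxer:storage.target01 -p 192.168.1.100 --login

# List the active iSCSI sessions
iscsiadm -m session
```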

Network-attached storage (NAS), in contrast to SAN, uses file-based protocols such as NFS or SMB/CIFS where it is clear that the storage is remote, and computers request a portion of an abstract file rather than a disk block. The key difference between direct-attached storage (DAS) and NAS is that DAS is simply an extension to an existing server and is not necessarily networked. NAS is designed as an easy and self-contained solution for sharing files over the network.


FCoE works with standard Ethernet cards, cables and switches to handle Fibre Channel traffic at the data link layer, using Ethernet frames to encapsulate, route, and transport FC frames across an Ethernet network from one switch with Fibre Channel ports and attached devices to another, similarly equipped switch.


When an end user or application sends a request, the operating system generates the appropriate SCSI commands and data request, which then go through encapsulation and, if necessary, encryption procedures. A packet header is added before the resulting IP packets are transmitted over an Ethernet connection. When a packet is received, it is decrypted (if it was encrypted before transmission), and disassembled, separating the SCSI commands and request. The SCSI commands are sent on to the SCSI controller, and from there to the SCSI storage device. Because iSCSI is bi-directional, the protocol can also be used to return data in response to the original request.


Fibre Channel is more flexible; devices can be as far as ten kilometers (about six miles) apart if optical fiber is used as the physical medium. Optical fiber is not required for shorter distances, however, because Fibre Channel also works over coaxial cable and ordinary telephone twisted pair.


Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984,[1] allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed. On the contrary, CIFS is its Windows-based counterpart used in file sharing.
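In practice, the two file protocols are mounted very similarly on a Linux client (the server names, export path, share name and username below are made-up examples):

```shell
# NFS: mount a remote export; the server's filesystem appears under /mnt/nfs
mount -t nfs nfsserver:/export/data /mnt/nfs

# CIFS/SMB: mount a Windows (or Samba) share
mount -t cifs //winserver/share /mnt/cifs -o username=testuser

# Either way, the client sees files, not disk blocks -- unlike iSCSI/FC LUNs
df -h /mnt/nfs /mnt/cifs
```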

Categories: Hardware, NAS, SAN, Storage Tags:

impact of restarting vxconfigd on solaris and linux – VxVM Configuration Daemon

May 30th, 2012

Stopping and restarting the VxVM Configuration Daemon (vxconfigd) may cause your VxVA, VMSA and/or VEA session to exit. It may also cause a momentary stoppage of any VxVM configuration actions. This should not harm any data; however, it may cause some configuration operations (e.g. moving subdisks, plex resynchronization) to abort unexpectedly. Any VxVM configuration changes should be completed before restarting the daemon.

If you are using EMC PowerPath devices with Veritas Volume Manager, you must run the EMC command 'powervxvm setup' (or 'safevxvm setup') and/or 'powervxvm online' (or 'safevxvm online') if this script terminates abnormally. Also, if VCS service groups are running on the host, restarting vxconfigd may cause a failover to occur, so you'd better freeze the service groups before doing this. You can refer to the following for details: http://www.doxer.org/differences-between-freezing-vcs-system-and-freezing-service-group/
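A rough sketch of the freeze-then-restart sequence using standard VCS and VxVM commands (the service group name "mygroup" is a made-up example):

```shell
# Make the VCS configuration writable, then freeze the service group so
# that restarting vxconfigd cannot trigger a failover
haconf -makerw
hagrp -freeze mygroup -persistent
haconf -dump -makero

# Restart the VxVM configuration daemon (-k kills the running instance
# and starts a new one)
vxconfigd -k

# Once everything is healthy again, unfreeze the group
haconf -makerw
hagrp -unfreeze mygroup -persistent
haconf -dump -makero
```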

Categories: Clouding, HA, HA & HPC, IT Architecture Tags:

check lun0 is the first mapped LUN before rescan-scsi-bus.sh (sg3_utils) on centos linux

May 26th, 2012

rescan-scsi-bus.sh from the package sg3_utils scans all the SCSI buses on the system, updating the SCSI layer to reflect new devices on the bus. But in order for this to work, LUN0 must be the first mapped logical unit. Here's an excerpt from the Wikipedia page:

LUN 0: There is one LUN which is required to exist in every target: zero. The logical unit with LUN zero is special in that it must implement a few specific commands, most notably Report LUNs, which is how an initiator can find out all the other LUNs in the target. But LUN zero need not provide any other services, such as a storage volume.
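You can issue that Report LUNs command yourself with sg_luns, which ships in the same sg3_utils package (/dev/sg0 is an example SCSI generic device node):

```shell
# Send a SCSI REPORT LUNS command to the target behind /dev/sg0;
# the reply lists every LUN the target exposes.
# rescan-scsi-bus.sh relies on this same mechanism when probing targets.
sg_luns /dev/sg0
```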

To confirm LUN0 is the first mapped LUN, do the following check if you're using symantec storage foundation:

# print each FA:port that does NOT have LUN 000 mapped as its first device
syminq -pdevfile | awk '!/^#/ {print $1,$4,$5}' | sort -n | uniq | \
while read _sym _FA _port; do
    if [[ -z "$(symcfg -sid $_sym -fa $_FA -p $_port -addr list | awk '$NF=="000"')" ]]; then
        print Sym $_sym, FA $_FA:$_port
    fi
done

If the loop above prints nothing, and symcfg's address listing shows LUN address 000 as in the following output, then lun0 is the first mapped LUN and you can continue with the script rescan-scsi-bus.sh to scan for new LUNs:

Symmetrix ID: 000287890217

Director Device Name Attr Address
---------------------- ----------------------------- ---- --------------
Ident Symbolic Port Sym Physical VBUS TID LUN
------ -------- ---- ---- ----------------------- ---- --- ---

FA-4A 04A 0 0000 c1t600604844A56CA43d0s* VCM 0 00 000


For more information on what a Logical Unit Number (LUN) is, you may refer to:


Categories: Hardware, SAN, Storage Tags:

solaris format disk label – changing a disk label (EFI / SMI)

May 24th, 2012

I had inserted a drive into a V440 and after running devfsadm, I ran format on the disk. I was presented with the following partition table:

partition> p
Current partition table (original):
Total disk sectors available: 143358320 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector
0 usr wm 34 68.36GB 143358320
1 unassigned wm 0 0 0
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 143358321 8.00MB 143374704

This disk was used in a zfs pool and, as a result, uses an EFI label. The more familiar label that is used is an SMI label (8 slices; numbered 0-7 with slice 2 being the whole disk). The advantage of the EFI label is that it supports LUNs over 1TB in size and prevents overlapping partitions by providing a whole-disk device called cxtydz rather than using cxtydzs2.
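A quick way to tell the two labels apart without walking through format's menus is to print the VTOC (a sketch; the device name is an example, and on an EFI disk you may need the whole-disk node cxtydz rather than the s2 slice):

```shell
# An EFI-labeled disk reports sizes in sectors and carries a reserved
# slice 8, while an SMI-labeled disk reports cylinder geometry and only
# slices 0-7 -- matching the two partition tables shown in this post.
prtvtoc /dev/rdsk/c1t1d0s2
```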

However, I want to use this disk for UFS partitions. This means I need to put the SMI label back on the device. Here’s how it’s done:

# format -e
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will erase all
current partitions.
Continue? y
Auto configuration via format.dat[no]?
Auto configuration via generic SCSI-2[no]?
partition> q
format> q

Running format again will show that the SMI label was placed back onto the disk:

partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 25 129.19MB (26/0/0) 264576
1 swap wu 26 - 51 129.19MB (26/0/0) 264576
2 backup wu 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 52 - 14086 68.10GB (14035/0/0) 142820160
7 unassigned wm 0 0 (0/0/0) 0


  1. Keep in mind that changing disk labels will destroy any data on the disk.
  2. Here's more info about EFI & SMI disk label -  http://docs.oracle.com/cd/E19082-01/819-2723/disksconcepts-14/index.html
  3. More on UEFI and BIOS - http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
Categories: Hardware, Storage Tags:

modify sudoers_debug in ldap.conf to debug sudo on linux and solaris

May 22nd, 2012

If you encounter problems when running a sudoCommand, debug output in the console helps a lot. To show detailed debug info when running sudo, modify /etc/ldap.conf (for both Solaris LDAP and Linux LDAP):

# verbose sudoers matching from ldap
sudoers_debug 2

Setting sudoers_debug to 1 shows moderate debugging, while setting it to 2 also shows the results of the matches themselves. For example, if you have set sudoers_debug to 2, executing a sudo command will produce output like the following:

$ sudo -i
LDAP Config Summary
uri ldaps://testLdapServer/
ldap_version 3
sudoers_base ou=SUDOers,dc=doxer,dc=org
binddn cn=proxyAgent,ou=profile,dc=doxer,dc=org
bindpw password
bind_timelimit 120000
timelimit 120
ssl on
tls_cacertdir /etc/openldap/cacerts
sudo: ldap_initialize(ld, ldaps://testLdapServer/)
sudo: ldap_set_option: debug -> 0
sudo: ldap_set_option: ldap_version -> 3
sudo: ldap_set_option: tls_cacertdir -> /etc/openldap/cacerts
sudo: ldap_set_option: timelimit -> 120
sudo: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT, 120)
sudo: ldap_set_option(LDAP_OPT_X_TLS, LDAP_OPT_X_TLS_HARD)
sudo: ldap_simple_bind_s() ok
sudo: found:cn=defaults,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoOption: 'ignore_local_sudoers'
sudo: ldap search '(|(sudoUser=liandy)(sudoUser=%linuxsupport)(sudoUser=%linux)(sudoUser=ALL))'
sudo: found:cn=LDAPpwchange,ou=sudoers,dc=doxer,dc=org
sudo: ldap sudoHost 'server01' ... not
sudo: ldap sudoHost 'server02' ... not
sudo: ldap search 'sudoUser=+*'
sudo: found:cn=test-su,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoUser netgroup '+sysadmin-ng' ... not
sudo: found:cn=dba-su,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoUser netgroup '+dba-ng' ... not
sudo: ldap sudoUser netgroup 'test01' ... not
sudo: ldap sudoUser netgroup 'test02' ... not
sudo: found:cn=Linux-Team-root,ou=SUDOers,dc=doxer,dc=org
sudo: ldap sudoUser netgroup '+linuxadmins' ... MATCH!
sudo: ldap sudoHost 'ALL' ... MATCH!
sudo: ldap sudoCommand 'ALL' ... MATCH!
sudo: Perfect Matched!
sudo: ldap sudoOption: '!authenticate'
sudo: user_matches=-1
sudo: host_matches=-1
sudo: sudo_ldap_check(0)=0x422

So from the debugging output above, you can tell that the account being authenticated belongs to the linuxadmins netgroup, and that this netgroup is within the sudoUser scope of the Linux-Team-root SUDOers entry. Since Linux-Team-root has sudoCommand "ALL" and sudoHost "ALL", and also has the sudoOption "!authenticate", the user gets root access with no password prompt.
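Before wading through the full trace, a quicker first check is sudo's built-in rule listing:

```shell
# List the sudo rules that apply to the current user on this host.
# With sudoers_debug 2 set in /etc/ldap.conf, the LDAP matching trace
# is printed before the rule list, without actually running anything.
sudo -l
```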

Now let's go through a failed authentication to see the debugging information:

$ sudo hastatus -sum
LDAP Config Summary
host testLdapServer
port 389
ldap_version 3
sudoers_base ou=SUDOers,dc=doxer,dc=org
binddn (anonymous)
bindpw (anonymous)
ldap_bind() ok
ldap sudoOption: 'ignore_local_sudoers'
ldap search '(|(sudoUser=liandy)(sudoUser=%normaluser)(sudoUser=%normaluser)(sudoUser=%patop)(sudoUser=ALL))'
ldap search 'sudoUser=+*'
ldap sudoUser netgroup '+sysadmin-ng' ... not
ldap sudoUser netgroup '+linux-team-ng' ... not
ldap sudoUser netgroup '+normaluser-ng' ... MATCH!
ldap sudoHost 'all' ... MATCH!
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -start' ... not
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -status' ... not
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -stop' ... not
ldap sudoCommand '/opt/OV/bin/OpC/opcagt -kill' ... not

From here we can see that although the user being authenticated is in the "normal-su" SUDOers entry and the host matches its sudoHost, there is no "hastatus -sum" among the defined sudoCommand values, so the authentication fails (user_matches=-1, host_matches=-1) and sudo prompts for a password.