oracle goldengate documentation

July 28th, 2012

Here are some excerpts from Oracle's documentation about Oracle GoldenGate:

Robust Modular Architecture

The Oracle GoldenGate software architecture consists of three primary components:
Capture, Trail Files, and Delivery. This modular approach allows each component to perform
its tasks independently of the others, accelerating data replication and ensuring data integrity.
Figure 1: Oracle GoldenGate leverages a component-based architecture to optimize real-time
information access and availability.

  • Capture

Oracle GoldenGate’s Capture module resides on the source database and looks for new
transactional activity. The Capture module reads the result of insert, update, and delete
operations by directly accessing the database transaction (redo) logs, and then immediately
captures new and changed data for distribution.
The Capture module only moves committed transactions—filtering out intermediate activities
and rolled-back operations—which not only reduces infrastructure load but also eliminates
potential data inconsistencies. Further optimization is achieved through transaction grouping
and optional compression features.
Oracle GoldenGate 11g can also capture messages from JMS messaging systems to deliver to
heterogeneous databases in real time for scalable and reliable data distribution.

  • Trail Files

Oracle GoldenGate’s Trail Files contain the database operations for the changed data in a
transportable, platform-independent data format. Trail Files are a critical component within
Oracle GoldenGate’s optimized queuing mechanism. They reside on the source and/or target
server but exist outside of the database to ensure heterogeneity, improved reliability, and
minimal data loss. This architecture minimizes impact to the source system because no
additional tables or queries to the database are required to support the data capture process.
The Capture module reads once, and then immediately moves the captured data to the external
Trail File for delivery to the target(s).
In the event of an outage at the source and/or target, the Trail Files contain the most-recent
data up to the point of the outage, and the data is applied once the systems are online again.

  • Delivery

Oracle GoldenGate’s Delivery module takes the changed data from the latest Trail File and
applies it to the target database using native SQL for the appropriate relational database
management system. Delivery can be made to any open database connectivity–compliant
database. The Delivery module applies each transaction in the same order as it was committed
and within the same transactional context as at the source, enabling consistency and referential
integrity at the target. To enhance IT flexibility, captured data can also be delivered to a Java
Message Service destination or as a flat file using Oracle GoldenGate Application Adapters.
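To make the three components concrete, here is a minimal, hypothetical sketch of the parameter files that drive Capture (Extract) and Delivery (Replicat); the process names, credentials, schema, and trail path below are illustrative assumptions, not taken from the excerpt:

```
-- Extract (Capture) parameter file: reads the redo logs and writes a local trail
EXTRACT ext1
USERID ogg, PASSWORD ogg
EXTTRAIL ./dirdat/aa
TABLE scott.*;

-- Replicat (Delivery) parameter file: applies the trail to the target via native SQL
REPLICAT rep1
USERID ogg, PASSWORD ogg
ASSUMETARGETDEFS
MAP scott.*, TARGET scott.*;
```

The two-character trail prefix (here "aa") names the Trail Files that sit between the two processes.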

For full documentation, you can refer to the following pdf file:

IT solutions I can think of now

July 26th, 2012

1. Configuration Management and Data Warehouse

     BMC Control-M, Symantec Altiris, Aperture, CMDB

     spacewalk, pulp, puppet, cfengine, chef

     Oracle Exalytics, SAP HANA

2. Virtualization & Cloud

     vmware, citrix, openvz, xen, kvm, CDN (such as cloudflare), cloudstack, openstack, openshift, CloudFoundry, hadoop

     Oracle Exadata/Exalogic

3. HA & HPC

     CDN: akamai, bluecoat, chinacache, level3, 365media, kontiki, cdnetworks

     Monitoring: ntop, nagios, opsview, HP OVO, Gomez, mrtg, cacti, orca, scom, munin, collectd <collectd-nagios, collectd-unixsock, collectd-python>, Tivoli <websphere monitoring>, opnet <app perf test>, loadimpact

     Load balancing & caching: F5 LTM, Netscaler, varnish, squid, Cisco <CSS, ACE, GSS>

     Cluster: VCS, IBM HACMP, haproxy, LifeKeeper, HP ServiceGuard/TruCluster, DRBD <Distributed Replicated Block Device>

     Security: WebScreen, NetScreen, truecrypt, NDS DRM, McAfee, Forefront <windows>

     Java: IBM MQ, Azul, Jetty, tomcat, websphere, glassfish, weblogic, hudson/jenkins CI

     quova # ip geolocation

     Directory: openldap, SUN iPlanet DS

     Storage & SAN: NetApp, EMC, HP, Oracle, Qlogic, Emulex, IBM, Dell, SGI, HDS, Brocade, SUN ZFS
Categories: IT Architecture Tags:

veritas vcs 5.1 on solaris 5.10 changes of restarting procedure

July 26th, 2012

For VCS 5.1 on Solaris 10, start/stop of VCS is no longer controlled by /etc/rc*.d/S* scripts; it is under SMF control. Also, in /etc/default/gab, llt, vcs, vxfen, etc., there are lines which need to be set to 1 if VCS is set up manually.
For example:
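The following is a sketch of what those lines typically look like (file and variable names as commonly seen with VCS 5.1 on Solaris; verify against your own installation):

```
# /etc/default/llt
LLT_START=1

# /etc/default/gab
GAB_START=1

# /etc/default/vxfen
VXFEN_START=1

# /etc/default/vcs
VCS_START=1
```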


More interestingly, with a VCS one-node cluster the SMF resource for VCS is not system/vcs:default; it is system/vcs-onenode:default.
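Since start/stop is under SMF control, restarting is done with svcadm rather than the old rc scripts; a sketch for the one-node case (use system/vcs:default on a regular multi-node cluster):

```shell
# restart VCS through SMF on a one-node cluster
svcadm disable system/vcs-onenode:default
svcadm enable system/vcs-onenode:default
svcs -x system/vcs-onenode:default   # show state/diagnostics if it fails to start
```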

Categories: Clouding, HA, HA & HPC, IT Architecture Tags:

reset ldap proxyagent password before expiration on iplanet Directory Server

July 26th, 2012

proxyagent is the user that all hosts bound to the Solaris iPlanet Directory Server use to authenticate queries against the server. If the password expires, all of the clients' LDAP requests fail (and there is no way to set the password to never expire).
The process to update the password is outlined below and only takes a few minutes to complete (it applies to the Solaris iPlanet Directory Server, but may also help if you use another DS such as OpenLDAP):
1. Log on to the Sun Java Web Console for iPlanet LDAP with the system's root password.
2. Click "Directory Service Control Center (DSCC)" under the "Services" legend. Note that at some point you are prompted for a password; this time it is the LDAP configuration password.
3. Choose the tab "Directory Servers".
4. Choose a master to work from (click on the server name).
5. Choose the tab "Entry Management".

In the DN list, double-click ou=profile
In the next DN list, double-click cn=proxyagent
Reset the password here, using the same password as before (check the password tool or /etc/ldap.conf on any LDAP client box)
Click OK

6. Done; now retry LDAP access.
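If you would rather skip the DSCC GUI, the same reset can be sketched with the standard ldapmodify client (the host, bind DN, and suffix below are assumptions; substitute your own, and use the same password as before):

```shell
# reset proxyagent's userPassword over LDAP (hypothetical host and suffix)
ldapmodify -H ldap://ldapmaster.example.com -D "cn=Directory Manager" -W <<'EOF'
dn: cn=proxyagent,ou=profile,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: SamePasswordAsBefore
EOF
```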

Categories: IT Architecture, Linux, Systems Tags:

Resolved – VxVM vxconfigd ERROR V-5-1-0 Segmentation violation – core dumped

July 25th, 2012

When I tried to import a Veritas disk group today using vxdg -C import doxerdg, error messages were shown as follows:

VxVM vxdg ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
return code of vxdg import command is 768

VxVM vxconfigd DEBUG V-5-1-0 IMPORT: Trying to import the disk group using configuration database copy from emc5_0490
VxVM vxconfigd ERROR V-5-1-0 Segmentation violation - core dumped

Then I used pstack to print the stack trace from the dumped core file:

root # pstack /var/core/core_doxerorg_vxconfigd_0_0_1343173375_140
core 'core_doxerorg_vxconfigd_0_0_1343173375_14056' of 14056: vxconfigd
ff134658 strcmp (fefc04e8, 103fba8, 0, 0, 31313537, 31313737) + 238
001208bc da_find_diskid (103fba8, 0, 0, 0, 0, 0) + 13c
002427dc dm_get_da (58f068, 103f5f8, 0, 0, 68796573, 0) + 14c
0023f304 ssb_check_disks (58f068, 0, f37328, fffffffc, 4, 0) + 3f4
0018f8d8 dg_import_start (58f068, 9c2088, ffbfed3c, 4, 0, 0) + 25d8
00184ec0 dg_reimport (0, ffbfedf4, 0, 0, 0, 0) + 288
00189648 dg_recover_all (50000, 160d, 3ec1bc, 1, 8e67c8, 447ab4) + 2a8
001f2f5c mode_set (2, ffbff870, 0, 0, 0, 0) + b4c
001e0a80 setup_mode (2, 3e90d4, 4d5c3c, 0, 6c650000, 6c650000) + 18
001e09a0 startup (4d0da8, 0, 0, fffffffc, 0, 4d5bcc) + 3e0
001e0178 main (1, ffbffa7c, ffbffa84, 44f000, 0, 0) + 1a98
000936c8 _start (0, 0, 0, 0, 0, 0) + b8

Then I tried restarting vxconfigd, but it failed as well:

root@doxer#/sbin/vxconfigd -k -x syslog

VxVM vxconfigd ERROR V-5-1-0 Segmentation violation - core dumped

After reading the man page of vxconfigd, I decided to use -r reset to reset all Veritas Volume Manager configuration information stored in the kernel as part of startup processing. But before doing this, all VxVM volumes need to be unmounted, as stated in the man page:

The reset fails if any volume devices are in use, or if an imported shared disk group exists.

After unmounting all VxVM partitions, I ran the following command:

vxconfigd -k -r reset

After this, the importing of DGs succeeded.
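For reference, the whole recovery sequence condenses to the following (the mount point is a hypothetical example; unmount every mounted VxVM volume first):

```shell
umount /mnt/vol01        # repeat for each mounted VxVM volume
vxconfigd -k -r reset    # kill vxconfigd and reset VxVM kernel state
vxdg -C import doxerdg   # retry the disk group import
```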

Categories: Hardware, SAN, Storage Tags: ,

resolved – aix create and remove swap space

July 14th, 2012

To add a paging space "paging0"

  • Create a new LV for paging space

mklv -t paging -y paging0 rootvg 10

  • Add the entry in /etc/swapspaces to activate the paging space during next reboot

chps -a y paging0

  • Activate the paging space

swapon /dev/paging0

To remove an active paging space "paging00"

  • Deactivate the paging space using the swapoff command

swapoff /dev/paging00

  • Remove the paging space using the rmps command

rmps paging00

  • Remove the entry from /etc/swapspaces so that it is not activated during next reboot

chps -a n paging00
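In both cases, lsps (the standard AIX paging-space listing command) can be used to confirm the result:

```shell
lsps -a   # list all paging spaces with size, %used and auto-activate flag
```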

Categories: IT Architecture, Systems, Unix Tags: