Archive for the ‘Databases’ Category

Document databases and Graph databases

March 27th, 2014
  • Document databases

Document databases are not document management systems. More often than not, developers starting out with NoSQL confuse document databases with document and content management systems. The word document in document databases connotes loosely structured sets of key/value pairs in documents, typically JSON (JavaScript Object Notation), and not office documents or spreadsheets (though those could be stored too).

Document databases treat a document as a whole and avoid splitting a document into its constituent name/value pairs. At a collection level, this allows for putting together a diverse set of documents into a single collection. Document databases allow indexing of documents on the basis of not only their primary identifiers but also their properties. Several open-source document databases are available today; the most prominent among them are MongoDB and CouchDB.
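
To make this concrete, here is a minimal sketch of storing and indexing documents, assuming a local MongoDB instance and the PyMongo driver (the people collection and its field names are hypothetical):

from pymongo import MongoClient

client = MongoClient("localhost", 27017)
db = client.test

# Two documents with different shapes can live in the same collection.
db.people.insert_one({"first_name": "John", "last_name": "Doe", "zip_code": 10001})
db.people.insert_one({"first_name": "Jane", "zip_code": 94303})

# Index a property, not just the _id primary identifier.
db.people.create_index("last_name")

print(db.people.find_one({"last_name": "Doe"}))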

MongoDB

  • Official Online Resources — www.mongodb.org.
  • History — Created at 10gen.
  • Technologies and Language — Implemented in C++.
  • Access Methods — A JavaScript command-line interface. Drivers exist for a number of languages including C, C#, C++, Erlang, Haskell, Java, JavaScript, Perl, PHP, Python, Ruby, and Scala.
  • Query Language — A JSON-based query language; queries are themselves expressed as documents.
  • Open-Source License — GNU Affero GPL (http://gnu.org/licenses/agpl-3.0.html).
  • Who Uses It — Foursquare, Shutterfly, Intuit, GitHub, and more.

CouchDB

  • Official Online Resources — http://couchdb.apache.org and www.couchbase.com. Many of CouchDB's authors are part of Couchbase, Inc.
  • History — Work started in 2005 and it was incubated into Apache in 2008.
  • Technologies and Language — Implemented in Erlang with some C and a JavaScript execution environment.
  • Access Methods — Upholds REST above every other mechanism: you use standard web tools and clients to access the database, the same way you access any web resource.
  • Open-Source License — Apache License version 2.
  • Who Uses It — Apple, BBC, Canonical, CERN, and more at http://wiki.apache.org/couchdb/CouchDB_in_the_wild.
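
Because REST comes first, any HTTP client can serve as a CouchDB client. A minimal sketch using Python's requests library (assuming CouchDB on its default port 5984 with no authentication; the albums database and document are hypothetical):

import requests

base = "http://localhost:5984"

# Create a database, then PUT a JSON document into it under an id of our choosing.
requests.put(base + "/albums")
requests.put(base + "/albums/album-001",
             json={"title": "Kind of Blue", "artist": "Miles Davis"})

# Read it back with a plain GET, the same way you fetch any web resource.
doc = requests.get(base + "/albums/album-001").json()
print(doc["title"], doc["_rev"])  # CouchDB adds _id and _rev to stored documents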

Document databases are covered in detail starting in the next chapter.

  • Graph Databases

So far I have listed most of the mainstream open-source NoSQL products. A few other categories of products, like graph databases and XML data stores, could also qualify as NoSQL databases. This book does not cover graph and XML databases. However, I list two graph databases that may be of interest and that you may want to explore beyond this book: Neo4j and FlockDB.

Neo4j is an ACID-compliant graph database. It facilitates rapid traversal of graphs.

Neo4j

  • Official Online Resources — http://neo4j.org.
  • History — Created at Neo Technology in 2003. (Yes, this database has been around since before the term NoSQL became popular.)
  • Technologies and Language — Implemented in Java.
  • Access Methods — A command-line access to the store is provided. REST interface also available. Client libraries for Java, Python, Ruby, Clojure, Scala, and PHP exist.
  • Query Language — Supports SPARQL (the SPARQL Protocol and RDF Query Language).
  • Open-Source License — AGPL.
  • Who Uses It — Box.net.

FlockDB

  • Official Online Resources — https://github.com/twitter/flockdb
  • History — Created at Twitter and open sourced in 2010. Designed to store the adjacency lists for followers on Twitter.
  • Technologies and Language — Implemented in Scala.
  • Access Methods — Thrift and Ruby clients.
  • Open-Source License — Apache License version 2.
  • Who Uses It — Twitter.

A number of NoSQL products have been covered so far. Hopefully, this has warmed you up to learn more about these products and to understand how you can leverage them effectively in your stack.

PS:

This article is from the book <Professional NoSQL>.

 

Categories: Clouding, Databases, IT Architecture

sorted ordered column-oriented stores VS key/value stores in NoSQL

March 27th, 2014
  • sorted ordered column-oriented stores

Google’s Bigtable espouses a model where data is stored in a column-oriented way. This contrasts with the row-oriented format in an RDBMS. Column-oriented storage allows data to be stored efficiently: it avoids consuming space for nulls by simply not storing a column when a value doesn’t exist for that column.

Each unit of data can be thought of as a set of key/value pairs, where the unit itself is identified with the help of a primary identifier, often referred to as the primary key. Bigtable and its clones tend to call this primary key the row-key. Also, as the title of this subsection suggests, the units are stored sorted and ordered on the basis of the row-key. To explain sorted ordered column-oriented stores, an example serves better than a lot of text, so let me present one. Consider a simple table that keeps information about a set of people. Such a table could have columns like first_name, last_name, occupation, zip_code, and gender. A person’s information in this table could be as follows:

first_name: John
last_name: Doe
zip_code: 10001
gender: male

Another set of data in the same table could be as follows:

first_name: Jane
zip_code: 94303

The row-key of the first data point could be 1 and that of the second could be 2. The data would then be stored in a sorted ordered column-oriented store such that the data point with row-key 1 comes before the data point with row-key 2, and the two data points are adjacent to each other.

Next, only the valid key/value pairs are stored for each data point. So, a possible column-family for the example could be name, with the columns first_name and last_name as its members. Another column-family could be location, with zip_code as its member. A third column-family could be profile; the gender column could be a member of the profile column-family. In column-oriented stores similar to Bigtable, data is stored on a column-family basis. Column-families are typically defined at configuration or startup time. Columns themselves need no a priori definition or declaration. Also, columns can store any data type as long as the data can be persisted as an array of bytes.

So the underlying logical storage for this simple example consists of three storage buckets: name, location, and profile. Within each bucket, only key/value pairs with valid values are stored. Therefore, the name column-family bucket stores the following values:

For row-key: 1

first_name: John
last_name: Doe

For row-key: 2

first_name: Jane

The location column-family stores the following:

For row-key: 1

zip_code: 10001

For row-key: 2

zip_code: 94303

The profile column-family has values only for the data point with row-key 1 so it stores only the following:

For row-key: 1

gender: male

In real storage terms, the column-families are not physically isolated for a given row. All data pertaining to a row-key is stored together. The column-family acts as a key for the columns it contains and the row-key acts as the key for the whole data set.

Data in Bigtable and its clones is stored in a contiguous, sequenced manner. As data grows to fill up one node, it is split across multiple nodes. The data is sorted and ordered not only on each node but also across nodes, providing one large continuously sequenced set. The data is persisted in a fault-tolerant manner, with three copies of each data set maintained. Most Bigtable clones leverage a distributed filesystem to persist data to disk. Distributed filesystems allow data to be stored among a cluster of machines.

The sorted ordered structure makes data seeks by row-key extremely efficient. Data access is less random and ad hoc, and lookup is as simple as finding the node in the sequence that holds the data. Data is inserted at the end of the list. Updates typically add a newer version of the data to the specific cell rather than overwriting it in place, which means a few versions of each cell are maintained at all times. The versioning property is usually configurable.
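
The following toy sketch models the ideas just described in pure Python (a conceptual illustration, not any particular product's API): row-keys kept sorted for efficient seeks, column-families fixed up front as the storage buckets, and only non-null cells stored:

import bisect

class SortedColumnStore:
    def __init__(self, column_families):
        self.row_keys = []                    # kept sorted for efficient seeks
        self.rows = {}                        # row_key -> {family: {column: value}}
        self.families = set(column_families)  # families are fixed at "startup"

    def put(self, row_key, family, column, value):
        if family not in self.families:
            raise KeyError("unknown column-family: %s" % family)
        if row_key not in self.rows:
            bisect.insort(self.row_keys, row_key)  # keep row-keys ordered
            self.rows[row_key] = {f: {} for f in self.families}
        self.rows[row_key][family][column] = value  # columns need no declaration

    def get(self, row_key, family):
        return self.rows.get(row_key, {}).get(family, {})

store = SortedColumnStore(["name", "location", "profile"])
store.put(1, "name", "first_name", "John")
store.put(1, "name", "last_name", "Doe")
store.put(2, "name", "first_name", "Jane")
print(store.row_keys)        # [1, 2] -- sorted and adjacent
print(store.get(2, "name"))  # {'first_name': 'Jane'}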

A bullet-point enumeration of some of the Bigtable open-source clones’ properties is listed next.

  • HBase

Official Online Resources — http://hbase.apache.org.
History — Created at Powerset (now part of Microsoft) in 2007. Donated to the Apache foundation before Powerset was acquired by Microsoft.
Technologies and Language — Implemented in Java.
Access Methods — A JRuby shell allows command-line access to the store. Thrift, Avro, REST, and protobuf clients exist. A few language bindings are also available. A Java API is available with the distribution.
Query Language — No native querying language. Hive (http://hive.apache.org) provides a SQL-like interface for HBase.
Open-Source License — Apache License version 2.
Who Uses It — Facebook, StumbleUpon, Hulu, Ning, Mahalo, Yahoo!, and others.
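
As a sketch of what client access to HBase can look like, here is the earlier people example written against the third-party happybase Python library (this assumes a running HBase Thrift server; the people table and its name column-family are hypothetical):

import happybase

connection = happybase.Connection("localhost")  # talks to the HBase Thrift server
table = connection.table("people")

# Columns are addressed as family:qualifier; only non-null cells are written.
table.put(b"row-1", {b"name:first_name": b"John", b"name:last_name": b"Doe"})
table.put(b"row-2", {b"name:first_name": b"Jane"})

# Rows come back in sorted row-key order when scanned.
for key, data in table.scan():
    print(key, data)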

  • Hypertable

Official Online Resources — www.hypertable.org.
History — Created at Zvents in 2007. Now an independent open-source project.
Technologies and Language — Implemented in C++; uses Google's RE2 regular expression library, which provides a fast and efficient implementation. Hypertable promises a performance boost over HBase, potentially serving to reduce time and cost when dealing with large amounts of data.
Access Methods — A command-line shell is available. In addition, a Thrift interface is supported. Language bindings have been created based on the Thrift interface. A creative developer has even created a JDBC-compliant interface for Hypertable.
Query Language — HQL (Hypertable Query Language) is a SQL-like abstraction for querying Hypertable data. Hypertable also has an adapter for Hive.
Open-Source License — GNU GPL version 2.
Who Uses It — Zvents, Baidu (China’s biggest search engine), Rediff (India’s biggest portal).

  • Cloudata

Official Online Resources — www.cloudata.org/.
History — Created by a Korean developer named YK Kwon (www.readwriteweb.com/hack/2011/02/open-source-bigtable-cloudata.php). Not much is publicly known about its origins.
Technologies and Language — Implemented in Java.
Access Methods — A command-line access is available. Thrift, REST, and Java API are available.
Query Language — CQL (Cloudata Query Language) defines a SQL-like query language.
Open-Source License — Apache License version 2.
Who Uses It — Not known.

Sorted ordered column-family stores form a very popular NoSQL option. However, NoSQL includes many more variants, notably key/value stores and document databases. Next, I introduce the key/value stores.

  • key/value stores

A HashMap, or an associative array, is the simplest data structure that can hold a set of key/value pairs. Such data structures are extremely popular because they provide a very efficient O(1) average running time for accessing data. The key of a key/value pair is unique within the set and can be easily looked up to access the data.

Key/value stores come in several varieties: some keep the data in memory and some provide the capability to persist the data to disk. Key/value stores can also be distributed and held in a cluster of nodes.

A simple, yet powerful, key/value store is Oracle's Berkeley DB. Berkeley DB is a pure storage engine where both key and value are arrays of bytes. The core storage engine of Berkeley DB doesn't attach meaning to either the key or the value; it takes byte-array pairs in and returns the same back to the calling client. Berkeley DB allows data to be cached in memory and flushed to disk as it grows. There is also a notion of indexing the keys for faster lookup and access. Berkeley DB has existed since the mid-1990s. It was created to replace AT&T's NDBM as part of the migration from BSD 4.3 to 4.4. In 1996, Sleepycat Software was formed to maintain and provide support for Berkeley DB.
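
In the spirit of that byte-array interface, here is a minimal sketch using Python's built-in dbm module as a stand-in rather than Berkeley DB itself (the file name and keys are arbitrary):

import dbm

# A dbm file stores byte-array keys and values, much like a pure storage
# engine: no meaning is attached to either side of the pair.
with dbm.open("example.db", "c") as db:  # "c" creates the file if needed
    db[b"user:1001"] = b"John Doe"
    db[b"user:1002"] = b"Jane"
    print(db[b"user:1001"])              # b'John Doe'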

Another type of key/value store in common use is a cache. A cache provides an in-memory snapshot of the most-used data in an application. The purpose of a cache is to reduce disk I/O. Cache systems can be rudimentary map structures or robust systems with a cache expiration policy. Caching is a popular strategy employed at all levels of a computer software stack to boost performance. Operating systems, databases, middleware components, and applications all use caching.

Robust open-source distributed cache systems like Ehcache (http://ehcache.org/) are widely used in Java applications, and Ehcache could be considered a NoSQL solution. Another caching system popularly used in web applications is Memcached (http://memcached.org/), an open-source, high-performance object caching system. Brad Fitzpatrick created Memcached for LiveJournal in 2003. Apart from being a caching system, Memcached also enables effective memory management by creating a large virtual pool and distributing memory among nodes as required. This prevents fragmented zones where one node has excess but unused memory while another node is starved for memory.
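
Typical cache-aside usage looks like the following sketch, assuming a Memcached server on the default port and the third-party python-memcached client (the key and the stand-in lookup are hypothetical):

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

key = "user:1001:profile"
profile = mc.get(key)
if profile is None:
    # Cache miss: do the expensive lookup, then cache it for 60 seconds.
    profile = {"first_name": "John", "last_name": "Doe"}  # stand-in for a DB query
    mc.set(key, profile, time=60)
print(profile)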

As the NoSQL movement has gathered momentum, a number of key/value pair data stores have emerged. Some of these newer stores build on the Memcached API, some use Berkeley DB as the underlying storage, and a few others provide alternative solutions built from scratch.

Many of these key/value stores have APIs that allow get-and-set mechanisms to get and set values. A few, like Redis (http://redis.io/), provide richer abstractions and powerful APIs. Redis can be considered a data structure server because it provides data structures like strings (character sequences), lists, and sets, apart from maps (hashes). Redis also provides a very rich set of operations to access data in these different data structures.
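
A short sketch of Redis as a data structure server, assuming a local Redis server and the redis-py client (key names are arbitrary):

import redis

r = redis.Redis(host="localhost", port=6379)

r.set("greeting", "hello")                  # plain string key/value
r.rpush("recent_visitors", "john", "jane")  # list, appended at the tail
r.sadd("admins", "john")                    # set of unique members
r.hset("user:1001", "first_name", "John")   # hash (map) field

print(r.get("greeting"))
print(r.lrange("recent_visitors", 0, -1))
print(r.sismember("admins", "jane"))
print(r.hgetall("user:1001"))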

This book covers key/value stores in a lot of detail. For now, I list a few important ones along with their important attributes. Again, the presentation resorts to a bullet-point-style enumeration of a few important characteristics.

  • Membase (Proposed to be merged into Couchbase, gaining features from CouchDB after the creation of Couchbase, Inc.)

Official Online Resources — www.membase.org/.
History — Project started in 2009 by NorthScale, Inc. (later renamed Membase). Zynga and NHN have been contributors since the beginning. Membase builds on Memcached and supports Memcached's text and binary protocols. Membase adds a lot of features on top of Memcached: disk persistence, data replication, live cluster reconfiguration, and data rebalancing. A number of core Membase creators are also Memcached contributors.
Technologies and Language — Implemented in Erlang, C, and C++.
Access Methods — Memcached-compliant API with some extensions. Can be a drop-in replacement for Memcached.
Open-Source License — Apache License version 2.
Who Uses It — Zynga, NHN, and others.

  • Kyoto Cabinet

Official Online Resources — http://fallabs.com/kyotocabinet/.
History — Kyoto Cabinet is a successor of Tokyo Cabinet (http://fallabs.com/tokyocabinet/). The database is a simple data file containing records; each record is a key/value pair, and every key and value is a variable-length sequence of bytes.
Technologies and Language — Implemented in C++.
Access Methods — Provides APIs for C, C++, Java, C#, Python, Ruby, Perl, Erlang, OCaml, and Lua. The protocol simplicity means there are many, many clients.
Open-Source License — GNU GPL and GNU LGPL.
Who Uses It — Mixi, Inc. sponsored much of its original work before the author left Mixi to join Google. Blog posts and mailing lists suggest that there are many users but no public list is available.

  • Redis

Official Online Resources — http://redis.io/.
History — Project started in 2009 by Salvatore Sanfilippo, who created it for his startup LLOOGG (http://lloogg.com/). Though Redis is still an independent project, its primary author is employed by VMware, which sponsors its development.
Technologies and Language — Implemented in C.
Access Methods — Rich set of methods and operations. Can access via Redis command-line interface and a set of well-maintained client libraries for languages like Java, Python, Ruby, C, C++, Lua, Haskell, AS3, and more.
Open-Source License — BSD.
Who Uses It — Craigslist.

The three key/value stores listed here are nimble, fast implementations that provide storage for real-time data, temporary frequently used data, or even full-scale persistence.

The key/value stores listed so far provide a strong consistency model for the data they store. However, a few other key/value stores emphasize availability over consistency in distributed deployments. Many of these are inspired by Amazon's Dynamo, which is also a key/value store. Amazon's Dynamo promises exceptional availability and scalability, and it forms the backbone of Amazon's distributed, fault-tolerant, and highly available systems. Apache Cassandra, Basho Riak, and Voldemort are open-source implementations of the ideas proposed in Amazon's Dynamo.

Amazon's Dynamo brings a lot of key high-availability ideas to the forefront. The most important of these is eventual consistency. Eventual consistency implies that there can be small intervals of inconsistency between replicated nodes as data gets updated among peer-to-peer nodes. Eventual consistency does not mean inconsistency; it simply implies a weaker form of consistency than the typical ACID-style consistency found in an RDBMS.
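
As a toy illustration of the idea (pure Python, nothing like Dynamo's actual design): a write is accepted by one replica, a read from the other replica briefly sees stale data, and once asynchronous replication catches up the replicas converge:

replica_a, replica_b = {}, {}
replication_queue = []

def write(key, value):
    replica_a[key] = value                  # accepted by one node first
    replication_queue.append((key, value))  # propagated asynchronously

def replicate():
    while replication_queue:
        key, value = replication_queue.pop(0)
        replica_b[key] = value

write("cart:42", ["book"])
print(replica_b.get("cart:42"))  # None -- the window of inconsistency
replicate()
print(replica_b.get("cart:42"))  # ['book'] -- the replicas have converged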

For now I will list the Amazon Dynamo clones and introduce you to a few important characteristics of these data stores.

  • Cassandra

Official Online Resources — http://cassandra.apache.org/.
History — Developed at Facebook, open sourced in 2008, and donated to the Apache Foundation.
Technologies and Language — Implemented in Java.
Access Methods — A command-line interface to the store. A Thrift interface and an internal Java API exist. Clients for multiple languages, including Java, Python, Grails, PHP, .NET, and Ruby, are available. Hadoop integration is also supported.
Query Language — A query language specification is in the making.
Open-Source License — Apache License version 2.
Who Uses It — Facebook, Digg, Reddit, Twitter, and others.

  • Voldemort

Official Online Resources — http://project-voldemort.com/.
History — Created by the data and analytics team at LinkedIn in 2008.
Technologies and Language — Implemented in Java. Provides for pluggable storage using either Berkeley DB or MySQL.
Access Methods — Integrates with Thrift, Avro, and protobuf (http://code.google.com/p/protobuf/) interfaces. Can be used in conjunction with Hadoop.
Open-Source License — Apache License version 2.
Who Uses It — LinkedIn.

  • Riak

Official Online Resources — http://wiki.basho.com/.
History — Created at Basho, a company formed in 2008.
Technologies and Language — Implemented in Erlang. Also, uses a bit of C and JavaScript.
Access Methods — Interfaces for JSON (over HTTP) and protobuf clients exist. Libraries for Erlang, Java, Ruby, Python, PHP, and JavaScript exist.
Open-Source License — Apache License version 2.
Who Uses It — Comcast and Mochi Media.

All three — Cassandra, Riak and Voldemort — provide open-source Amazon Dynamo capabilities. Cassandra and Riak demonstrate a dual nature as far as their behavior and properties go: Cassandra has properties of both Google's Bigtable and Amazon's Dynamo, and Riak acts both as a key/value store and as a document database.

PS:

This article is from the book <Professional NoSQL>.

Categories: Clouding, Databases, IT Architecture

oracle database tips – sql

October 16th, 2013

General

set timing on
set autotrace on explain; #explain automatically
set lines<ize> 80; #linesize
set wrap off #no wrap
set pagesize 24
set feedback off --show or not show "n rows selected"
set heading off --show or not show column names
set verify off --(not)show substitution before/after
set trimspool off #strip trailing whitespace from spooled columns
set echo on #(not)show commands as they execute

set define on/off #enable/disable ampersand substitution

col file_name format a60; #col is short for column
; #show last command
/ #rerun last command

define fil='/var/tmp/a.txt'
prompt Spooling to &fil
spool &fil
select * from tab;
spool off

sqlplus 'sys/password@devdb as sysdba' #first export ORACLE_HOME (no trailing /) and set up tnsnames.ora
sqlplus /nolog #no initial connection
sqlplus>conn sys/password@sid as sysdba

user_ #current user’s, user_tables
all_ #current user’s, include granted
dba_ #all, dba_users, dba_indexes, dba_tables, dba_objects
dictionary #data dictionary’s data dictionary
v$fixed_table #dynamic view’s dynamic view
v$process #oracle server processes(does not include client side info)
select pid,serial#,spid,program from v$process; #pid, serial# is PID allocated from Oracle Software; spid is OS pid; program is OS process name mapped to Oracle Service process
select p.spid,s.sid,s.serial#,s.username,s.status,s.last_call_et,p.program,p.terminal,logon_time,module,s.osuser from V$process p,V$session s where s.paddr = p.addr and s.status = 'ACTIVE' and s.username like '%SYS%'; #all the active sessions for the sys user
v$session #client info. paddr is v$process’s addr(server process addr, such as 00000000C4505F10); server is dedicated or shared.
select machine,osuser,program,process,command,server from v$session where machine = 'centos-doxer'; #process is the client's OS PID (on centos-doxer in this example)
V$SYSSTAT, V$SESSTAT, V$MYSTAT, V$SERVICE_STATS #System Statistics views

v$segment_statistics #segment statistics views, populated when the STATISTICS_LEVEL parameter is TYPICAL or ALL. Three statistics report global cache activity at the object level, and you can use these to quickly pinpoint the segments in your database experiencing the highest levels of inter-instance activity: gc cr blocks received, gc current blocks received, and gc buffer busy.

show parameter db_create_online_log_dest; #if not set, then control files/online redo logs will multiplexed to the FRA and DATA disk group; if set, then the control files and online redo logs will be created in that location instead. Specifying multiple different destinations provides greater fault tolerance
SQL> show parameter db_create_file_dest; #+DATA

select component,current_size,min_size,max_size from V$MEMORY_DYNAMIC_COMPONENTS where current_size <> 0 #size of PGA/SGA/shared pool/large pool
SQL> alter system set control_files='xxx','xxx','xxx' scope=spfile #takes effect at next reboot; verify with show parameter control_files;
SQL> select * from v$controlfile; -- control file information

SQL>create public synonym part_master for inv.part_master; #other users can use it (if granted privileges); synonyms can also point across DBs <through a db link>
SQL>drop public synonym part_master;
show parameter spfile; #oracle/grid
show parameter dump_dest;
show parameter asm_power_limit #power of RBAL (1 to 11); raise it at night when rebalancing is less disruptive
SQL> alter system set asm_power_limit=5; #su - grid
SQL>alter tablespace tbs1 read only #drop table test; still works, because drop only changes the name in the dictionary
SQL>alter user hr quota 500m/unlimited on <tablespace>;
SQL>alter session/system set … scope #spfile(next reboot)/memory(only current)/both(this is the default)
SQL>create pfile='xxx' from spfile;
SQL>create spfile from pfile;
SQL>alter session set current_schema = hr;
SQL>select username from dba_users; #all users
SQL>select distinct owner from dba_objects; #schemas available

SQL>select username,ACCOUNT_STATUS from dba_users;
SQL>select * from global_name; #current database
SQL>select table_name from dba_all_tables where owner = 'HR';

SQL>select to_char(WMSYS.WM_CONCAT(dba_users.username)) user_list from dba_users where expiry_date-sysdate<15 and username not in (select username from dba_users where expiry_date<sysdate); #expire in 15 days

SQL>select table_name from user_tables;

SQL> select OWNER,TABLE_NAME from dba_all_tables t where t.TABLE_NAME like 'OVS_OS_RESOURCE%'; #like '_VSOS_RESOURCE%'
SQL>select * from tab; #all tables and views for current user
SQL>select * from cat; #all tables and sequences for current user. cat is synonym of user_catalog

SQL>create sequence cust_id_seq start with 104501;
SQL>select cust_id_seq.nextval from dual; #104501, 104502

SQL>show user
SQL>create user "newuser" profile "default" identified by "newpassword" password expire default tablespace "users" temporary tablespace "temp" quota unlimited on temp quota unlimited on users account unlock;
SQL>grant "connect" to "newuser"; #connect is a role; others include DBA, resource, Imp_Full_Database, Select_catalog_role

create role "testrole"; #drop role xxx
grant connect to "testrole";
grant execute any procedure to "testrole";
grant select on "table_name" to "testrole";
#profile, password management
SQL> select * from v$pwfile_users; #list users with SYSDBA or SYSOPER or SYSASM privileges

create profile "newpolicy" failed_login_attempts 3 password_lock_time unlimited password_life_time 60 password_reuse_time 120 password_verify_function complex_password; #complex_password is a user-defined function
alter user newuser profile newpolicy;

SQL>alter user xxx account unlock; #alter user xxx account unlock identified by xxx;
SQL>password <account>;
SQL>GRANT SELECT ON hr.jobs to test; #add WITH GRANT OPTION to let the grantee re-grant
SQL>grant all privileges to test; #object privileges (select/insert/update/delete on a single named object) / system privileges (alter system, create user, create table; see dba_sys_privs)
SQL>revoke create table from joe #not cascading
SQL>select * from session_privs;
SQL>select * from session_roles;

SQL>select owner, object_name, object_type from dba_objects where status ^= 'VALID';

select job_id from jobs where job_id like 'SA_%';
select job_id from jobs where job_id like 'SA\_%' escape '\';
select last_name from employees where commission_pct is NULL; # =NULL is wrong

SQL>create table tab1 tablespace tbs1 as select * from tab2;
SQL>truncate table emp; #DDL, can not rollback or commit, but it’s quicker
SQL>create index id on obj(object_name);
SQL>alter table obj move; #invalid index
SQL>alter index id rebuild; #rebuild index
SQL>create tablespace tbs1 datafile '+DATA' size 100m EXTENT MANAGEMENT LOCAL uniform size 1m; #OMF, Oracle Managed Files. use uniform size to specify the extent size (or use AUTOALLOCATE for system management)
SQL>alter database datafile '/u01/oradata/ora11g/example01.dbf' autoextend on next 50M maxsize 5000M;
SQL> alter database datafile '+DATA/PROD/DATAFILE/users.259.679156903' resize 10G;

SQL>create database link p6 connect to hr identified by oracle_4U using 'p6'; select * from jobs@p6; #query across different hosts, cross query

SQL>delete from OVS.OVS_SITE where SITE_NAME='test'; #where … and/or …; where … != …; where year not between 1936 and 1967; where year not in ('1987', '1989')
select employee_id, last_name, phone_number from employees where last_name = '&LASTNAME' or employee_id = &EMPNO; #ampersand(&) substitution
select first_name, last_name from employees where last_name like '%&&SEARCH%' and first_name like '%&SEARCH%'; #ampersand substitution of the same item twice; && defines the variable and keeps its value
select first_name, job_id, &&col from employees where job_id in ('MK_MAN','SA_MAN') order by &col; #ampersand substitutes the query column, for example, the 'salary' column
undefine col;
define; #shows info about db version, editor, user-defined variables, etc.
define col=salary
SQL>select 'a''s yes' from dual; #use ' to escape '
SQL>select q'<a's yes>' as "test Q" from dual; #(round brackets), {curly braces}, [square brackets], <angle brackets>; use q'<>' to avoid escaping
SQL> select employee_id,end_date - start_date as days from job_history;
SQL> select last_name from employees where hire_date < '2009/12/31';
NLS_DATE_FORMAT=yyyy/mm/dd hh24:mi:ss; NLS_LANG=american_america.AL32UTF8 -> National Language Support.
select * from V$NLS_PARAMETERS; #query the current settings
alter session set NLS_NUMERIC_CHARACTERS=', '; #, as the decimal point and space as the thousands separator
SQL> alter session set NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'; #time format
SQL> select first_name, hire_date from employees where hire_date between '1994/12/10' and '2009/12/31';
select INSTANCE_NUMBER||','||INSTANCE_NAME||','||HOST_NAME from v$instance; #|| for concatenation; output will be like 1,devdb1,node1.us.test.com

alter table emp modify (comm number(4,2) default 0.05);
ALTER TABLE tablename DROP UNUSED COLUMNS;
SQL> insert into OVS.OVS_OS_RESOURCE select * from OVS.OVS_OS_RESOURCE_BACKUP;
SQL>insert into timestamp_test values (sysdate); #insert into … values ('xxx');

SQL> select region_name from regions natural join countries where country_name='Canada'; #ANSI natural join (no need to specify the join column). other ANSI joins include join table2 using/join table2 on xxx=xxx/left, right, full outer join table2 on xxx=xxx/cross join
select e.employee_id, e.department_id EMP_DEPT_ID, d.department_id DEPT_DEPT_ID, d.department_name from departments d left outer join employees e on (d.DEPARTMENT_ID=e.DEPARTMENT_ID) where d.department_name like 'P%'; #left outer join returns all rows of the left table (departments); nulls are returned for the right table (employees) where there is no match
select e.last_name, d.department_name from departments d right outer join employees e on (e.department_id=d.department_id) where e.last_name like 'G%'; #all employees are returned, even those without a department id
SQL>select a.name, a.birth_year,a.father, b.birth_year from family a, family b where a.father = b.name; #self join
select F1.name Mum, F3.name Dad, F2.name Child from family F1 join Family F2 on (F2.mother_id=F1.id) join family F3 on (F2.father_id=F3.id);
select count(*) from regions cross join countries; #cross join for multiplying, 25*4=100 rows

SQL>select … from … where prod_list_price = (select max(prod_list_price) from products); #subquery
SQL>select substr(prod_category,1,22) Category, substr(prod_name,1,39) Product,prod_list_price List from products p where prod_list_price > (select avg(prod_list_price) from products where p.prod_category = prod_category) order by substr(prod_category,1,22), prod_list_price desc; #correlated subquery. The main query retrieves the Category, Product, and List Price details for each product whose price is greater than the average list price of all products within its category. The subquery's product category is joined with the main query's product category, referenced by the main query's table alias.
select E.job_ID current_job, last_name||' can earn twice their salary by changing jobs to: '||J.job_id options, E.salary current_salary, J.max_salary potential_max_salary from employees E join jobs J on (2*E.salary < J.max_salary) where E.salary>5000 order by last_name; #nonequijoins
SQL>select * from x union select * from y; #without duplicates
SQL>select * from x union all select * from y; #with duplicates
SQL>select * from x intersect select * from y; #rows that exist in both tables
SQL>select * from x minus select * from y; #rows only in x table

not null/unique/primary key/foreign key/check/ #types of integrity constraints; deferrable/not deferrable
create table check_constraint_example (col1 char(1) constraint check_col1 check (col1 in ('B','G','N'))); #check

Partitioned table #with each partition implemented in a different tablespace. #data partitioning. alter table <abc> drop partition <xyz>
create table sales (
prod_id number not null,
total_amount AS (quantity_sold * amount_sold), #virtual column
……
time_id date not null,
storage (initial 65536 minextents 1 maxextents 2147483645)
partition by range (time_id) #type of partitioning<range/list/hash/composite(subpartition)/reference<foreign key>/virtual column-based/interval> and the partition key
#partition by list (cov_type_cd) (partition cov_life values ('term 64','UL','ADB') tablespace cov_gib_ts, …..)
(partition sales_200701 values less than (to_date('2007-02-01','YYYY-MM-DD')) tablespace sales_ts_200501,
partition sales_200702 values less than (to_date('2007-03-01','YYYY-MM-DD')) tablespace sales_ts_200502,
……
partition sales_max values less than (maxvalue) tablespace sales_ts_max
)
);
#local partitioned indexes
create index sales_idx_l1 on sales (time_id) local (partition sales_idx_200501 tablespace sales_ts_idx_200501,……) #oracle manages the index partitions now (you cannot remove/add index partitions manually)
#Global Partitioned Indexes: if the data is partitioned by monthly ranges, you can define a global index partitioned by product. Even when partition pruning cannot occur for the data partitions due to a query's predicates, index partition pruning may still be possible with the global partitioned index. Rebuild the index after dropping a partition.
Prefixed and Nonprefixed Partition Indexes #prefixed performs better. An index is prefixed when the leftmost column of the index key is the same as the leftmost column of the index partition key. The index partitions to be scanned are then based on the pruned data partitions.

Clustered tables: two tables with a close link between them can have their data stored together in a single block to improve join operations.

#Data compression
create table ….. compress;
alter table sales move compress; #compress an existing table (use move nocompress to uncompress)

#Index Key Compression
create index comm_sr_prod_idx on commission (sales_rep_id, prod_id) compress 1; # index compression cannot be used on a unique index that has only one attribute. 1 is the number of prefixing (leading) columns to compress. In this case, you used a value of 1 to indicate that duplicate values of the first column, sales_rep_id, are to be removed.

#Parallel Processing
alter session enable parallel dml; #default for DDL(create/alter) and query, default disabled for DML(insert/update/delete/merge)
create table …… parallel;
select /*+ parallel (commission, 4) */ prod_id, sum(comm_amt), count(*) from commission group by prod_id; #parallel hint, 4 is the degree of parallelism(default is the number of cpus * PARALLEL_THREADS_PER_CPU)
#query rewrite & materialized view
create materialized view comm_prod_mv tablespace comm_prod_mv_ts storage (initial 50k next 50k) refresh complete next sysdate + 7 enable query rewrite as select sales_rep_id, prod_id, comm_date, count(*), sum(comm_amt) from commission group by sales_rep_id, prod_id, comm_date;

Structures

set serverout on size 10000 #print server messages
declare
l_counter number :=0; --or: l_counter number(12,2) := 0;
l_today date :=sysdate; --l_start_date := to_date('09-JUN-91:13:01', 'DD-MON-YY:HH24:MI');
l_name varchar2(50);
l_seq_val number :=cust_id_seq.nextval; --assigns the next sequence value
begin
l_counter :=l_counter+1;
l_name := 'LUCYTHECAT';
dbms_output.put_line (l_today);
end;
/ #run the block
show errors;
edit #edit the block above

SQL>
declare
v_val varchar2(80);
v_val2 varchar2(60);
cursor get_data is
select VAL from test;
begin
open get_data;
fetch get_data into v_val;
dbms_output.put_line(v_val); --prints only the first row
v_val2 := CASE v_val
when 'abc' then 'is abc'
when 'def' then 'is def'
else 'neither abc nor def'
END; --case … END
dbms_output.put_line (v_val2);
close get_data;
EXCEPTION
when no_data_found then
raise_application_error (-20052,'sorry no data found'); --use -20000 to -20999
when others then
raise_application_error (-20999,'something has gone wrong');
end;
/

##LOOP
SQL> declare
v_val varchar2(80);
cursor get_data is
select VAL from test;
begin
for i in get_data --no need to open/close the cursor
LOOP
dbms_output.put_line(i.VAL); --i is a row from the get_data cursor
end LOOP;
end;
/

LOOP
IF l_bank_balance >= 0 then EXIT;
ELSE
L_decision := 'ACCOUNT OVERDRAWN';
END IF;
END LOOP;
WHILE l_sales_total < 100000 LOOP
Select sales_amount into l_sale_amount from daily_sales;
l_sales_total := l_sales_total + l_sale_amount;
END LOOP;
declare
l_counter number;
i number;
begin
select count(*) into i from employee;
FOR l_counter IN 1 .. 10
LOOP
Statements;
END LOOP;
end;

##custom Exception
Declare
L_counter number := 0;
L_name employee.employee_name%type;
Cursor get_employee_name is
Select employee_name
From employee;
excep_old_friend Exception;
excep_never_met_them Exception;
Begin
Open get_employee_name;
Fetch get_employee_name into l_name;
If l_name = 'CARL DUDLEY' then
Raise excep_old_friend;
Else --ELSIF … then …
Raise excep_never_met_them;
End if;
Close get_employee_name;
Exception
When excep_old_friend then
Dbms_output.put_line('I know this person');
When excep_never_met_them then
Dbms_output.put_line('I do not know this person');
When others then
Dbms_output.put_line('Oracle Error: ' || sqlcode);
Dbms_output.put_line('Oracle Error message is: ' || sqlerrm);
End;

create or replace view emp_hq as select * from employee_master where loc_id = '2'; #views. x$, v$, gv$ (gv$ has an extra inst_id column) #stored objects
Triggers cannot exist independently of an Oracle Database 11g table. They are associated with one and only one table and, if a table is dropped, so is the trigger. #triggers
SQL> create or replace procedure printall #procedures
(first_me in varchar)
as
v_val varchar2(80);
cursor get_data is
select VAL from test where val like first_me || '%';
begin
for i in get_data
LOOP
dbms_output.put_line(i.VAL);
end LOOP;
end;
/
SQL> execute printall ('def') #or just exec printall ('def')
SQL>describe printall
create or replace function GetProductTaxIn #functions, http://www.techonthenet.com/oracle/functions/
(in_product_id number)
return number
is
v_price number;
cursor get_price_tax is
select nvl(round(prod_list_price * 1.15,2),0) #nvl: replace null with the second parameter
from products
where prod_id = in_product_id;
begin
open get_price_tax;
fetch get_price_tax into v_price;
return v_price;
exception
when others then v_price := 0;
return v_price;
end;
select prod_id, prod_list_price, GetProductTaxIn(Prod_id) from products;
select prod_id, prod_list_price, GetProductTaxIn(Prod_id) from products where GetProductTaxIn(Prod_id) >= 500;

SQL>select owner, object_name, object_type from dba_objects where status ^= 'VALID'; #packages roll functions and procedures together

Oracle Functions

select supplier_id, NVL2(supplier_desc, supplier_name<if not null>, supplier_name2<if null>) from suppliers;
select NULLIF(12, 12) from dual; #null
select NULLIF(12, 22) from dual; #12
SQL> select coalesce(NULL,'not null',NULL,'not') from dual; #returns the first non-null value
#Aggregation Functions
select … group by rollup(c.cust_gender, b.channel_class, to_char(a.time_id, 'yyyy-mm')); #generates subtotals for attributes specified in the group by clause, plus another row representing the grand total
select … group by cube(c.cust_gender, b.channel_class, to_char(a.time_id, 'yyyy-mm')); #more detailed subtotals
select count(*), department_id from employees group by department_id order by department_id; #group functions -> avg, sum, min, max, count and distinct keyword
SQL> select count(distinct commission_pct) from employees;
SQL> select count(*), count(distinct nvl(department_id,0)), count(distinct nvl(job_id,0)) from employees; #107 employees,19 jobs, 12 departments(1 is null)
SQL> select sum(2) from employees; #107 rows will return 214
SQL> select sum(sysdate-hire_date)/365.25 "total years" from employees; #years of all employees in total
SQL>select avg(2) from employees; #2, adds the number 2 across 107 rows and divides the total by the number of rows
SQL> select avg(salary) from employees;
SQL> select avg(sum(commission_pct)) from employees where nvl(department_id,0) in (40,80,0) group by department_id; # AVG(SUM(COMMISSION_PCT)) = (0.15 + 7.65)/2 = 3.9; group functions can nest at most two levels deep
select max(salary), count(*) from employees group by department_id order by department_id; #ones with highest salary in each department
select to_char(end_date,'YYYY') "year", count(*) "Number of employees" from job_history group by to_char(end_date,'YYYY') order by count(*) desc; #any item in the select list that is not a group function must be a grouping attribute of the GROUP BY clause
select department_id, job_id, sum(commission_pct) from employees where commission_pct is not null group by department_id, job_id; #group again by job_id in each department
select department_id, count(*) from job_history where department_id in (50,60,80,110) group by department_id having count(*)>1;
#Analysis Functions
select prod_id, sum(quantity_sold), rank () over (order by sum(quantity_sold) desc) as rank, dense_rank () over (order by sum(quantity_sold) desc) as dense_rank from sales where to_char(time_id, 'yyyy-mm') = '2001-06' group by prod_id; #on a tie, rank leaves a gap (from 5 directly to 7), but dense_rank continues 5, 5, 6

SQL>select TO_CHAR(123.02335,'FM9999999.0099') FROM DUAL; #FM removes leading and trailing spaces/zeros from the number; this mask allows up to seven digits to the left of the decimal point and, to the right, at least two and at most four digits, rounding anything beyond the fourth decimal digit
SELECT TO_CHAR(1234567, '999,999,999') FROM dual; #1,234,567. you can use FM999,999,999 to strip zeros/spaces
SQL> select TO_NUMBER('$94 567,00', 'L999G999D00', 'NLS_CURRENCY=''$'' NLS_NUMERIC_CHARACTERS='', ''') as result from dual; #pick numbers out of a string
select to_char(sysdate,'Month')||' is a special time' from dual; #e.g., September. format masks as above
select first_name from employees where to_char(hire_date,'FMDAY')='SATURDAY'; #fm removes blanks.
SQL> select to_date('25-DEC', 'DD-MON') from dual; #25-DEC-13
SQL> select to_date('25-DEC-10', 'fxDD-MON-YYYY') from dual; #fx is for exact matching. as 10 is YY rather than YYYY, this clause reports an error. use 2010 instead and it will work.
SQL>select next_day('03-NOV-1987','Wednesday') from dual; #04-NOV-87

SQL> select to_char(00001,'0999999')||' is a special number' from dual; #0000001
SQL> select to_char(00001,'999999')||' is a special number' from dual; #1
[Images: Oracle numeric format masks; date format masks; date format masks for time]

Categories: Databases, Oracle DB, tips

change oracle to use ASM for archive log destination

October 15th, 2013

Now our DB is using a filesystem location for storing archive logs:

/u01/app/oracle/product/11.2.0.3/dbhome_1/dbs/arch1_117_826841548.dbf
……

First, let’s check available DGs:

[oracle@test03 ~]$ export ORACLE_HOME=/u01/app/11.2.0.3/grid
[oracle@test03 ~]$ export ORACLE_SID=+ASM1
[oracle@test03 ~]$ $ORACLE_HOME/bin/sqlplus / as sysasm
SQL> select name,state from v$asm_diskgroup ;

NAME STATE
—————————— ———–
DA_SLCM07 MOUNTED #for archive log
DBFS_DG MOUNTED #for mounting RAM disk from DB
RE_SLCM07 MOUNTED #for backup

Now we can change archive log destination:

[oracle@test03 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
[oracle@test03 ~]$ export ORACLE_SID=oimdb31
[oracle@test03 ~]$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/product/11.2.0.3/dbhome_1/dbs/arch
Oldest online log sequence 6
Next log sequence to archive 8
Current log sequence 8

SQL> alter system set log_archive_dest_1='location=+DA_SLCM07' scope=both; #on RAC, you only need to change this once

After this, you can remove the old archive logs on the filesystem.

Categories: Databases, Oracle DB

Configuring Active/Passive Clustering for Apache Tomcat in Oracle RAC

October 1st, 2013

Note: this is from the book <Pro Oracle Database 11g RAC on Linux>

A slightly more complex example involves making Apache Tomcat or another web-accessible
application highly available. The difference in this setup compared to the database setup described in
the previous chapter lies in the fact that you need to use a floating virtual IP address. Floating in this
context means that the virtual IP address moves jointly with the application. Oracle calls its
implementation of a floating VIP an application VIP. Application VIPs were introduced in Oracle
Clusterware 10.2. Previous versions only had a node VIP.
The idea behind application VIPs is that, in the case of a node failure, both VIP and the application
migrate to the other node. The example that follows makes Apache Tomcat highly available, which is
accomplished by installing the binaries for version 6.0.26 in /u01/tomcat on two nodes in the cluster. The
rest of this section outlines the steps you must take to make Apache Tomcat highly available.
Oracle Grid Infrastructure does not provide an application VIP by default, so you have to create one.
A new utility, called appvipcfg, can be used to set up an application VIP, as in the following example:

[root@london1 ~]# appvipcfg
Production Copyright 2007, 2008, Oracle.All rights reserved

Usage: appvipcfg create -network=<network_number> -ip=<ip_address> -vipname=<vipname>
-user=<user_name>[-group=<group_name>]
delete -vipname=<vipname>
[root@london1 ~]# appvipcfg create -network=1 \
> -ip 172.17.1.108 -vipname httpd-vip -user=root
Production Copyright 2007, 2008, Oracle.All rights reserved
2010-06-18 16:07:12: Creating Resource Type
2010-06-18 16:07:12: Executing cmd: /u01/app/crs/bin/crsctl add type app.appvip.type -basetype
cluster_resource -file /u01/app/crs/crs/template/appvip.type
2010-06-18 16:07:13: Create the Resource
2010-06-18 16:07:13: Executing cmd: /u01/app/crs/bin/crsctl add resource httpd-vip -type
app.appvip.type -attr USR_ORA_VIP=172.17.1.104,START_DEPENDENCIES=hard(ora.net1.network)

pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x'

The preceding output shows that the new resource has been created, and it is owned by root
exclusively. You could use crsctl setperm to change the ACL, but this is not required for this process.
Bear in mind that no account other than root can start the resource at this time. You can verify the result
of this operation by querying the resource just created. Note how the httpd-vip does not have an ora.
prefix:

[root@london1 ~]# crsctl status resource httpd-vip
NAME=httpd-vip
TYPE=app.appvip.type
TARGET=OFFLINE
STATE=OFFLINE

Checking the resource profile reveals that it matches the output of the appvipcfg command; the
output has been shortened for readability, and it focuses only on the most important keys (the other
keys were removed for the sake of clarity):

[root@london1 ~]# crsctl stat res httpd-vip -p
NAME=httpd-vip
TYPE=app.appvip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
DEGREE=1
DESCRIPTION=Application VIP
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
STOP_DEPENDENCIES=hard(ora.net1.network)
USR_ORA_VIP=172.17.1.108
VERSION=11.2.0.1.0

The dependencies on the network ensure that, if the network is not started, it will be started as part
of the VIP start. The resource is controlled by the CRSD orarootagent because changes to the network
configuration require root privileges in Linux. The status of the resource revealed it was stopped; you
can use the following command to start it:

[root@london1 ~]# crsctl start res httpd-vip
CRS-2672: Attempting to start 'httpd-vip' on 'london2'
CRS-2676: Start of 'httpd-vip' on 'london2' succeeded
[root@london1 ~]#

In this case, Grid Infrastructure decided to start the resource on server london2.

[root@london1 ~]# crsctl status resource httpd-vip
NAME=httpd-vip
TYPE=app.appvip.type
TARGET=ONLINE
STATE=ONLINE on london2

You can verify this by querying the network setup, which has changed. The following output is again
shortened for readability:

[root@london2 source]# ifconfig

eth0:3 Link encap:Ethernet HWaddr 00:16:36:2B:F2:F6
inet addr:172.17.1.108 Bcast:172.17.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Next, you need an action script that controls the Tomcat resource. Again, the requirement is to
implement start, stop, clean, and check functions in the action script. The Oracle documentation lists
C, C++, and shell scripts as candidate languages for an action script. We think that the action script can
be any executable, as long as it returns 0 or 1, as required by Grid Infrastructure. A sample action script
that checks for the Tomcat webserver could be written in plain bash, as in the following example:

#!/bin/bash

export CATALINA_HOME=/u01/tomcat
export ORA_CRS_HOME=/u01/app/crs
export JAVA_HOME=$ORA_CRS_HOME/jdk
export CHECKURL="http://172.17.1.108:8080/tomcat-power.gif"

case $1 in
'start')
$CATALINA_HOME/bin/startup.sh
RET=$?
;;
'stop')
$CATALINA_HOME/bin/shutdown.sh
RET=$?
;;
'clean')
$CATALINA_HOME/bin/shutdown.sh
RET=$?
;;
'check')
# download a simple, small image from the tomcat server
/usr/bin/wget -q --delete-after $CHECKURL
RET=$?
;;
*)
RET=0
;;
esac
# A 0 indicates success, return 1 for an error.
if [ $RET -eq 0 ]; then

exit 0
else
exit 1
fi
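
Since Grid Infrastructure only inspects the exit code, the same contract can be written in other languages. Here is a rough Python equivalent of the bash script above, offered as an unofficial sketch with the same assumed paths and check URL:

#!/usr/bin/env python
import subprocess
import sys

CATALINA_HOME = "/u01/tomcat"
CHECKURL = "http://172.17.1.108:8080/tomcat-power.gif"

def run(action):
    if action == "start":
        return subprocess.call([CATALINA_HOME + "/bin/startup.sh"])
    if action in ("stop", "clean"):
        return subprocess.call([CATALINA_HOME + "/bin/shutdown.sh"])
    if action == "check":
        # Same probe as the bash version: fetch a small image from Tomcat.
        return subprocess.call(["/usr/bin/wget", "-q", "--delete-after", CHECKURL])
    return 0

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else ""
    # Grid Infrastructure only cares about the exit code: 0 success, 1 error.
    sys.exit(0 if run(action) == 0 else 1)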

In our installation, we created a $GRID_HOME/hadaemon/ directory on all nodes in the cluster to save
the Tomcat action script, tomcat.sh.
The next step is to ensure that the file is executable, and then to test that it works as expected.
Once you are confident that the script is working, you can add the Tomcat resource.
The easiest way to configure the new resource is by creating a text file with the required attributes,
as in this example:

[root@london1 hadaemon]# cat tomcat.profile
ACTION_SCRIPT=/u01/app/crs/hadaemon/tomcat.sh
PLACEMENT=restricted
HOSTING_MEMBERS=london1 london2
CHECK_INTERVAL=30
RESTART_ATTEMPTS=2
START_DEPENDENCIES=hard(httpd-vip)
STOP_DEPENDENCIES=hard(httpd-vip)

The following command registers the resource tomcat in Grid Infrastructure:

[root@london1 ~]# crsctl add resource tomcat -type cluster_resource -file tomcat.profile

Again, the profile registered matches what has been defined in the tomcat.profile file, plus the
default values:

[root@london1 hadaemon]# crsctl status resource tomcat -p
NAME=tomcat
TYPE=cluster_resource
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTION_SCRIPT=/u01/app/crs/hadaemon/tomcat.sh
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=30
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION=
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=london1 london2
LOAD=1
LOGGING_LEVEL=1

NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_POOLS=
START_DEPENDENCIES=hard(httpd-vip)
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(httpd-vip)
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h

This example includes a hard dependency on the httpd-vip resource, which is already running. If you
try to start the Tomcat resource now, you will get the following error:

[root@london1 hadaemon]# crsctl start resource tomcat
CRS-2672: Attempting to start 'tomcat' on 'london1'
CRS-2674: Start of 'tomcat' on 'london1' failed
CRS-2527: Unable to start 'tomcat' because it has a 'hard' dependency
on 'httpd-vip'
CRS-2525: All instances of the resource 'httpd-vip' are already running;
relocate is not allowed because the force option was not specified
CRS-4000: Command Start failed, or completed with errors.

To get around this problem, you need to begin by shutting down httpd-vip and then try again:

[root@london1 hadaemon]# crsctl stop res httpd-vip
CRS-2673: Attempting to stop 'httpd-vip' on 'london1'
CRS-2677: Stop of 'httpd-vip' on 'london1' succeeded
[root@london1 hadaemon]# crsctl start res tomcat
CRS-2672: Attempting to start 'httpd-vip' on 'london1'
CRS-2676: Start of 'httpd-vip' on 'london1' succeeded
CRS-2672: Attempting to start 'tomcat' on 'london1'
CRS-2676: Start of 'tomcat' on 'london1' succeeded

The Tomcat servlet and JSP container is now highly available. However, please bear in mind that the
session state of an application will not fail over to the passive node in the case of a node failure. The
preceding example could be further enhanced by using a shared clustered ACFS volume to store the
web applications used by Tomcat, as well as the Tomcat binaries themselves.

Categories: HA, Oracle DB

oracle RAC startup sequence in details

October 1st, 2013

Note: this is from the book <Pro Oracle Database 11g RAC on Linux>

The startup sequence in Grid Infrastructure is not completely documented in the Oracle Clusterware
Administration Guide 11.2. Therefore, this section elaborates on some of the finer aspects
of that sequence.
The init process, father of all processes in Linux, spawns OHAS. This occurs in two stages: first, the
/etc/init.d/init.ohasd script is invoked with the run argument. Second, this script then calls the
/etc/init.d/ohasd script, which starts $GRID_HOME/bin/ohasd.bin. The init scripts log potential problems
using the syslog facility present on all Linux systems. The ohasd.bin executable uses
$GRID_HOME/log/hostname/ohasd/ohasd.log to report its activities. All processes mentioned so far run

with root privileges. If the administrator has disabled the Grid Infrastructure, then the Grid
Infrastructure high availability stack must be started manually; otherwise, the startup sequence
continues.
The ohasd.bin process then spawns four processes, all of which are located in $GRID_HOME/bin. Note
that Grid Infrastructure can be installed to an operating system account other than oracle. This
chapter refers to the owner of the software stack as the Grid software owner. This chapter also assumes
that the RAC binaries were installed as the oracle user. Finally, this chapter assumes that $LOG_HOME
points to $ORACLE_HOME/log/hostname:
•oraagent.bin, started as the Grid software owner
•cssdmonitor, started as root
•cssdagent, started as root
•orarootagent.bin, started as root

It is important to remember that these processes are created by ohasd.bin, rather than the CRS
daemon process, which has not been created yet. Next, the Oracle Root Agent starts the following
executables, which are also located in $GRID_HOME/bin:
•crsd.bin: started as root
•diskmon.bin: started as the Grid software owner
•octssd.bin: started as root

The Oracle Agent (started as the Grid software owner) in turn will start these executables:
•evmd.bin: started as the Grid software owner
•evmlogger.bin: started as the Grid software owner
•gipcd.bin: started as the Grid software owner
•gpnpd.bin: started as the Grid software owner
•mdnsd.bin: started as the Grid software owner
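
One way to watch this hierarchy on a live node is to list the daemons together with their parent process ids; a small sketch (assuming a standard Linux ps and a running Grid Infrastructure stack):

import subprocess

# Show pid, parent pid, owner, and command for the Grid Infrastructure daemons.
out = subprocess.check_output(["ps", "-eo", "pid,ppid,user,comm"]).decode()

daemons = ("ohasd", "oraagent", "orarootagent", "cssdagent", "cssdmonitor",
           "crsd", "octssd", "evmd", "gipcd", "gpnpd", "mdnsd", "ocssd", "diskmon")
for line in out.splitlines():
    if any(d in line for d in daemons):
        print(line)  # the ppid column reveals who spawned whom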

The cssdagent executable is responsible for starting ocssd.bin, which runs as the Grid software
owner. The cssdagent executable doesn’t spawn additional processes.
Once the CRS daemon is created by the OHAS’s Oracle Root Agent, the Cluster Ready Services stack
will be started. The following actions depend on CRS to create additional Oracle Agents (owned by the
Grid software owner and oracle) and another Oracle Root Agent. Again, it is important to note the
distinction between these agents and the ones created by OHAS. You will also see that they are different
because their log files are located in $LOG_HOME/agent/crsd/ rather than $LOG_HOME/agent/ohasd. You will
see the following processes spawned by crsd.bin:
•oraagent.bin: started as the Grid software owner
•oraagent.bin: started as oracle
•orarootagent.bin: started as root
These agents are henceforth responsible for continuing the start process. The Grid software owner’s
Oracle Agent (oraagent.bin) will start the following infrastructure components, all as the Grid Software
owner:

•Clustered ASM instance
•ons
•enhanced ONS (eONS), a Java process
•tnslsnr: a SCAN listener
•tnslsnr: a node listener

The ONS binary is an exception because it is not started from $GRID_HOME/bin, but from
$GRID_HOME/opmn/bin. The enhanced ONS service is a Java process. Therefore, it doesn’t start from
$GRID_HOME/bin, either; however, its JAR files are located there.
The oracle Oracle Agent oraagent.bin will start the database and services associated with the
database resource, as defined in the Oracle Cluster Registry.
The Oracle Root Agent will start the following resources:
•The network resource
•The SCAN virtual IP address
•The Node virtual IP address
•The GNS virtual IP address if GNS is configured
•The GNS daemon if the cluster is configured to use GNS
•The ASM Cluster File System Registry

PS:

Here’s a picture of clusterware:

[Image: clusterware]

Categories: Databases, Oracle DB

The history of Oracle RAC

September 27th, 2013

NOTE: This is from the book <Pro Oracle Database 11g RAC on Linux>

Oracle RAC—though branded as an entirely new product when released with Oracle 9i Release 1—
has a long track record. Initially known as Oracle Parallel Server (OPS), it was introduced with Oracle
6.0.35, which eventually was renamed Oracle 6.2. OPS was based on the VAX/VMS distributed lock
manager because VAX/VMS machines essentially were the only clustered computers at the time;
however, the DLM used proved too slow for OPS due to internal design limitations. So Oracle
development wrote its own distributed lock manager, which saw the light of day with Oracle 6.2 for
Digital.
The OPS code matured well over time in the Oracle 7, 8, and 8i releases. You can read a remarkable
story about the implementation of OPS in Oracle Insights: Tales of the Oak Table (Apress, 2004).
Finally, with the advent of Oracle 9.0.1, OPS was relaunched as Real Application Clusters, and it
hasn't been renamed since. Oracle was available on the Linux platform prior to 9i Release 1, but at that
time no standard enterprise Linux distributions as we know them today were available. Linux—even
though very mature by then—was still perceived to be lacking in support, so vendors such as Red Hat
and SuSE released road maps and support for their distributions alongside their community versions. By
2001, these platforms emerged as stable and mature, justifying the investment by Oracle and other big
software players, who recognized the potential behind the open source operating system. Because it
runs on almost all hardware, but most importantly on industry-standard components, Linux offers a
great platform and cost model for running OPS and RAC.
At the time the name was changed from OPS to RAC, marketing material suggested that RAC was an
entirely new product. However, RAC 9i was not entirely new at the time; portions of its code were
leveraged from previous Oracle releases.
That said, there was a significant change between RAC and OPS in the area of cache coherency. The
basic dilemma any shared-everything software has to solve is how to limit concurrent access to a block:
no two processes can be allowed to modify the same block at the same time; otherwise, a split-brain
situation would arise. One approach to solving this problem is to simply serialize access to the block.
However, that would lead to massive contention, and it wouldn’t scale at all. So Oracle’s engineers
decided to coordinate multiple versions of a block in memory across different instances. At the time,
parallel cache management was used in conjunction with a number of background processes (most
notably the distributed lock manager, DLM). Oracle ensured that a particular block could only be
modified by one instance at a time, using an elaborate system of locks. For example, if instance B needed
a copy of a block instance A modified, then the dirty block had to be written to disk by instance A before
instance B could read it. This was called block pinging, which tended to be slow because it involved disk
activity. Therefore, avoiding or reducing block pinging was one of Oracle’s design goals when tuning and
developing OPS applications; a lot of effort was spent on ensuring that applications connecting to OPS
changed only their own data.

The introduction of Cache Fusion phase I in Oracle 8i proved a significant improvement. Block
pings were no longer necessary for consistent read blocks and read-only traffic. However, they were still
needed for current reads. The Cache Fusion architecture reduced the need to partition workload to
instances. The Oracle 8.1.5 “New Features” guide states that changes to the interinstance traffic
include:

“… a new diskless ping architecture, called cache fusion, that provides copies of blocks
directly from the holding instance’s memory cache to the requesting instance’s
memory cache. This functionality greatly improves interinstance communication.
Cache fusion is particularly useful for databases where updates and queries on the
same data tend to occur simultaneously and where, for whatever reason, the data and
users have not been isolated to specific nodes so that all activity can take place on a
single instance. With cache fusion, there is less need to concentrate on data or user
partitioning by instance.”

In Oracle 9i Release 1, Oracle finally implemented Cache Fusion phase II, which uses a fast, high
speed interconnect to provide cache-to-cache transfers between instances, completely eliminating disk
IO and optimizing read/write concurrency. Finally, blocks could be shipped across the interconnect for
current and consistent reads.

Oracle addressed two general weaknesses of its Linux port with RAC 9.0.1: previous versions lacked
a cluster manager and a cluster file system. With Oracle 9i, Oracle shipped its cluster manager, called
OraCM for Linux and Windows NT (all other platforms used a third-party cluster manager). OraCM
provided a global view of the cluster and all nodes in it. It also controlled cluster membership, and it
needed to be installed and configured before the actual binaries for RAC could be deployed.
Cluster configuration was stored in a server-management file on shared storage, and cluster
membership was determined by using a quorum file or partition (also on shared storage).
Oracle also initiated the Oracle Cluster File System (OCFS) project for Linux 2.4 kernels
(subsequently OCFS2 has been developed for 2.6 kernels, see below); this file system is released under
the GNU public license. OCFS version one was not POSIX compliant; nevertheless, it allowed users to
store Oracle database files such as control files, online redo logs, and database files. However, it was not
possible to store any Oracle binaries in OCFS for shared Oracle homes. OCFS partitions are configured
just like normal file systems in /etc/fstab. Equally, they are reported like an
ordinary mount point in the output of the mount command. The main drawback was the inherent
fragmentation that could not be defragmented, except by reformatting the file system.

With the release of Oracle 10.1, Oracle delivered significant improvements in cluster manageability,
many of which have already been discussed. Two of the main new features were Automatic Storage
Management and Cluster Ready Services (which was renamed to Clusterware with 10.2 and 11.1, and is
now called Grid Infrastructure). The OraCM cluster manager, which was available for Linux and
Windows NT only, has been replaced by the Cluster Ready Services feature, which now offers the same
“feel” for RAC on every platform. The server-management file has been replaced by the Oracle Cluster
Registry, whereas the quorum disk is now known as the voting disk. With 10g Release 2, voting disks
could be stored at multiple locations to provide further redundancy in case of logical file corruption. In
10.1, the files could only reside on raw devices; since 10.2, they can be moved to block devices, as well.
The Oracle 11.1 installer finally allows the placement of the Oracle Cluster Registry and voting disks on
block devices without also having to use raw devices. Raw devices have been deprecated in the Linux
kernel in favor of the O_DIRECT flag. With Grid Infrastructure 11.2, the voting disk and cluster registry
should be stored in ASM, and they are only allowed on block/raw devices during the migration phase.
ASM is a clustered logical volume manager that’s available on all platforms and is Oracle’s preferred
storage option—in fact, you have to use ASM with RAC Standard Edition.
In 2005, Oracle released OCFS2, which was now finally POSIX compliant and much more feature
rich. It is possible to install Oracle binaries on OCFS2, but the binaries have to reside on a different
partition than the datafiles because different mount options are required. It is no longer possible to
install Grid Infrastructure, the successor to Clusterware, as a shared Oracle home on OCFS2; however, it
is possible to install the RDBMS binaries on OCFS2 as a shared Oracle home.
Since the introduction of RAC, we’ve seen the gradual change from SMP servers to hardware based
on the industry-standard x86 and x86-64 architectures. Linux has seen great acceptance in the industry,
and it keeps growing, taking market share mainly from the established UNIX systems, such as IBM’s AIX,
HP-UX, and Sun Solaris. With the combined reduced costs for the hardware and the operating system,
RAC is an increasingly viable option for businesses.

 

Categories: Databases, Oracle DB Tags:

oracle rac on linux installation

August 6th, 2013 No comments

Installation media zip files: 1,2 -> database; 3 -> grid; 4 -> client; 5 -> gateways; 6 -> examples; 7 -> deinstall

---private IP, eth1
192.168.13.86/24
192.168.13.87/24

—public ip, eth0
10.240.149.109/21
10.240.149.110/21

—vip #the VIP alias will be installed automatically; it should be in the same subnet as the public IP/SCAN IP
10.240.149.107/21
10.240.149.108/21

—scan IP #the SCAN IP should be defined in DNS; multiple IPs can map to one scan-name (round-robin)
10.240.149.106

—/etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.13.86 node1-priv.localdomain node1-priv
192.168.13.87 node2-priv.localdomain node2-priv

PS:
hostname should return node1.us.test.com

—DNS configuration

yum install bind bind-chroot caching-nameserver
cd /var/named/chroot/etc
cp -p named.caching-nameserver.conf named.conf
options {
listen-on port 53 { any; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";

allow-query { any; };
allow-query-cache { any; };
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
view localhost_resolver {
match-clients { any; };
match-destinations { any; };
recursion no;
include "/etc/named.rfc1912.zones";
};
vi named.rfc1912.zones #append at the end

zone "us.test.com" IN {
type master;
file "us.test.com.zone"; #dns -> ip
allow-update { none; };
};

zone "149.240.10.in-addr.arpa." IN {
type master;
file "149.240.10.in-addr.arpa"; #ip -> dns, reverse dns
allow-update { none; };
};
cd /var/named/chroot/var/named/
vi us.test.com.zone #append at the end

scan-cluster IN A 10.240.149.106
node1 IN A 10.240.149.109
node1-vip IN A 10.240.149.107
node2 IN A 10.240.149.110
node2-vip IN A 10.240.149.108

 

cp -p named.local 149.240.10.in-addr.arpa
vi 149.240.10.in-addr.arpa #append at the end

106 IN PTR scan-cluster.us.test.com. #if using SCAN with multiple IPs, add one PTR record per IP here
107 IN PTR node1-vip.us.test.com.
108 IN PTR node2-vip.us.test.com.
109 IN PTR node1.us.test.com.
110 IN PTR node2.us.test.com.
/etc/init.d/named start
chkconfig named on

vi /etc/resolv.conf #on node1, node2
search us.test.com #on first line
nameserver 10.245.28.40
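
Before starting the grid installation it is worth verifying the DNS entries from both nodes; a quick sanity check using the names defined above (nslookup comes with bind-utils):

nslookup scan-cluster.us.test.com #should return 10.240.149.106; with multiple SCAN IPs, repeated queries rotate round-robin
nslookup node1-vip.us.test.com #10.240.149.107
nslookup 10.240.149.109 #reverse lookup should return node1.us.test.com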

—1_preuser.sh #script listings are in the Scripts section below

—2_predir.sh

—3_prelimits.sh

—4_prelogin.sh

—5_preprofile.sh

—6_presysctl.sh

—stop ntpd
service ntpd stop
chkconfig ntpd off
57 18 * * * /usr/sbin/ntpdate 10.245.72.1 #cron entry to keep time in sync after ntpd is disabled

—ssh config for oracle/grid (passwordless, even for the host itself)

ssh-keygen -t rsa
cat .ssh/id_rsa.pub | tr -d '\r\n'
vi .ssh/authorized_keys
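
A minimal sketch of the whole key exchange, assuming it is run for both the oracle and grid users on node1 and node2 (ssh-copy-id appends the key to authorized_keys for you):

ssh-keygen -t rsa #on each node, accept defaults, empty passphrase
ssh-copy-id node1 #run from both nodes so every key, including the host's own, ends up in authorized_keys
ssh-copy-id node2
ssh node1 date; ssh node2 date #should return without a password prompt; also records host keys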

—config disks
allocate iscsi LUN
fdisk LUN #partition on one node only; no need to create a filesystem
yum install oracleasm-support oracleasm oracleasmlib #on two nodes
/usr/sbin/oracleasm status
/usr/sbin/oracleasm configure -i #grid/asmadmin/y/y
/usr/sbin/oracleasm init
/usr/sbin/oracleasm status
/usr/sbin/oracleasm configure
/usr/sbin/oracleasm listdisks #on one node
/usr/sbin/oracleasm createdisk VOL1 /dev/sda1
/usr/sbin/oracleasm createdisk VOL2 /dev/sdb1
/usr/sbin/oracleasm createdisk VOL3 /dev/sdc1
/usr/sbin/oracleasm createdisk VOL4 /dev/sdd1
/usr/sbin/oracleasm listdisks
/usr/sbin/oracleasm scandisks #on the other node
/usr/sbin/oracleasm listdisks
/usr/sbin/oracleasm querydisk /dev/sd*

—runcluvfy.sh
su - grid
cd grid
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose #run -fixup with root


—install cvuqdisk
rpm -Uvh grid/rpm/cvuqdisk-1.0.9-1.rpm

—install grid
vncviewer #su - grid
su - grid
./runInstaller #Install and Configure Oracle Grid Infrastructure for a Cluster/Advanced Installation/scan-cluster(.localdomain), uncheck configure GNS<grid naming service>

#disk group GRIDDG -> VOL1,VOL2; run the root scripts on node1 first, then node2

—install RAC(aka RDBMS software)
vncviewer #su - oracle
./runInstaller #install database software only

—create DATA/FLASH disk groups
vncviewer to grid user
su - grid
asmca #create DATA/VOL3, FLASH/VOL4

—create db
vncviewer to oracle user
su - oracle
dbca #Admin-Managed/devdb/select node1,node2/+DATA/+FLASH/AL32UTF8_AL16UTF16, database template in $ORACLE_HOME/assistants/dbca/templates/. DBCA scripts in $ORACLE_BASE/admin/devdb/scripts/(will be removed when removing DB using dbca).
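
After dbca finishes, a quick sanity check of the whole stack (the same commands appear in the management tips below):

crsctl check cluster -all #su - grid, clusterware health on all nodes
srvctl status database -d devdb #instances devdb1 and devdb2 should be running
srvctl config scan #verify the SCAN configuration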

Scripts

1_preuser.sh

#!/bin/bash
#Purpose:Create 6 groups named 'oinstall','dba','asmadmin','asmdba','asmoper','oper', plus 2 users named 'oracle','grid'.
#Also setting the Environment
#variable for oracle user.
#variable for grid user.
#Usage:Log on as the superuser('root'),and then execute the command:#./1preusers.sh
#Author:Asher Huang
echo "Now create 6 groups named 'oinstall','dba','asmadmin','asmdba','asmoper','oper'"
echo "Plus 2 users named 'oracle','grid',Also setting the Environment"
groupadd -g 54321 oinstall
groupadd -g 54324 asmadmin
groupadd -g 54325 asmdba
groupadd -g 54326 asmoper
groupadd -g 54327 dba
groupadd -g 54328 oper
useradd -u 54329 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "grid Infrastructure Owner" grid
echo "grid" | passwd --stdin grid
echo 'export PS1="`/bin/hostname`-> "'>> /home/grid/.bash_profile
echo "export TMP=/tmp">> /home/grid/.bash_profile
echo 'export TMPDIR=$TMP'>>/home/grid/.bash_profile
echo "export ORACLE_BASE=/u01/app/grid">> /home/grid/.bash_profile
echo "export ORACLE_HOME=/u01/app/11.2.0/grid">> /home/grid/.bash_profile
echo "export ORACLE_TERM=xterm">> /home/grid/.bash_profile
echo "export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'" >> /home/grid/.bash_profile
echo 'export TNS_ADMIN=$ORACLE_HOME/network/admin' >> /home/grid/.bash_profile
echo 'export PATH=/usr/sbin:$PATH'>> /home/grid/.bash_profile
echo 'export PATH=$ORACLE_HOME/bin:$PATH'>> /home/grid/.bash_profile
echo 'export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib'>> /home/grid/.bash_profile
echo 'export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib'>> /home/grid/.bash_profile
echo "export EDITOR=vi" >> /home/grid/.bash_profile
echo "export LANG=en_US" >> /home/grid/.bash_profile
echo "export NLS_LANG=american_america.AL32UTF8" >> /home/grid/.bash_profile
echo "umask 022">> /home/grid/.bash_profile

useradd -u 54330 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
echo "oracle" | passwd --stdin oracle
echo 'export PS1="`/bin/hostname`-> "'>> /home/oracle/.bash_profile
echo "export TMP=/tmp">> /home/oracle/.bash_profile
echo 'export TMPDIR=$TMP'>>/home/oracle/.bash_profile
echo "export ORACLE_BASE=/u01/app/oracle">> /home/oracle/.bash_profile
echo 'export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1'>> /home/oracle/.bash_profile
echo "export ORACLE_UNQNAME=devdb">> /home/oracle/.bash_profile
echo 'export TNS_ADMIN=$ORACLE_HOME/network/admin' >> /home/oracle/.bash_profile
echo "export ORACLE_TERM=xterm">> /home/oracle/.bash_profile
echo 'export PATH=/usr/sbin:$PATH'>> /home/oracle/.bash_profile
echo 'export PATH=$ORACLE_HOME/bin:$PATH'>> /home/oracle/.bash_profile
echo 'export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib'>> /home/oracle/.bash_profile
echo 'export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib'>> /home/oracle/.bash_profile
echo "export EDITOR=vi" >> /home/oracle/.bash_profile
echo "export LANG=en_US" >> /home/oracle/.bash_profile
echo "export NLS_LANG=american_america.AL32UTF8" >> /home/oracle/.bash_profile
echo "export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'" >> /home/oracle/.bash_profile
echo "umask 022">> /home/oracle/.bash_profile
#should be changed on your env
#echo "export ORACLE_SID=+ASM1">> /home/grid/.bash_profile
#echo "export ORACLE_SID=+ASM2">> /home/grid/.bash_profile
#echo "export ORACLE_HOSTNAME=node1.us.test.com">> /home/oracle/.bash_profile
#echo "export ORACLE_HOSTNAME=node2.us.test.com">> /home/oracle/.bash_profile
#echo "export ORACLE_SID=devdb1">> /home/oracle/.bash_profile
#echo "export ORACLE_SID=devdb2">> /home/oracle/.bash_profile
chown -R oracle:oinstall /home/oracle/
chmod -R 755 /home/oracle/
chown -R grid:oinstall /home/grid
chmod -R 755 /home/grid
echo "The groups and users have been created"
echo "The environment for grid and oracle has also been set successfully"

2_predir.sh

#!/bin/bash
#Purpose:Create the necessary directories for the oracle and grid users and change their ownership to those users.
#Usage:Log on as the superuser('root'),and then execute the command:#./2predir.sh
#Author:Asher Huang
echo "Now create the necessary directory for oracle,grid users and change the authention to oracle,grid users..."
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chmod -R 775 /u01
echo "The necessary directory for oracle,grid users and change the authention to oracle,grid users has been finished"

3_prelimits.sh

#!/bin/bash
#Purpose:Change the /etc/security/limits.conf.
#Usage:Log on as the superuser('root'),and then execute the command:#./3prelimits.sh
#Author:Asher Huang
echo "Now modify the /etc/security/limits.conf,but backup it named /etc/security/limits.conf.bak before"
cp /etc/security/limits.conf /etc/security/limits.conf.bak
echo "oracle soft nproc 2047" >>/etc/security/limits.conf
echo "oracle hard nproc 16384" >>/etc/security/limits.conf
echo "oracle soft nofile 1024" >>/etc/security/limits.conf
echo "oracle hard nofile 65536" >>/etc/security/limits.conf
echo "grid soft nproc 2047" >>/etc/security/limits.conf
echo "grid hard nproc 16384" >>/etc/security/limits.conf
echo "grid soft nofile 1024" >>/etc/security/limits.conf
echo "grid hard nofile 65536" >>/etc/security/limits.conf
echo "Modifing the /etc/security/limits.conf has been succeed."

4_prelogin.sh

#!/bin/bash
#Purpose:Modify the /etc/pam.d/login.
#Usage:Log on as the superuser('root'),and then execute the command:#./4prelimits.sh
#Author:Asher Huang
echo "Now modify the /etc/pam.d/login,but with a backup named /etc/pam.d/login.bak"
cp /etc/pam.d/login /etc/pam.d/login.bak
echo "session required /lib64/security/pam_limits.so" >>/etc/pam.d/login
echo "session required pam_limits.so" >>/etc/pam.d/login
echo "Modifing the /etc/pam.d/login has been succeed."

5_preprofile.sh

#!/bin/bash
#Purpose:Modify the /etc/profile.
#Usage:Log on as the superuser('root'),and then execute the command:#./5preprofile.sh
#Author:Asher Huang
echo "Now modify the /etc/profile,but with a backup named /etc/profile.bak"
cp /etc/profile /etc/profile.bak
echo 'if [ $USER = "oracle" ]||[ $USER = "grid" ]; then' >> /etc/profile
echo 'if [ $SHELL = "/bin/ksh" ]; then' >> /etc/profile
echo 'ulimit -p 16384' >> /etc/profile
echo 'ulimit -n 65536' >> /etc/profile
echo 'else' >> /etc/profile
echo 'ulimit -u 16384 -n 65536' >> /etc/profile
echo 'fi' >> /etc/profile
echo 'fi' >> /etc/profile #one fi for each if
echo "Modifying the /etc/profile has succeeded."

6_presysctl.sh

#!/bin/bash
#Purpose:Modify the /etc/sysctl.conf.
#Usage:Log on as the superuser('root'),and then execute the command:#./6presysctl.sh
#Author:Asher Huang
echo "Now modify the /etc/sysctl.conf,but with a backup named /etc/sysctl.bak"
cp /etc/sysctl.conf /etc/sysctl.conf.bak
echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf
echo "fs.file-max = 6815744" >> /etc/sysctl.conf
echo "kernel.shmall = 2097152" >> /etc/sysctl.conf
echo "kernel.shmmax = 2100549632" >> /etc/sysctl.conf
echo "kernel.shmmni = 4096" >> /etc/sysctl.conf
echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 9000 65500" >> /etc/sysctl.conf
echo "net.core.rmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
echo "net.core.wmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.wmem_max = 1048586" >> /etc/sysctl.conf
echo "net.ipv4.tcp_wmem = 262144 262144 262144" >> /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4194304 4194304 4194304" >> /etc/sysctl.conf
echo "Modifing the /etc/sysctl.conf has been succeed."
echo "Now make the changes take effect....."
sysctl -p

Categories: Databases, Oracle DB Tags:

mysql tips

July 1st, 2013 No comments

Optimization: http://dev.mysql.com/doc/refman/5.1/en/optimization.html
HA: http://dev.mysql.com/doc/refman/5.1/en/ha-overview.html
Replication: http://dev.mysql.com/doc/refman/5.1/en/replication.html
Cluster: http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster.html

log in:
mysql -h localhost -u root -pyourpassword

set password:
set password for 'root'@'localhost' = password('123456');

create db:
CREATE DATABASE `newone` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

add privileges:
CREATE USER 'root'@'192.168.0.1';
GRANT ALL PRIVILEGES ON * . * TO 'root'@'192.168.0.1' WITH GRANT OPTION MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0 ;
flush privileges;

add user:
CREATE USER 'hello'@'192.168.0.2' IDENTIFIED BY '***';
GRANT ALL PRIVILEGES ON * . * TO 'hello'@'192.168.0.2' IDENTIFIED BY '***' WITH GRANT OPTION MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0 ;
flush privileges;

check user:
mysql> select Host,User from mysql.user;

export db:
mysqldump -u root -pyourpassword --no-data --default-character-set=utf8 test>aaa.sql #--no-data exports only structures

import db:
mysql -h localhost -u root -pyourpassword dbname</root/db.sql

Categories: Databases, MySQL DB, tips Tags:

oracle database tips – management

June 30th, 2013 No comments
###General

dbca
netmgr
netca #su - grid, can change listener port with this
oifcfg #Oracle Interface Configuration Tool, used for adding new public/private interfaces
appvipcfg #appvipcfg create -network=1 -ip 172.17.1.108 -vipname httpd-vip -user=root
lsnrctl #su - grid first; change_password, save_config
asmca
asmcmd -p #asmcmd -p ls -l; ls --permission; lsdg; find -t datafile DATA/ sys*;pwd;lsct<asm client>;help cp

v$parameter(for current session), v$system_parameter(for new sessions), v$spparameter

sqlnet.ora #su - grid, Profiles, define the sequence of naming methods; access control (through netmgr)

orapwd #sync dictionary/password file after upgrading oracle db
/etc/oratab #which DBs are installed, and control whether dbstart/dbshut is used to start/stop DB

PMON registers instance services with the listener (force a registration with alter system register). PMON works with dispatchers in the shared server architecture; the dedicated server architecture is used for restricted operations

DB modes

startup nomount #read spfile or init.ora and start up oracle memory structures/background processes. the instance is started but the db is not yet associated with it. control files may be recreated in this mode.
alter database mount #the instance mounts the database. Control files (containing the names of datafiles and redo logs) are read, but datafiles and redo logs are still not open
startup force #restart the instance (shutdown abort, then startup; useful if the instance was not shut down properly and cannot start up now)
startup mount

alter database open [read only]; #datafiles and redo logs open, ready for use

startup restrict #only DBA can use the DB
alter system quiesce restrict #The activities of other users continue until they become inactive
alter system unquiesce
shutdown normal(all connections quit)/immediate(rollback first)/transactional(after commit)/abort

SQL> select open_mode from v$database; #read write

segment, extents, blocks

Each segment is a single instance of a table,partition,cluster, index,or temporary or undo segment. So,for example,a table with two indexes is implemented as three segments in the schema.

As data is added to Oracle, it will first fill the blocks in the allocated extents and once those extents are full,new extents can be added to the segment as long as space allows.
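
For example, a table with two indexes shows up as three segments; a sketch assuming a SCOTT-like schema (adjust owner and names):

SQL> select segment_name, segment_type, bytes from dba_segments where owner='SCOTT'; #one TABLE segment plus one INDEX segment per index
SQL> select count(*) from dba_extents where owner='SCOTT' and segment_name='EMP'; #extents currently allocated to the table segment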

EM

emctl status/start/stop dbconsole(management agent)/agent/oms
emca -config dbcontrol db -cluster #EM configuration assistant
emca -reconfig dbcontrol -cluster #reconfigure the Console to start on a different node (or on more than one if you desire)
emca -displayConfig dbcontrol -cluster #current config, Management Server’s location
emca -addInstdb/-deleteInst

###Performance
AWR (Automatic Workload Repository) is used for storing database statistics for performance tuning. A set of tables is created for the AWR under the SYS schema in the SYSAUX tablespace

MMON (Manageability Monitor) captures base statistics every 60 minutes; the snapshots plus in-memory statistics form the AWR (ADDM runs automatically after each AWR snapshot)

MMNL (Manageability Monitor Light) performs tasks related to the Active Session History (ASH); ASH refreshes every second, recording what the sessions are waiting for

ADDM(automatic database diagnostic monitor) diagnoses AWR reports and suggests potential solutions
SQL>select * from v$sql where cpu_time>200000 #cpu_time is in microseconds (200000 = 0.2s)
explain plan for select count(*) from lineitem; #and then issue select * from table(DBMS_XPLAN.DISPLAY);
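To actually read the AWR/ADDM data, Oracle ships report scripts under $ORACLE_HOME/rdbms/admin; a minimal sketch:
SQL> exec dbms_workload_repository.create_snapshot; #take a manual AWR snapshot
SQL> @?/rdbms/admin/awrrpt.sql #prompts for a snapshot range and produces the AWR report
SQL> @?/rdbms/admin/addmrpt.sql #ADDM findings and recommendations for a snapshot range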
###LOG
show parameter dump_dest; #/u01/app/oracle/diag/rdbms/devdb/devdb1/{trace,alert}
$ORACLE_BASE/cfgtoollogs/dbca #for DBCA trace and log files
$ORACLE_BASE/admin/devdb
adump,  audit files
dpdump, Data Pump Files
hdump, High availability trace files
pfile, Initialization file;
$GRID_HOME/log/hostname is $LOG_HOME
ohasd/ #ohasd.bin’s booting log
acfs/crsd/cssd/evmd/diskmon/gipcd/gnsd/gpnpd/mdnsd/racg/srvm
agent
client
ADR(automatic diagnostic repository)
adrci #su – oracle/grid, manage alert/trace files
select name, value from gv$diag_info; #ADR  directory structure for all instances, e.g. Diag Enabled/ADR Base/ADR Home/Diag Trace/Diag Alert/Health Monitor/Active Problem Count
adrci exec="show home";
show home;
set base xxx; <show parameter diagnostic_dest>
set homepath diag/rdbms/orcl6/orcl6; <or export ADR_HOME='xxx'>
show alert -p "message_text like '%start%'";
show alert -tail 5 -f;
show incident;
ips create package incident <incident id>;
ips generate package 1 in /tmp;
show tracefile;
show tracefile -I <incident id>;

show trace <tracefile name from show tracefile -I, like PROD1_lmhb_7430_i27729.trc>;

HM(health check)
SQL> select name from v$hm_check; #health check names
SQL> exec dbms_hm.run_check('DB Structure Integrity Check');
adrci> show hm_run;  #to check HM details

adrci> create report hm_run <RUN_NAME from show hm_run>;

adrci> show report hm_run <RUN_NAME from show hm_run>;

SQL>  select description, damage_description from v$hm_finding where run_id = 62; #run_id is from show hm_run
DBMS_MONITOR and trace(sql_trace is deprecated)
DBMS_MONITOR traces for a specific session(session_trace_enable or database_trace_enable for all sessions), module(serv_mod_act_trace_enable), action(serv_mod_act_trace_enable), or client identifier

SQL> SELECT sid, serial# FROM v$session WHERE username = 'TPCC'; # you may need to join V$SESSION to other dynamic performance views, such as V$SQL, to identify the session of interest
SQL> EXECUTE dbms_monitor.session_trace_enable (session_id=>164); #enable tracing for a specific session, the result will be appended to the current trace file
SQL> EXECUTE dbms_monitor.session_trace_disable (session_id=>164); #disable tracing for the session

tkprof #view the activities of a session with finer granularity after trace files generated
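
A sketch of formatting such a trace with tkprof; the trace file name below is hypothetical (find the real one via the Diag Trace directory from v$diag_info):

tkprof devdb1_ora_12345.trc /tmp/trace_report.txt sys=no sort=exeela #exclude SYS recursive SQL, sort by elapsed execution time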

###Patch
Patch types
One-off patch/Merge level request patch
CPU(Critical Patch Update, overall release of security fixes each quarter)
PSU(Patch Set Updates, cumulative patches,  low risk and RAC rolling installable, include the latest CPU)
BP(Bundle Patch,  have been replaced by PSU)
PS(Patch Set, minor version upgrade, 11.1.0.6 -> 11.1.0.7)
Major release update(11.1 -> 11.2)
Opatch
opatch query -is_rolling_patch <patch path/> #four modes: all node patch mode/rolling patch mode/minimum downtime patch mode/local patch mode
opatch query <patch path> -all
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME #ensure inventory is not corrupted and see what patches have been applied #su – oracle/grid
/etc/oraInst.loc #inventory_loc(/u01/app/oraInventory)
/u01/app/oraInventory/ContentsXML/inventory.xml #stores oracle software products & their oracle_homes location
Opatch example
srvctl stop home -o $ORACLE_HOME -s /var/tmp/home_stop.txt -n node1 #su – oracle
srvctl stop home -o $ORACLE_HOME -s /var/tmp/home_stop_grid.txt -n node1 #su – grid
$GRID_HOME/crs/install/rootcrs.pl -unlock -crshome $GRID_HOME #su – root, Unlock CRS home
/u01/app/crs/OPatch/opatch apply #su – grid and patch
$GRID_HOME/crs/install/rootcrs.pl -patch #su – root, boot up local HAS that has been patched
crsctl check crs #check status, or crsctl check cluster -all. -n node1 for single node
srvctl start home -o $ORACLE_HOME -s /var/tmp/home_stop_grid.txt -n node1 #su – grid
srvctl start home -o $ORACLE_HOME -s /var/tmp/home_stop.txt -n node1 #su – oracle
now patch node2
###RAC
clusterware

server pools #Policy-managed databases, add/remove instance to cluster automatically

$GRID_HOME/racg/usrco #server side callouts

alter diskgroup diskgroupName online disk diskName #re-activate asm disk in DISK_REPAIR_TIME

select name,value from v$asm_attribute where group_number=3 and name not like 'template%'; #disk group attributes

Add RAC nodes
network/user ids/asmlib and asm module/disks
$ORACLE_HOME/bin/cluvfy stage -post hwos -n london4 -verbose #su - grid
cluvfy stage -pre nodeadd -n london4 -fixup -fixupdir /tmp
$GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={london4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={london4-vip}" #with or without GNS, then execute scripts prompted
cluvfy stage -post nodeadd -n london4 #done for adding node to grid
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={london4}" #add RDBMS software, then execute scripts prompted
Remove RAC nodes

ensure that no database instance or other custom resource type uses that node

ocrconfig -manualbackup

dbca #remove database instance from a node.

srvctl config listener -a #detailed listener configuration

srvctl disable listener -l LISTENER -n london2

srvctl stop listener -l LISTENER -n london2

./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES={london2} -local #on node2(to be deleted), pay attention to -local

$ORACLE_HOME/deinstall/deinstall -local #remove RDBMS home. if RDBMS binaries was installed on shared storage, then $ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=$ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={london1}" #on node1
$GRID_HOME/crs/install/rootcrs.pl -deconfig -force # update the OCR and remove the node from Grid, more on http://goo.gl/CDnQdZ
crsctl delete node -n london2 #su - root, on node1
/u01/app/crs/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=london2" CRS=TRUE -local #on node2, remove grid software
./deinstall -local

./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={london1,london3}" CRS=TRUE #on node1

RAC startup and shutdown

crsctl start crs #su - root, on each node. crsctl start cluster will boot up daemons that are not running. -n node1 to specify one node

srvctl start asm -n node1/node2 #su – oracle

srvctl start database -d devdb #su – oracle

ohasd (oracle restart, will respawn) will start up crs (through reading the contents of /etc/oracle/scls_scr/<hostname>/root/ohasdrun, more on http://goo.gl/VH4F3T). oracle restart can start up the ons/eons daemons: srvctl add/start ons/eons on a single-instance DB

crs(clusterware OR grid infrastructure) manages asm/db/vip/listener/ons<oracle notification services, which send out FAN(fast application notification) events> cluster resource(startup/stop/monitor/failover).

Oracle Cluster Registry

ocrconfig [-local] -showbackup #su - grid. Configuration tool for the OCR, which saves info for cluster resources so that crs can use it to manage them. OLR (Oracle Local Registry) mainly stores info about OHASD locally.

ocrdump /var/tmp/ocrdump.txt #run as root

ocrcheck #run as grid/oracle/root

restore OLR

ocrconfig -local -restore /u01/app/crs/cdata/london1/backup_20091010_211006.olr #init 2 first. check log in $GRID_HOME/log/hostname/client
#if OLR file is lost
touch /u01/app/crs/cdata/london1.olr #init 2 first
chown grid:oinstall london1.olr
ocrconfig -local -restore /u01/app/crs/cdata/london1/backup_20100625_085111.olr

restore OCR

crsctl stop cluster -all -f # stop the Clusterware stack on all nodes
crsctl start crs -excl # start the Clusterware stack in exclusive mode (start the required background processes that make up a local ASM instance). use crsctl disable crs to disable the automatic starting of the Clusterware stack. reboot the node. Then crsctl start crs -excl again.
If the diskgroup containing the voting files was lost # create a new one with exactly the same name and mount it. set the ASM compatibility to 11.2 for the diskgroup (alter diskgroup OCRVOTE set attribute 'compatible.asm'='11.2'). execute /etc/init.d/oracleasm scandisks on the nodes for which you did not create the disk
ocrconfig -restore backup_file_name #log files under $GRID_HOME/log/hostname/client. if errors saying crs is running, then crsctl stop res ora.crsd -init
crsctl stop crs #at this time, ohasdrun's content will be 'stop', although init.ohasd is still running
crsctl start crs
Start the cluster on the remaining nodes. If you have disabled the automatic start of the Clusterware stack in Step 2, re-enable it using crsctl enable crs

ocssd #cluster synchronization server

controls membership of RAC nodes (join/leave); updates the voting disk every second with the node's own status; can work with third-party HA products like IBM HACMP besides Clusterware

GPnP #grid plug and play

gpnptool get #su – grid first, info about public/private NIC/voting disk path can be seen

CTSS #Cluster time synchronization (CTSS)

observer<if ntp>/active mode<if not ntp>

GNS #grid naming service

manages VIP instead of using name server

mDNS, multicast DNS

restore voting disk

if OCR corrupted, then recover OCR first

crsctl stop cluster -all -f # stop the Clusterware stack on all nodes
crsctl start crs -excl # start the Clusterware stack in exclusive mode (start the required background processes that make up a local ASM instance). use crsctl disable crs to disable the automatic starting of the Clusterware stack. reboot the node. Then crsctl start crs -excl again.
If the diskgroup containing the voting files was lost # create a new one with exactly the same name and mount it. set the ASM compatibility to 11.2 for the diskgroup (alter diskgroup OCRVOTE set attribute 'compatible.asm'='11.2'). execute /etc/init.d/oracleasm scandisks on the nodes for which you did not create the disk. compatible.rdbms = 11.2  access_control.enabled = true
crsctl replace votedisk +disk_group_name
crsctl stop crs
crsctl start crs
Start the cluster on the remaining nodes. If you have disabled the automatic start of the Clusterware stack in Step 2, re-enable it using crsctl enable crs

ASM permission

alter diskgroup data set ownership owner = 'orareport' for file '+DATA/PROD/DATAFILE/users.259.679156903';

alter diskgroup data set permission owner = read write, group = read only, other = none for file '+DATA/FIN/datafile/example.279.723575017';

select name, permissions, user_number, usergroup_number from v$asm_file f natural join v$asm_alias t where group_number = 3 and name = 'EXAMPLE.279.723575017';

asm template

select redundancy,stripe,name,primary_region,mirror_region from v$asm_template  where group_number = 3; #stripe has coarse/fine types.variable extent sizes can be used for coarse striping, the first 20,000 extents always equal the allocation unit (AU) size. The next 20,000 extents are 4 times the size of the AU. fine striping is used only for control files, online redo logs, and flashback logs. The stripe size is 128KB. Also by default, eight stripes are created for each file; therefore, the optimum number of disks in a disk group is a multiple of eight.

alter diskgroup data add template allhot attributes (hot);
create tablespace hottbs datafile '+DATA(allhot)' size 10M;
select bytes,type,redundancy,primary_region,mirror_region, hot_reads,hot_writes,cold_reads,cold_writes from v$asm_file where file_number = 284;

add/remove asm disk

For device-mapper-multipath

1.format the underlying block device, usually /dev/sd* on one node

2.With the device partitioned, the administrator can use either partprobe or kpartx to re-read the partition table on the other cluster nodes

3.A restart of the multipath daemon ("service multipathd restart") should show the new partition in /dev/mapper (may use multipath -f first)

4.With the new block devices detected on all nodes, you could use ASMLib to mark the disk as an ASM disk on one node of the cluster

alter diskgroup data drop disk 'DATA11' rebalance power 3;

alter DISKGROUP DGNORM1 add DISK '/dev/rdsk/disk5' name disk5, '/dev/rdsk/disk6' name disk6;
alter diskgroup DG_DEST_DF add FAILGROUP FailgroupB disk '/asmdisks/asmdiskB48';
select group_number,name from v$asm_diskgroup;
select DISK_NUMBER, name, failgroup, group_number from v$asm_disk where group_number=3 order by name; #or order by 2
alter diskgroup DG_DEST_DF add FAILGROUP FailgroupA disk '/asmdisks/asmdiskA28';
alter diskgroup DG_SRC_DF drop disk asmdiskA28;

Then  check V$ASM_OPERATION for information about a time remaining estimate and later physically remove the disk(It is safe to remove the ASM disk physically from the cluster only when the HEADER_STATUS of the V$ASM_DISK view shows “FORMER” for the disk you dropped.)
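
A sketch for watching the rebalance and checking the header before physically pulling the disk (disk name taken from the example above):

SQL> select operation, state, power, est_minutes from v$asm_operation; #no rows returned means the rebalance has completed
SQL> select name, header_status from v$asm_disk where name='ASMDISKA28'; #wait until HEADER_STATUS shows FORMER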

SQL> alter diskgroup data add disk 'ORCL:NEWSAN01', 'ORCL:NEWSAN02', 'ORCL:NEWSAN03'
drop disk 'ORCL:OLDSAN01', 'ORCL:OLDSAN02', 'ORCL:OLDSAN03' rebalance power 11; #add and drop in parallel with one rebalance

kfed read /dev/oracleasm/disks/VOL1 #show ASM header info, su – grid

Startup status

STARTUP FORCE/nomount(Starts the ASM instance but does not mount any disk groups)

mount<or open, mounts all disk groups registered in the ASM_DISKGROUPS initialization parameter>

ASM operations

show parameter ASM_DISKGROUPS;
select name,state from v$asm_diskgroup ; #su – grid

SQL> select * from v$asm_disks;
SQL> select type from V$ASM_FILE group by TYPE #e.g. having count(*) > 300

select db_name,status,instance_name from v$asm_client; #connected clients. when OCR is stored in ASM, the asm instance itself will be its client

alter diskgroup DATA check;  #repair

srvctl start diskgroup -g diskgroupName #rather than using ‘alter diskgroup mount all’

/etc/init.d/oracleasm createdisk VOL4 /dev/loop1 #dd and losetup first
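
A minimal sketch of the "dd and losetup first" step; the backing file path and size are arbitrary examples for testing only:

dd if=/dev/zero of=/asmtest/vol4.img bs=1M count=1024 #create a 1GB backing file
losetup /dev/loop1 /asmtest/vol4.img #bind it to the loop device used by createdisk above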

create diskgroup FILEDG external redundancy disk 'ORCL:VOL4';

SQL> show parameter asm_diskstring; #search path for candidate disks

SQL> create diskgroup DGEXT1 external redundancy disk '/dev/rdsk1/disk1';
SQL> create diskgroup DGNORM1 normal redundancy FAILGROUP controller1 DISK '/dev/rdsk/disk1' name disk1, '/dev/rdsk/disk2' name disk2 FAILGROUP controller2 DISK '/dev/rdsk/disk3' name disk3, '/dev/rdsk/disk4' name disk4; #mirroring based on AU(1M or Exadata's 4M). In normal, one mirror for each extent(2 fail groups at least); in high, two mirrors for each extent(3 fail groups at least). For each extent written to disk, another extent will be written into another failure group to provide redundancy.

asm options

compatible.asm, compatible.rdbms, compatible.advm, au_size, sector_size, disk_repair_time, access_control.enabled, access_control.umask

CRSCTL resource

crsctl status resource -t #crsctl status resource -h for help. crs_stat -t

crsctl status resource -t -init #check status of ohasd stack

crsctl status resource ora.devdb.db -p #get detail info about one resource(for example dependency relationships), resource profile

crsctl add resource TEST.db -type cluster_resource -file TEST.db.config #register the resource in Grid. TEST.db.config is output from -p

crsctl getperm resource TEST.db  #resource permission

crsctl setperm resource TEST.db -o oracle #oracle can startup this resource after this

crsctl delete resource ora.gsd #crsctl start/stop resource <resource name>

/u01/app/11.2.0/grid/bin/scriptagent #used to protect user defined resources, start/stop/check/clean/abort/<relocate>(combined with user action scripts)

SRVCTL scan

srvctl config scan #get scan info

srvctl status database -d devdb ##instance running status

srvctl config scan_listener #srvctl stop scan_listener , srvctl modify scan_listener -p  1526, change scan listener Endpoint(port); srvctl start scan_listener ; srvctl status scan_listener

show parameter local_listener/remote_listener;

#add scan ip

srvctl stop scan_listener
srvctl stop scan #scan vip
srvctl status scan_listener
srvctl status scan
srvctl modify scan -n cluster1.example.com #su – root
srvctl config scan
srvctl modify scan_listener -u #Update SCAN listeners to match the number of SCAN VIPs
srvctl config scan_listener
srvctl start scan
srvctl start scan_listener

SRVCTL service

srvctl add service -d PROD -s REPORTING -r PROD3 -a PROD1 -P BASIC -e SESSION # add a service named reporting to your four-node administrator-managed database that normally uses the third node, but can alternatively run on the first node
srvctl start service -d PROD -s REPORTING -i PROD1
srvctl config service -d PROD -s reporting #check status
srvctl status service -d PROD -s reporting
srvctl relocate service -d PROD -s reporting -i PROD1 -t PROD3 #different parameters for admin managed/policy managed


###Backup and Recovery
datapump

imp/exp scott/tiger file=scott.exp #logical export, client based. run @?/rdbms/admin/catexp.sql first
sql*loader(non-oracle DB) #sqlldr
expdp, impdp #data pump, can not be used when db is read only
expdp/impdp help=y
SQL>create directory backup_dir as '/backup/'; #default is dpump_dir
SQL>grant read,write on directory backup_dir to scott;
expdp scott/tiger dumpfile=scott.dmp directory=backup_dir (tables=scott.emp); #imp_full_database role
expdp \"/ as sysdba \" schemas=scott dumpfile=scott.dmp directory=backup_dir; #full/schemas/tables/tablespaces/transport_tablespaces<only metadata>
expdp system/manager DUMPFILE=expdat.dmp FULL=y LOGFILE=export.log COMPRESSION=ALL
expdp \"/ as sysdba\" schemas=SH ESTIMATE_ONLY=y ESTIMATE=BLOCKS;
expdp sh/sh parfile=exp.par
DIRECTORY=DPUMP_DIR
DUMPFILE=testsh.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"in ('PROMOTIONS')"
QUERY=customers:"where cust_id=1"
create user "SHNEW" identified by "Testpass";
grant CREATE SESSION to SHNEW;
impdp \"/ as sysdba\" dumpfile=sh.dmp directory=backup_dir SCHEMAS=SH REMAP_SCHEMA=SH:SHNEW
impdp \"/ as sysdba\" dumpfile=sh.dmp directory=backup_dir REMAP_SCHEMA=SH:SHNEW TABLES=SH.PRODUCTS TABLE_EXISTS_ACTION=SKIP

undo

all use space in undotbs<id>
before commit, the status is active, and it may use a small portion of expired undo; some unexpired undo will become expired. most of the space used will be the free space in the undo tablespace
after commit, active will become unexpired, and expired won't change
SQL>select status,sum(bytes/1024/1024) from dba_undo_extents where tablespace_name='UNDOTBS1' group by status;
SQL>select tablespace_name,sum(bytes/1024/1024) from dba_free_space where tablespace_name='UNDOTBS1' group by tablespace_name; #undo free space, dba_data_files
undo data is subsequently logged in the redo logs (when you commit, lgwr writes the redo into the redo log; the changes to the data files remain in the buffer cache and can be written out at a later time<dbw0 writes to the datafiles>)

SQL>show parameter undo_retention;
guarantee 900 seconds of  data(may exceed 900).

if auto-extend and the tablespace is used up, it autoextends; if maxsize is reached, then unexpired undo will be overridden (only when noguarantee; SQL will fail if it is guarantee)
if fixed size, undo_retention is ignored, oracle will change maximum time automatically

noguarantee #default, unexpired undo can also be overridden, so read consistency for <undo_retention> seconds may not be guaranteed
SQL>alter tablespace xxx retention guarantee #to make sure long-running queries will succeed. if tablespace is used up, then sql will fail
SQL>select tablespace_name, retention from dba_tablespaces; #guarantee or noguarantee
SQL>select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time, to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention from v$undostat order by end_time; #retention time calculated by the system every 10 minutes; data more than 4 days old are stored in DBA_HIST_UNDOSTAT

rollback #delete from …; rollback

SQL>flashback table emp to before drop;

redo
in contrast to undo, redo is used to recover already committed data
RMAN enable archivelog mode

shutdown immediate;
startup mount
back up your database
RMAN> backup as copy database tag='db_cold_090721';
RMAN> list copy;
Update init params as needed:
log_archive_dest || db_recovery_file_dest
log_archive_dest_n || db_recovery_file_dest
log_archive_format
alter database archivelog #NOTE: apparently, you can only do this from sqlplus, not rman
alter database open
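
Put together, the core SQL*Plus sequence for the steps above (run as sysdba, with the archive destination parameters already set):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list; #should now report Archive Mode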

SQL> select LOG_MODE from v$database;
SQL> archive log list #su – oracle. some configurations, and whether archive log mode is enabled or not; archive log destination

RMAN & cold backup

RMAN Offline backup: This performs an immediate or normal shutdown, followed by a startup mount. This does not require archivelog mode.
RMAN Online backup:For this to be done, the database must be open and in archivelog mode.

startup nomount/restore control files/mount/recover/open
SQL>recover database until cancel; #until change 1234567; until time '2004-04-15:14:33:00', recovery from a hot backup.
alter database open resetlogs #restore using control file/will reset log sequence number to 1<v$log, v$logfile> and throw away all archive log

alter tablespace <name> begin backup #or alter database begin backup
[cp or ocopy (windows)] #no need to backup online redo logs(but you should archive the current redo logs and back those up)
alter tablespace <name> end backup #or alter database end backup, select * from v$backup<scn>

using recovery catalog instead of target database control file

SQL>create tablespace tbs_rman datafile '+DATA' size 200m autoextend on;
SQL>create user rman identified by rman temporary tablespace temp default tablespace tbs_rman quota unlimited on tbs_rman;
SQL>grant recovery_catalog_owner to rman;
RMAN> connect catalog rman/rman@devdb --connect to recovery catalog
RMAN> create catalog tablespace tbs_rman; --create recovery catalog
rman target sys/Te\$tpass@devdb catalog rman/rman@devdb #connect to target database and recovery catalog
RMAN > connect catalog rman/rman@devdb
RMAN > connect target sys/Te\$tpass@devdb
RMAN> register database; #register target database to recovery catalog

Commands frequently used

rman target /
RMAN> show all; #rman configurations
rman> backup current controlfile;
RMAN> backup database plus archivelog;
SQL> alter database backup controlfile to '/u01/app/oracle/control.bak' [reuse]; #binary backup, reuse for override
SQL> alter database backup controlfile to trace as '/u01/app/oracle/control.trace'; #text backup
rman> backup tablespace new_tbs; #then datafile lost and error occurs while querying/create table
SQL>col NAME for a60
SQL> select FILE#, STATUS, NAME from v$datafile;
rman>restore datafile <10>;
rman>recover datafile 10(apply redo log);
SQL>alter tablespace new_tbs online;
RMAN>crosscheck backup; #checks that the RMAN catalog is in sync with the backup files on disk or the media management catalog. Missing backups will be marked as “Expired.”

alter system set max_dump_file_size=1000 scope=both;
show parameter log_archive_duplex_dest; #to avoid a single point of failure
show parameter archive_dest;
select dest_name,status,destination from V$ARCHIVE_DEST;

SQL> show parameter log_archive_format;
SQL> alter system set log_archive_dest_1='location=+DA_SLCM07' scope=both; #After this, you can remove old archive logs on the filesystem (usually under $ORACLE_HOME/dbs). more on http://goo.gl/kNdX91 and http://goo.gl/OcviNx

SQL> alter system set log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'; #use setting of DB_RECOVERY_FILE_DEST
SQL> show parameter db_recovery_file_dest #alter system set db_recovery_file_dest_size=9G scope=both; #startup mount first. <Flash Recovery Area>

RMAN> connect target sys/Testpass;
RMAN> delete noprompt ARCHIVELOG UNTIL TIME 'sysdate-4';
RMAN> DELETE noprompt BACKUP COMPLETED BEFORE 'sysdate-2';

RMAN> list incarnation; #version
RMAN> crosscheck copy;
RMAN> delete expired copy; --remove expired copies; delete archivelog all removes all archived logs
RMAN> resync catalog
RMAN> list backup summary;
RMAN> list backup by file; #list backup sets and their detailed files and pieces
RMAN> report obsolete; # report on backups that are no longer needed because they exceed the retention policy
RMAN> restore database preview summary; #preview the restore and see the summary for the restore
RMAN> list backupset tag=tbs;
RMAN> configure controlfile autobackup on; #backup control file automatically
RMAN> configure retention policy to redundancy 4; #to start deleting backups after four backups have been taken
RMAN> configure retention policy to recovery window of 15 days; # make point-in-time recovery possible up to the last 15 days and to make backups taken more than 14 days ago obsolete
RMAN> host 'echo "start `date`"';

RMAN > validate database;
RMAN > validate backupset 7;
RMAN > validate datafile 10;
RMAN > backup validate database archivelog all;
RMAN > restore database validate;
RMAN > restore archivelog all validate;

SQL> select name,completion_time from v$archived_log;

RMAN> crosscheck archivelog all;
RMAN> sql 'alter system switch logfile'; #forcibly write redo log to disk (only for the current thread<instance>)

SQL> alter system archive log current #This command ensures that all redo logs have been archived and it will wait for the archive to complete (on RAC, best practice; slower than switch logfile). later dbwr will write checkpoint (SGA dirty data) to datafile/control file, and update SCN
RMAN> list archivelog all;
SQL>select dbms_flashback.get_system_change_number from dual; #current SCN. select CURRENT_SCN from v$database;

rman>list failure; #Data recovery advisor, not supported on RAC
rman>advise failure <362> details;
rman>repair failure;

alter system checkpoint #sync redo and datafile, http://www.orafaq.com/wiki/Checkpoint
alter database drop logfile group n #remove redo log

Backup policies

[Figure: backup policies]

DataGuard

SQL> alter database set standby database to maximize protection/availability/performance; #data guard modes
SQL> select force_logging from v$database;
alter database [NO] force logging; #forces all changes to be logged even if nologging is set. NOLOGGING/LOGGING(default)/FORCE LOGGING
ALTER TABLESPACE tablespace_name [no] FORCE LOGGING;
SQL> select tablespace_name,logging,force_logging from dba_tablespaces;
select table_name,logging from user_tables; #object level
alter table tb_a nologging;

RMAN scripts

RMAN> run{ #full backup
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> database plus archivelog delete input #remove archived logs after they are backed up
5> format='/u01/app/oracle/whole_%d_%U'
6> tag='whole_bak';
7> release channel ch1;}

RMAN> run{ #0-level incremental backup(differential by default)
2> allocate channel ch1 device type disk;
3> allocate channel ch2 device type disk;
4> backup as compressed backupset
5> incremental level 0
6> database plus archivelog delete input
7> format='/u01/app/oracle/inc_0_%d_%U'
8> tag='Inc_0';
9> release channel ch1;
10> release channel ch2;}

RMAN> run{ #1-level incremental backup(differential by default)
2> allocate channel ch1 device type disk;
3> allocate channel ch2 device type disk;
4> backup as compressed backupset
5> incremental level 1 database
6> format='/u01/app/oracle/Inc_1_%d_%U'
7> tag='Inc_1';
8> release channel ch1;
9> release channel ch2;}

RMAN> run{ #1-level incremental backup(cumulative)
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> incremental level 1 cumulative database
5> format '/u01/app/oracle/Cum_1_%d_%U'
6> tag='Cul_1';
7> release channel ch1;}

RMAN> run{ #backup tablespaces
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> tablespace EXAMPLE,USERS
5> format='/u01/app/oracle/tbs_%d_%U'
6> tag='tbs';}

RMAN> run{ #backup datafile
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> datafile 3
5> format='/u01/app/oracle/df_%d_%U'
6> tag='df';
7> release channel ch1;}

RMAN> run{ #backup archived logs through SCN
2> allocate channel ch1 device type disk;
3> backup as compressed backupset
4> archivelog from scn 9214472
5> format='/u01/app/oracle/arc_%d_%U'
6> tag='arc';
7> release channel ch1;}

RMAN> run{ #Image Copy backup
2> allocate channel ch1 device type disk;
3> backup as copy datafile 1,4
4> format '/u01/app/oracle/df_2_%d_%U'
5> tag 'copyback';
6> release channel ch1;}

RMAN > run { allocate channel c1 type disk; #Image Copy backup
RMAN > copy datafile 1 to '/u01/back/system.dbf';}
replace script BackupTEST1 { #scripts are stored in the catalog
configure backup optimization on;
configure channel device type disk;
sql 'alter system archive log current';
backup incremental level 2 cumulative database;
release channel d1;
}
run {execute script BackupTEST1;}

RMAN> replace script fullRestoreTEST1 { #recover
allocate channel ch1 type disk;
# Set a new location for logs
set archivelog destination to '/TD70/sandbox/TEST1/arch';
startup nomount;
restore controlfile;
alter database mount;
restore database; #file level restore. restore database/tablespace/datafile/controlfile/archivelog
recover database; #data level recover, applying redo logs, keeping the SCN consistent. recover database/tablespace/datafile
alter database open resetlogs;
release channel ch1;
}
host 'echo "start `date`"';
run {execute script fullRestoreTEST1;}
host 'echo "stop `date`"';
exit

###oracle SQL tips are here http://goo.gl/moTLnJ
Categories: Oracle DB, tips Tags:

oracle database installation

June 30th, 2013 No comments

yum -y install libaio-devel sysstat pdksh.x86_64 compat-libstdc++* unixODBC unixODBC-devel
./jdk-6u38-linux-x64-rpm.bin
cat /proc/sys/net/ipv4/ip_local_port_range #Tcp/ip ephemeral port range(starting point bigger than 9000)
cd database/rpm/
rpm -e cvuqdisk
groupadd oinstall;groupadd dba;groupadd oper #for oracle grid: groupadd asmdba;groupadd asmadmin;groupadd asmoper
usermod -g oinstall oracle #oinstall, Oracle Inventory group; grid, Oracle Restart owner user;oracle, database software owner;Groups => OSDBA(dba), OSASM(asmadmin), OSOPER(oper); Privileges => SYSDBA, SYSASM, SYSOPER
useradd oracle -g oinstall
usermod -a -G dba,oper oracle
useradd grid -g oinstall
usermod -a -G asmadmin,asmdba,dba,oper grid
[root@test-centos database]# id oracle
uid=501(oracle) gid=503(oinstall) groups=503(oinstall),502(dba),504(oper)

[root@test-centos database]# id grid
uid=502(grid) gid=503(oinstall) groups=503(oinstall),502(dba),505(asmdba),506(asmadmin)

CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -ivh cvuqdisk-1.0.7-1.rpm #Cluster Verification Utility, shared disk etc

#check resource limits(for each software installation users: oracle, grid)
ulimit -Sn #nofile, file descriptor setting(soft) >1024
ulimit -Hn #nofile, file descriptor setting(hard) >65536
ulimit -Su #nproc, processes available(soft) >2047
ulimit -Hu #nproc, processes available(hard) >16384
ulimit -Ss #stack, stack setting(soft) >10240KB
ulimit -Hs #stack, stack setting(hard) >32768KB

#/etc/security/limits.conf, then logout/login again
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240

#kernel parameters

http://docs.oracle.com/cd/E11882_01/install.112/e24321/pre_install.htm#autoId50

/sbin/sysctl -a | grep sem #semmsl, semmns, semopm, and semmni, /proc/sys/kernel/sem: the value of the semaphore parameters
/sbin/sysctl -a | grep shm #shmall, shmmax, and shmmni, /proc/sys/kernel/shm{all, max, mni}: shared memory segment sizes
/sbin/sysctl -a | grep file-max #file-max, /proc/sys/fs/file-max: maximum number of file handles
/sbin/sysctl -a | grep ip_local_port_range #/proc/sys/net/ipv4/ip_local_port_range
/sbin/sysctl -a | grep mem_ #rmem_{default, max}, wmem_{default, max}, /proc/sys/net/core/rmem_default
/sbin/sysctl -a | grep aio-max-nr #/proc/sys/fs/aio-max-nr

#/etc/sysctl.conf, sysctl -p
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

#mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oraInventory
chown -R oracle:oinstall /u01/app/oracle
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/oraInventory/
#chmod -R 775 /u01/app/grid
chmod -R 775 /u01/app/oracle
chmod -R 777 /u01/app/oraInventory/
export ORACLE_BASE=/u01/app/oracle/

#edit /etc/udev/rules.d/99-oracle.rules if using new partitions for oracle. Afterwards, run /sbin/partprobe <devicename> to reload the updated block device partition tables (restarts udev processing). Grid ASM needs at least two disks to form a new disk group
KERNEL=="sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sdb2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc2", OWNER="oracle", GROUP="dba", MODE="0660"

PS: More about writing udev rules here http://www.reactivated.net/writing_udev_rules.html

/sbin/partprobe /dev/sdb
/sbin/partprobe /dev/sdc

#install vncviewer
yum grouplist
yum groupinstall “X Window System” -y
yum groupinstall “GNOME Desktop Environment” -y
yum groupinstall “Graphical Internet” -y
yum groupinstall “Graphics” -y
yum install vnc-server
echo 'DESKTOP="GNOME"' > /etc/sysconfig/desktop
sed -i.bak '/VNCSERVERS=/d' /etc/sysconfig/vncservers
echo "VNCSERVERS=\"1:root 2:oracle 3:sysdba\"" >> /etc/sysconfig/vncservers

mkdir -p /root/.vnc
wget --no-proxy http://test/iso/HostConfiguration/vnc/passwd -P /root/.vnc/
wget --no-proxy http://test/iso/HostConfiguration/vnc/xstartup -P /root/.vnc/
chmod 755 ~/.vnc ; chmod 600 ~/.vnc/passwd ; chmod 755 ~/.vnc/xstartup
[root@test-centos .vnc]# cat xstartup
#!/bin/sh

# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
#xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
#twm &
gnome-terminal &
gnome-session &

PS: you can download passwd and xstartup here vnc-config-passwd (wel*1). Or you can set the password by running vncpasswd ~/.vnc/passwd

mkdir -p ~oracle/.vnc
wget --no-proxy http://test/iso/HostConfiguration/vnc/passwd -P ~oracle/.vnc/
wget --no-proxy http://test/iso/HostConfiguration/vnc/xstartup -P ~oracle/.vnc/
chown oracle:oinstall -R ~oracle/.vnc ; chmod 755 ~oracle/.vnc ; chmod 600 ~oracle/.vnc/passwd ; chmod 755 ~oracle/.vnc/xstartup
chkconfig --level 345 vncserver on
chkconfig --list | grep vncserver
service vncserver start

PS: You can change vncserver's resolution by editing /usr/bin/vncserver: change the default $geometry = "1024x768" to any one you like, for example $geometry = "1600x900". You can also control each user's VNC resolution by adding a line like VNCSERVERARGS[1]="-geometry 1600x900" to /etc/sysconfig/vncservers.
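For example, a minimal /etc/sysconfig/vncservers along those lines might look like this (display numbers and geometries are illustrative):

VNCSERVERS="1:root 2:oracle"
VNCSERVERARGS[1]="-geometry 1600x900"
VNCSERVERARGS[2]="-geometry 1024x768"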

#oracle env variables http://docs.oracle.com/cd/E11882_01/install.112/e24321/pre_install.htm#autoId68
[root@test-centos ~]# su – oracle

-bash-3.2$ cat .bash_profile
export ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_6"
export ORACLE_SID="orcl6"

##begin installation
##first install oracle grid infrastructure & ASM http://docs.oracle.com/cd/E11882_01/install.112/e24321/oraclerestart.htm#BABCHCBG
##log file under /u01/app/oraInventory/logs/installActions*
chmod -R 777 database*
su – grid
cd /backup/downloads/database-grid/grid
./runInstaller
Specify ASM Password #Use same passwords
Privileged OS Groups #OSDBA -> dba; OSOPER -> oper; OSASM -> asmadmin
oracle base #/u01/app/grid
Software Location #/u01/app/grid/product/11.2.0/grid
Inventory Directory #/u01/app/oraInventory
Save response file #/home/grid/grid.rsp
#Oracle Grid Infrastructure home => /u01/app/grid/product/11.2.0/grid/

/u01/app/grid/product/11.2.0/grid/bin/crsctl check has #verify that the Oracle High Availability Service is installed properly
ps aux | grep -i ohasd #ohasd is a daemon installed with Oracle Grid Infrastructure that starts software services, such as Oracle ASM
#test the Oracle ASM installation
#ASM instance SID (system identifier) => cat /etc/oratab (also controls whether the instance starts at OS boot)
export ORACLE_SID=+ASM
export ORACLE_HOME=/u01/app/grid/product/11.2.0/grid
$ORACLE_HOME/bin/asmcmd lsdg
#If the Oracle ASM instance is not running, you can start the instance with the following
#$ORACLE_HOME/bin/asmcmd startup
#or through sqlplus to startup asm instance
$ORACLE_HOME/bin/sqlplus /nolog
SQL> CONNECT SYS as SYSASM
Enter password: SYS_password
SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
SQL> STARTUP

##log under /u01/app/oraInventory/logs/installActions*
##/u01/app/oracle/cfgtoollogs/dbca/orcl6/
export ORACLE_SID=+ASM
export ORACLE_HOME=/u01/app/grid/product/11.2.0/grid
$ORACLE_HOME/bin/sqlplus /nolog
SQL> CONNECT SYS as SYSASM
SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
#Enhanced security controls such as database auditing options, and password policy and expiration settings
#Database Security Options
#To enable the security configuration, after db installation:
dbca -silent -configureDatabase -sourceDB SID -disableSecurityConfiguration NONE -enableSecurityConfiguration true
#To disable the security configuration, after db installation:
dbca -silent -configureDatabase -sourceDB SID -disableSecurityConfiguration [ALL|PASSWORD_PROFILE] -enableSecurityConfiguration false
su – oracle
cd /backup/downloads/database
./runInstaller
oracle base => /u01/app/oracle
oracle home directory => /u01/app/oracle/product/11.2.0/dbhome_6
global database name => orcl6.andy
Oracle service identifier => orcl6
Character sets => AL32UTF8
Security => choose "Assert all new security settings"
choose "create database with sample schemas"
Do not enable automated backups

Server Parameter File name +DATA/orcl6/spfileorcl6.ora
Enterprise Manager Database Control URL: https://10.172.12.177:1158/em

#recompiles all PL/SQL modules that might be in an invalid state(including packages, procedures, and types)
su – oracle
$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql

#Generating the Client Static Library
su – oracle
$ORACLE_HOME/bin/genclntst

#create a fast recovery area disk group(large enough to hold all of your data files and control files, the online redo logs, and the archived redo log files: Multiple databases can use the same fast recovery area)
vncviewer
GRID_HOME/bin/asmca
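Alternatively, a disk group can be created from SQL*Plus on the ASM instance instead of through asmca. A minimal sketch, run as the grid user (the group name and disk path are illustrative; the disk should be one of the partitions stamped for grid via the udev rules above):

export ORACLE_SID=+ASM; export ORACLE_HOME=/u01/app/grid/product/11.2.0/grid
$ORACLE_HOME/bin/sqlplus / as sysasm
SQL> CREATE DISKGROUP fra EXTERNAL REDUNDANCY DISK '/dev/sdc1';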

#Configuring Oracle Net Services(listener.ora/tnsnames.ora)
su – oracle
/u01/app/grid/product/11.2.0/grid/bin/tnsping orcl6

#install windows oracle 11g client

PS:

1.More about oracle rac installation on linux here http://www.oracleonlinux.cn/?s=%E4%B8%80%E6%AD%A5%E4%B8%80%E6%AD%A5%E5%9C%A8Linux%E4%B8%8A%E5%AE%89%E8%A3%85 (it is for 32-bit; change, for example, /lib/security/pam_limits.so to /lib64/security/pam_limits.so)

2.PDF files of the above link: oracle_on_linux.zip

Categories: Oracle DB Tags:

How HA is achieved in Oracle Exadata

November 27th, 2012 No comments
  1. Each Exadata Database Machine has completely redundant hardware including redundant InfiniBand networking, redundant Power Distribution Units (PDUs), redundant power supplies, and redundant database and storage servers.
  2. Oracle RAC protects against database server failure.
  3. ASM provides data mirroring to protect against disk or storage server failures.
  4. Oracle RMAN provides extremely fast and efficient backups to disk or tape.
  5. Oracle’s Flashback technology allows backing out user errors at the database, table or even row level.
  6. Using Oracle Data Guard, a second Exadata Database Machine can be configured to maintain a real-time copy of the database at a remote site to provide full protection against site failures and disasters.

change mysql proxy agent max-open-files

September 24th, 2012 No comments

Change Step details
==============
1. Stop both the mysql service and mysql proxy agent on the SLAVE mysql host using the startup script command:
sudo /etc/init.d/mysql stop
2. Change the user context to mysql user with the command:
sudo su – mysql
3. Update the mysql proxy agent configuration file “/apps/mysql/agent/etc/mysql-monitor-agent.ini”, add a new line like below at the end of the file:
max-open-files = 8192
4. Under ROOT user context, update the mysql startup script file "/etc/init.d/mysql", replacing all the existing content with the content below:
#!/bin/ksh
# chkconfig: 345 90 02
# description: MySQL startup
/apps/mysql/install/support-files/mysql.server $*
/apps/mysql/agent/etc/init.d/mysql-monitor-agent $*

5. Under ROOT user context, execute the command:
chkconfig --add mysql
6. Start both the mysql service and mysql proxy agent on the SLAVE mysql host using the startup script command:
sudo /etc/init.d/mysql start
7. Repeat steps #1 to #6 on the other (MASTER) mysql host

Categories: Databases, MySQL DB Tags:

resolved – semget failed with status 28 failed oracle database starting up

August 2nd, 2012 No comments

Today we met a problem with semaphores and were unable to start Oracle instances. Here's the error message:

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates

So it turns out the max number of semaphore arrays had been reached:
#check limits of all IPC
root@doxer# ipcs -al

—— Shared Memory Limits ——–
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 17179869184
min seg size (bytes) = 1

—— Semaphore Limits ——–
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 1024000
max ops per semop call = 100
semaphore max value = 32767

—— Messages: Limits ——–
max queues system wide = 16
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536

#check summary of semaphores
root@doxer# ipcs -su

—— Semaphore Status ——–
used arrays = 127
allocated semaphores = 16890

To resolve this, we need to increase the max number of semaphore arrays:

root@doxer# cat /proc/sys/kernel/sem
250 1024000 100 128
                 ^---needs to be increased
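A minimal sketch of one way to raise that limit (the fourth field, semmni; the value 256 here is illustrative):

root@doxer# echo "kernel.sem = 250 1024000 100 256" >> /etc/sysctl.conf
root@doxer# /sbin/sysctl -p
root@doxer# cat /proc/sys/kernel/sem
250 1024000 100 256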

PS:

Here's an example with toilets that describes the differences between a mutex and a semaphore, LOL: http://koti.mbnet.fi/niclasw/MutexSemaphore.html

Categories: Kernel, Oracle DB Tags:

oracle RMAN backups and hot backup mode

July 28th, 2012 No comments

In one sentence: to back up Oracle at the OS level (with BCV, for example), the database should first be put into hot backup mode, whereas RMAN backups can be performed while the database is online without it.
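A minimal sketch of the hot backup sequence (ALTER DATABASE BEGIN BACKUP is the 10g+ database-wide form; older releases use ALTER TABLESPACE ... BEGIN BACKUP per tablespace):

SQL> ALTER DATABASE BEGIN BACKUP;
-- take the OS-level copy (BCV split, snapshot, etc.) of the datafiles here
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;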

Also, Oracle GoldenGate can be used to replicate databases between heterogeneous systems, for example from Oracle to MySQL or SQL Server.

oracle golden gate documentation

July 28th, 2012 No comments

Here are some excerpts from the Oracle documentation about Oracle GoldenGate:

Robust Modular Architecture

The Oracle GoldenGate software architecture is comprised of three primary components:
Capture, Trail Files, and Delivery. This modular approach allows each component to perform
its tasks independently of the others, accelerating data replication and ensuring data integrity.
Figure 1: Oracle GoldenGate leverages a component-based architecture to optimize real-time
information access and availability.

  • Capture

Oracle GoldenGate’s Capture module resides on the source database and looks for new
transactional activity. The Capture module reads the result of insert, update, and delete
operations by directly accessing the database transaction (redo) logs, and then immediately
captures new and changed data for distribution.
The Capture module only moves committed transactions—filtering out intermediate activities
and rolled-back operations—which not only reduces infrastructure load but also eliminates
potential data inconsistencies. Further optimization is achieved through transaction grouping
and optional compression features.
Oracle GoldenGate 11g can also capture messages from JMS messaging systems to deliver to
heterogeneous databases in real time for scalable and reliable data distribution.

  • Trail Files

Oracle GoldenGate’s Trail Files contain the database operations for the changed data in a
transportable, platform-independent data format. Trail Files are a critical component within
Oracle GoldenGate’s optimized queuing mechanism. They reside on the source and/or target
server but exist outside of the database to ensure heterogeneity, improved reliability, and
minimal data loss. This architecture minimizes impact to the source system because no
additional tables or queries to the database are required to support the data capture process.
The Capture module reads once, and then immediately moves the captured data to the external
Trail File for delivery to the target(s).
In the event of an outage at the source and/or target, the Trail Files contain the most-recent
data up to the point of the outage, and the data is applied once the systems are online again.

  • Delivery

Oracle GoldenGate’s Delivery module takes the changed data from the latest Trail File and
applies it to the target database using native SQL for the appropriate relational database
management system. Delivery can be made to any open database connectivity–compliant
database. The Delivery module applies each transaction in the same order as it was committed
and within the same transactional context as at the source, enabling consistency and referential
integrity at the target. To enhance IT flexibility, captured data can also be delivered to a Java
Message Service destination or as a flat file using Oracle GoldenGate Application Adapters.

For full documentation, you can refer to the following pdf file: http://www.oracle.com/us/products/middleware/data-integration/goldengate11g-ds-168062.pdf?ssSourceSiteId=otnen

ORA-00600 internal error caused by /tmp swap full

June 22nd, 2012 No comments

Today we encountered a problem where Oracle stopped functioning. After some checking, it turned out the error was caused by /tmp running out of space. This was also confirmed by the OS logs:

Jun 20 17:43:59 tmpfs: [ID 518458 kern.warning] WARNING: /tmp: File system full, swap space limit exceeded

Oracle uses /tmp to compile PL/SQL code, so if there is no space it is unable to compile or execute, which causes functions, procedures, packages, and triggers to time out. The same is described in Oracle note ID 1389623.1.

So in order to prevent further occurrences of this error, we should increase /tmp on the system to at least 4GB.

There is an Oracle parameter to change the default location of these temporary files (_ncomp_shared_objects_dir), but it is not a dynamic parameter. And while there is a way to resize a tmpfs filesystem online, it is somewhat risky. So the best approach is to first bring down the Oracle DB on this host, then modify /etc/vfstab, and then reboot the whole system. This protects the data against corruption or loss, at the cost of some outage time.
So finally, here are the steps:
Amend the line in /etc/vfstab from:

swap - /tmp tmpfs - yes size=512m

To:

swap - /tmp tmpfs - yes size=4096m

Reboot the machine and bring up the Oracle DB.

using oracle materialized view with one hour refresh interval to reduce high concurrency

June 8th, 2012 No comments

If your Oracle DB is under very high concurrency and you find that the top SQLs are some views, there's a quick way to mitigate this: use an Oracle materialized view. Consider setting the refresh interval to one hour, which means the view will refresh every hour. After the change goes live, performance should return to normal.
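A minimal sketch of such a materialized view (the table and column names are made up for illustration; SYSDATE + 1/24 means one hour):

SQL> CREATE MATERIALIZED VIEW mv_order_summary
     REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1/24
     AS SELECT customer_id, COUNT(*) order_cnt FROM orders GROUP BY customer_id;

Then point the hot queries at mv_order_summary instead of the original view.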

For more information about oracle materialized view, you can visit http://en.wikipedia.org/wiki/Materialized_view

Here's an image showing high Oracle concurrency:

oracle high concurrency

Oracle views – Static Data Dictionary Views & Dynamic Performance Views

November 5th, 2011 1 comment

Views are customized presentations of data in one or more tables or other views. You can think of them as stored queries. Views do not actually contain data, but instead derive their data from the tables upon which they are based. These tables are referred to as the base tables of the view.

Similar to tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on a view actually affect the base tables of the view. Views can provide an additional level of security by restricting access to a predetermined set of rows and columns of a table. They can also hide data complexity and store complex queries.

Many important views are in the SYS schema. There are two types: static data dictionary views and dynamic performance views. Complete descriptions of the views in the SYS schema are in Oracle Database Reference.

Static Data Dictionary Views

The data dictionary views are called static views because they change infrequently, only when a change is made to the data dictionary. Examples of data dictionary changes include creating a new table or granting a privilege to a user.

Many data dictionary tables have three corresponding views:

  • A DBA_ view displays all relevant information in the entire database. DBA_ views are intended only for administrators. An example of a DBA_ view is DBA_TABLESPACES, which contains one row for each tablespace in the database.
  • An ALL_ view displays all the information accessible to the current user, including information from the schema of the current user, and information from objects in other schemas, if the current user has access to those objects through privileges or roles. An example of an ALL_ view is ALL_TABLES, which contains one row for every table for which the user has object privileges.
  • A USER_ view displays all the information from the schema of the current user. No special privileges are required to query these views. An example of a USER_ view is USER_TABLES, which contains one row for every table owned by the user.

The columns in the DBA_, ALL_, and USER_ views are usually nearly identical.
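For example, assuming a user with the appropriate privileges (the HR owner in the second query is illustrative):

SQL> SELECT tablespace_name FROM DBA_TABLESPACES;
SQL> SELECT table_name FROM ALL_TABLES WHERE owner = 'HR';
SQL> SELECT table_name FROM USER_TABLES;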

Dynamic Performance Views

Dynamic performance views monitor ongoing database activity. They are available only to administrators. The names of dynamic performance views start with the characters V$. For this reason, these views are often referred to as V$ views.

An example of a V$ view is V$SGA, which returns the current sizes of various System Global Area (SGA) memory components.
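For example, from an administrator session:

SQL> SELECT * FROM V$SGA;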

Categories: Databases Tags:

Partitioned Tables and Indexes & Compressed Tables of oracle database

November 5th, 2011 No comments
1.Partitioned Tables and Indexes

You can partition tables and indexes. Partitioning helps to support very large tables and indexes by enabling you to divide the tables and indexes into smaller and more manageable pieces called partitions. SQL queries and DML statements do not have to be modified to access partitioned tables and indexes. Partitioning is transparent to the application.

After partitions are defined, certain operations become more efficient. For example, for some queries, the database can generate query results by accessing only a subset of partitions, rather than the entire table. This technique (called partition pruning) can provide order-of-magnitude gains in improved performance. In addition, data management operations can take place at the partition level, rather than on the entire table. This results in reduced times for operations such as data loads; index creation and rebuilding; and backup and recovery.

Each partition can be stored in its own tablespace, independent of other partitions. Because different tablespaces can be on different disks, this provides a table structure that can be better tuned for availability and performance. Storing partitions in different tablespaces on separate disks can also optimize available storage usage, because frequently accessed data can be placed on high-performance disks, and infrequently retrieved data can be placed on less expensive storage.

Partitioning is useful for many types of applications that manage large volumes of data. Online transaction processing (OLTP) systems often benefit from improvements in manageability and availability, while data warehousing systems benefit from increased performance and manageability.

As with tables, you can partition an index. In most situations, it is useful to partition an index when the associated table is partitioned, and to partition the index using the same partitioning scheme as the table. (For example, if the table is range-partitioned by sales date, then you create an index on sales date and partition the index using the same ranges as the table partitions.) This is known as a local partitioned index. However, you do not have to partition an index using the same partitioning scheme as its table. You can also create a nonpartitioned, or global, index on a partitioned table.
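A minimal sketch of a range-partitioned table with a matching local index (all names here are illustrative):

SQL> CREATE TABLE sales (
       sale_id   NUMBER,
       sale_date DATE
     )
     PARTITION BY RANGE (sale_date) (
       PARTITION p2011 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD')),
       PARTITION pmax  VALUES LESS THAN (MAXVALUE)
     );
SQL> CREATE INDEX sales_date_ix ON sales (sale_date) LOCAL;

A query such as SELECT * FROM sales WHERE sale_date < TO_DATE('2012-01-01','YYYY-MM-DD') can then be answered from the p2011 partition alone (partition pruning).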

2.Compressed Tables

Table Compression is suitable for both OLTP applications and data warehousing applications. Compressed tables require less disk storage and result in improved query performance due to reduced I/O and buffer cache requirements. Compression is transparent to applications and incurs minimal overhead during bulk loading or regular DML operations such as INSERT, UPDATE or DELETE.
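A minimal sketch (table names are illustrative; COMPRESS FOR OLTP is the 11g syntax for OLTP workloads, while plain COMPRESS targets bulk loads):

SQL> CREATE TABLE orders_hist COMPRESS AS SELECT * FROM orders;  -- basic compression at creation
SQL> ALTER TABLE orders MOVE COMPRESS FOR OLTP;                  -- compress an existing table's blocks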

Backup by oracle on client xxx using policy ORACLE_PROD_DAILY_DB

July 6th, 2011 No comments

Sometimes you get an error that a NetBackup backup has not completed successfully and, strangely, no daily backup logs were created at that time. Actually, at that same time, the system log will show that xinetd is rejecting the connection:

xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4

Problem

GENERAL ERROR: Network errors during Oracle RMAN backups when using xinetd resulting in status 6, status 23, status 25, status 41, or status 58

Error

ORA-27028: skgfqcre: sbtbackup returned error

Solution

Overview: The xinetd process receives inbound socket connections from remote hosts and may reject the connections without starting the expected NetBackup process if there have recently been too many connections from the remote host.

Troubleshooting:

The problem may have many different NetBackup symptoms depending on the connecting process, target process, and connection method.  But in all cases, the destination process will not be started and will not create any debug log entries at the time of the expected connection.  At that same time, the system log will show that xinetd is rejecting the connection, e.g.

xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4

Log Files:

This bpbrm log shows a successful connection to bpcd followed by a forwarding socket request; both reach the client host and receive a file descriptor (fd). However, xinetd closed the forwarding socket immediately, as indicated by errno 232.

08:20:38.890 [5871] <2> logconnections: BPCD CONNECT FROM 1.2.3.4.54105 TO 2.3.4.5.13724
...snip...
08:20:38.891 [5871] <2> vnet_connect_to_vnetd_extra: vnet_vnetd.c.179: msg: VNETD CONNECT FROM 1.2.3.4.54106 TO 2.3.4.5.13724 fd = 10
08:20:38.892 [5871] <2> vnet_pop_byte: vnet.c.184: errno: 232 0x000000e8
08:20:38.892 [5871] <2> vnet_pop_byte: vnet.c.185: Function failed: 43 0x0000002b
08:20:38.893 [5871] <2> vnet_pop_string: vnet.c.266: Function failed: 43 0x0000002b
08:20:38.893 [5871] <2> vnet_pop_signed: vnet.c.310: Function failed: 43 0x0000002b
08:20:38.893 [5871] <2> version_connect: vnet_vnetd.c.1812: Function failed: 43 0x0000002b
...snipped three retries from bpbrm to vnetd, all of them closed by xinetd...
08:20:41.943 [5871] <2> bpcr_vnetd_connect_forward_socket_begin: nb_vnetd_connect(2.3.4.5) failed: 25
08:20:41.943 [5871] <2> local_bpcr_connect: bpcr_vnetd_connect_forward_socket_begin failed: 25
08:20:41.943 [5871] <2> ConnectToBPCD: bpcd_connect_and_verify(myhost, myhost) failed: 25
08:20:41.944 [5871] <16> bpbrm start_bpcd_stat: cannot connect to myhost, Connection reset by peer (232)

The bpcd log shows a failure waiting for bpbrm to send the forwarding socket information.

08:20:38.912 [24709] <2> logconnections: BPCD ACCEPT FROM 1.2.3.4.54105 TO 2.3.4.5.13724
...snip...
08:20:39.002 [24709] <2> bpcd main: output socket port number = 1
08:20:41.942 [24709] <2> get_long: (2) premature end of file (byte 1)
08:20:41.942 [24709] <2> get_vnetd_forward_socket: get_string ipc_string failed: 5
08:20:41.942 [24709] <16> bpcd main: get_vnetd_forward_socket failed: 23

The client vnetd log does not show any activity at the time of the forwarding socket requests.

08:20:38.889 [24709] <2> launch_command: vnetd.c.2125: path: /usr/openv/netbackup/bin/bpcd
...no entries during this time span...
08:20:42.227 [24672] <2> vnet_send_network_socket: vnet_vnetd.c.1535: hash_str2: 7f9786a02bd04ad3f585c06892bd74c1

The system messages log shows that xinetd rejected the 4 vnetd connections.

Jul 16 08:20:38 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4
Jul 16 08:20:39 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4
Jul 16 08:20:40 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4
Jul 16 08:20:41 myhost xinetd[2926]: FAIL: vnetd per_source_limit from=1.2.3.4

Another example: a bpbrm connection failure while attempting to reach bpcd to perform a comm file update.

<16> bpbrm send_info_via_progress_file: cannot connect to <clientname>, Operation now in progress (150)

Resolution:

This situation can be resolved by adjusting the xinetd configuration to permit sufficient per_source connections for NetBackup operations.  On some operating system platforms, the default per_source value can be as low as 1-10.  A NetBackup master/media server may rapidly make more than this many connections to the client host, depending on the type of backup and connection method; up to 4 sockets per concurrent Automatic backup job and 9 sockets per concurrent Application backup job.

Review the xinetd configuration documentation and make appropriate changes for the platform and expected number of simultaneous NetBackup operations.  After making the changes, send a SIGHUP to the xinetd process to force a read of the updated configuration.
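For example, on Linux (pkill is standard on most distributions):

pkill -HUP xinetd #force xinetd to re-read its configuration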

If appropriate, the default xinetd settings for all services can be changed in the /etc/xinetd.conf file, e.g.

defaults
{
per_source = <new connection limit or UNLIMITED>
}

Alternatively, the xinetd configuration settings for specific services can be changed in service-specific files, e.g. /etc/xinetd.d/vnetd could contain the following setting.

service vnetd
{
per_source = <new connection limit or UNLIMITED>
}

The NetBackup services that are most likely to be affected are vnetd and bpcd.

Plan to minimally set the number of per_source connections to:
The number of channels or concurrent backup streams * 9 (using the higher level required for application backups) = the per_source minimum setting.

Example: Given a client running Oracle RMAN Application type backups (requiring up to 9 concurrent connections), with 8 channels, the minimum setting would be 72 for per_source.

Legacy ID

300810

From: http://www.symantec.com/business/support/index?page=content&id=TECH58196

Categories: Databases Tags:

Mysql tricks – fix replication error

April 17th, 2011 No comments

Fixing MySQL service start problem: "manager of pid file could not be found"

1. Check for any invalid entry in /apps/mysql/etc/my.cnf, correct it, and restart the service. If unsuccessful, follow step #2 and below.

2. Delete all binary log, error log files under the directories below and start the service:

/apps/mysql/log
/apps/mysql/log.replicate
/apps/mysql/log.relay

3. If step #2 was successful and was required to fix service startup, reset the replication state using the commands:

a) shell> mysql -h host -u user -p
b) mysql> reset master;
c) mysql> stop slave;
d) mysql> reset slave;
e) mysql> start slave;

Note: Step#3 needs to be executed in both the servers in the replication pair

Get mysql version info and server variables from server

Shell> mysqladmin version -h host -u user -p

Shell> mysqladmin variables -h host -u user -p

Fix replication error (if slave is NOT beyond the current binary log file master is using)

This is to skip the current transaction that the replication thread is hung on and continue (at SLAVE server). You can issue the following command in mysql to skip a transaction:

mysql> STOP SLAVE;

mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;

NOTE:

1.Use the value 1 for any SQL statement that does not use AUTO_INCREMENT or LAST_INSERT_ID(), otherwise you will need to use the value 2.

2.Statements that use AUTO_INCREMENT or LAST_INSERT_ID() take up 2 events in the binary log.

mysql> START SLAVE;

mysql> SHOW SLAVE STATUS\G

Continue repeating the steps until the replication error is cleared.

Fix replication error (if slave is beyond the current binary log file master is using)

You need to back up the databases from the master server and restore them to the slave server to resync and reinitiate replication. Follow the steps below to do that:

Step#1: Stop the slave thread at the slave server:

mysql> STOP SLAVE;

Step#2: Impose a global read lock (in Master Server). Start the command line client and flush all tables and block write statements by executing the FLUSH TABLES WITH READ LOCK statement:

mysql> FLUSH TABLES WITH READ LOCK;

Note: DO NOT close the connection console from where the command is executed, otherwise the lock will be released

Step#3: Back up the databases from the master server by executing the command below:

shell> mysqldump -h host -u user -p --all-databases --opt --routines --master-data --single-transaction > dbdump.db

Step#4: Restore the databases at the slave server with the backup taken in step #3, using the command below:

Shell> mysql -h host -u user -p < dbdump.db

Step#5: Start the slave thread at the slave server using the command:

mysql> START SLAVE;

mysql> SHOW SLAVE STATUS\G

Step#6: Release the global read lock at the master server (imposed in step #2) using the command below, at the same console used earlier:

mysql> UNLOCK TABLES;

mysql> exit;

Reset replication

If the data are already synced between the master server and slave server and you only need to reset replication status to its initial state, execute the commands below in BOTH master and slave servers:

shell> mysql -h host -u user -p

mysql> reset master;

mysql> stop slave;

mysql> reset slave;

mysql> start slave;

Change replication configuration setting

If you change a replication configuration setting in the my.cnf configuration file and the mysql service is not picking it up after a restart, follow the steps below:

1. Stop mysql server

2. Delete the file: /apps/mysql/log.replicate/master.info

3. Update mysql configuration file with the desired replication configuration setting (if not already done).

4. Start the mysql service

NOTE:

Replication configuration setting can also be changed in run-time through mysql command:

mysql> CHANGE MASTER TO
    -> MASTER_HOST='master_host_name',
    -> MASTER_USER='replication_user_name',
    -> MASTER_PASSWORD='replication_password',
    -> MASTER_LOG_FILE='recorded_log_file_name',
    -> MASTER_LOG_POS=recorded_log_position;

Point in time restore

mysql -u root -pmypassword

Positioning example:

shell> mysqlbinlog --stop-position="368312" /var/log/mysql/bin.123456 | mysql -u root -p
shell> mysqlbinlog --start-position="368315" /var/log/mysql/bin.123456 | mysql -u root -p

NOTE:

In this example, I am assuming that the daemon was started with binary logging (log-bin) enabled. You can also do this using the MySQL admin tool. The log files are usually stored in the data directory. This example uses stop/start positions to specify where log replay should stop and resume; you can also use dates. To know which log file and position to use, run SHOW MASTER STATUS.

Examining binary log

mysqlbinlog --start-date="2008-05-01 13:49:00" --stop-date="2008-05-01 13:51:59" binary_logfile.000004 > C:\drop\mysql_restore.sql

NOTE:

This will produce a file and give dates and positions on what changes occurred so you can more accurately specify what stop and start times to use.

Categories: Databases Tags:

mysql most frequently used sql statements: login, set password, create/import/dump database, add user, repair table, etc.

October 13th, 2010 No comments

Log in:
mysql -h localhost -u root -pyourpassword
set password:
set password for 'root'@'localhost' = password('123456');
create database:
CREATE DATABASE `databasename` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

add mysql account
CREATE USER 'user'@'%' IDENTIFIED BY 'yourpassword';
GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'yourpassword' WITH GRANT OPTION MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0 ;
flush privileges;
dump database
mysqldump -u root -pyourpassword --no-data --default-character-set=utf8 test > aaa.sql #--no-data: dump just the table definitions without data rows
import
mysql -h localhost -u root -pyourpassword dbname</root/db.sql
repair table
repair table tablename;
#in detail:http://dev.mysql.com/doc/refman/5.1/en/table-maintenance-sql.html

Database Administration Statements
set,show,account management
#in detail:http://dev.mysql.com/doc/refman/5.1/en/sql-syntax-server-administration.html

All sql statements
in detail:http://dev.mysql.com/doc/refman/5.1/en/sql-syntax.html

view amounts of active mysql connections
./mysqladmin -uroot -p -hlocalhost processlist #Or use 'show processlist' in mysql>
./mysqladmin -uroot -p -hlocalhost status #Threads is the number of connections
executables under mysql/bin
In detail:http://dev.mysql.com/doc/refman/5.1/en/programs.html
innochecksum                mysql_find_rows
msql2mysql                  mysql_fix_extensions
myisamchk                   mysql_fix_privilege_tables
myisam_ftdump               mysqlhotcopy
myisamlog                   mysqlimport
myisampack                  mysql_install_db
my_print_defaults           mysql_secure_installation
mysql                       mysql_setpermission
mysqlaccess                 mysqlshow
mysqladmin                  mysqlslap
mysqlbinlog                 mysqltest
mysqlbug                    mysql_tzinfo_to_sql
mysqlcheck                  mysql_upgrade
mysql_client_test           mysql_waitpid
mysql_config                mysql_zap
mysql_convert_table_format  perror
mysqld_multi                replace
mysqld_safe                 resolveip
mysqldump                   resolve_stack_dump
mysqldumpslow

Solved: mysql not accessible from remote hosts

August 23rd, 2010 No comments

1. First, find your mysql my.cnf configuration file: mysql --help | grep "Default options are read" -1 (#-1 shows 1 line before and 1 line after the matching line)
Default options are read from the following files in the given order:
/etc/mysql/my.cnf ~/.my.cnf /usr/etc/my.cnf
2. Open the file in vim (47gg jumps to line 47, where bind-address sits in this case) and comment out the bind-address line, as shown below.
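After the edit, the relevant line of my.cnf should look roughly like this (the exact line number varies by distribution):

#bind-address = 127.0.0.1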

3. Log in to mysql and execute the following SQL statements:
CREATE USER 'root'@'%';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION MAX_QUERIES_PER_HOUR 0
MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0 ;
SET PASSWORD FOR 'root'@'%' = PASSWORD('******'); #password

Categories: Databases Tags: