Sunday, October 4, 2009

Oracle E-Business Suite R12 12.0.0 - Database Upgrade to 11.1.0.7

Oracle E-Business Suite R12 12.0.4 and higher is supported with the Oracle 11g database.

Please see Doc ID 735276.1 for more information.


Brief Overview
--------------
Current environment - Red Hat Linux ES 4 and R12 base release 12.0.0

I upgraded the 12.0.0 release to 12.0.4 using patch 6435000. It took about 4 hours.

The following patches are mandatory:
1. 6928236
2. 6400501
3. 12.0.4 Consolidated Patch 1 - 7207440
4. Time zone patch 5632264
5. Oracle Applications DBA Release Update Pack 4 (R12.AD.A.DELTA.4) patch
6. 6435000
7. 7486407
8. 7684818

Steps
-----
1. Apply patches

Apply the patches in the following order:
Oracle Applications DBA Release Update Pack 4 (R12.AD.A.DELTA.4) patch
6435000
6928236
6400501
12.0.4 Consolidated Patch 1 7207440
7486407
7684818
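As a sketch, the sequence above can be driven by a small shell loop. The $PATCH_TOP staging directory is an assumption, and adpatch itself is left commented out because it is normally run interactively (or with a defaults file) after sourcing the apps environment:

```shell
# Sketch only: apply the R12 patches in the order listed above.
# Assumes each patch has already been unzipped under $PATCH_TOP.
PATCH_TOP=/u01/patches          # assumption: patch staging directory

for p in 6435000 6928236 6400501 7207440 7486407 7684818; do
  echo "Applying patch $p from $PATCH_TOP/$p"
  # (cd "$PATCH_TOP/$p" && adpatch)   # answer the adpatch prompts, or use a defaults file
done
```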

2. I am not changing the port or any other DB-specific configuration
3. Install Oracle Database 11g 11.1.0.6 into a new, separate Oracle home
4. Download patch 6890831, the 11.1.0.7 patch set
5. Apply patch 6890831 to the new 11g Oracle home to upgrade it to 11.1.0.7
6. Verify that the new home now reports 11.1.0.7
7. Create the nls/data/9idata directory under the new 11g home and point ORA_NLS10 to it
8. Shut down the apps tiers, the listeners, and the database
9. Take a cold backup of the current (10.2.0.2) database
10. Make sure there is enough free space for the SYSAUX (600 MB) and SYSTEM (600 MB) tablespaces
11. The APPLSYS.FND_STATTAB table has to be upgraded after the database upgrade
12. Create a pfile from the old ORACLE_HOME/dbs and copy it to the new ORACLE_HOME/dbs directory
13. Update the following init.ora parameters:
_disable_fast_validate=TRUE
event="31151 trace name context forever, level 0x100"
Set diagnostic_dest to ORACLE_BASE and comment out all the background dump destinations (udump, bdump, etc.)
compatible='11.1.0'
14. Set the environment variables ORACLE_HOME, LD_LIBRARY_PATH, PATH, ORACLE_SID, etc.
15. Start up the database from the old Oracle home and run the following script from the new ORACLE_HOME/rdbms/admin directory:
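Steps 12-14 can be sketched in shell as follows. The old-home path is an assumption; the new 11g home path is taken from later in this post:

```shell
# Sketch of steps 12-14: copy the pfile and point the environment at the new home.
OLD_OH=/u02/app/oracle/product/10.2.0       # assumption: old 10.2.0.2 home
export ORACLE_HOME=/u02/app/oracle/product/11g
export ORACLE_SID=RAC
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

echo "cp $OLD_OH/dbs/init${ORACLE_SID}.ora $ORACLE_HOME/dbs/"
# Then edit the copied pfile:
#   _disable_fast_validate=TRUE
#   event="31151 trace name context forever, level 0x100"
#   diagnostic_dest=<ORACLE_BASE>   # and remove the *_dump_dest parameters
#   compatible='11.1.0'
```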

SQL> @utlu111i.sql
Oracle Database 11.1 Pre-Upgrade Information Tool 10-04-2009 21:17:34
.
**********************************************************************
Database:
**********************************************************************
--> name: RAC
--> version: 10.2.0.2.0
--> compatible: 10.2.0
--> blocksize: 8192
--> platform: Linux IA (32-bit)
--> timezone file: V3
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 9815 MB
--> CTXD tablespace is adequate for the upgrade.
.... minimum required size: 15 MB
--> ODM tablespace is adequate for the upgrade.
.... minimum required size: 10 MB
--> APPS_UNDOTS1 tablespace is adequate for the upgrade.
.... minimum required size: 1057 MB
--> APPS_TS_TX_DATA tablespace is adequate for the upgrade.
.... minimum required size: 3889 MB
--> APPS_TS_QUEUES tablespace is adequate for the upgrade.
.... minimum required size: 86 MB
--> OLAP tablespace is adequate for the upgrade.
.... minimum required size: 16 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 334 MB
.
**********************************************************************
Update Parameters: [Update Oracle Database 11.1 init.ora or spfile]
**********************************************************************
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.1 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.1 init.ora or spfile]
**********************************************************************
--> "background_dump_dest" replaced by "diagnostic_dest"
--> "user_dump_dest" replaced by "diagnostic_dest"
--> "core_dump_dest" replaced by "diagnostic_dest"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
--> Oracle XDK for Java [upgrade] VALID
--> Real Application Clusters [upgrade] INVALID
--> OLAP Analytic Workspace [upgrade] VALID
--> OLAP Catalog [upgrade] VALID
--> Oracle Text [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle interMedia [upgrade] VALID
--> Spatial [upgrade] VALID
--> Data Mining [upgrade] VALID
--> Oracle OLAP API [upgrade] VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Database is using an old timezone file version.
.... Patch the 10.2.0.2.0 database to timezone file version 4
.... BEFORE upgrading the database. Re-run utlu111i.sql after
.... patching the database to record the new timezone file version.
WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 11g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
.... SYS
.... OLAPSYS
.... CTXSYS
.... XDB
.... ORDSYS
.... MDSYS
WARNING: --> Database contains INVALID objects prior to upgrade.
.... The list of invalid SYS/SYSTEM objects was written to
.... registry$sys_inv_objs.
.... The list of non-SYS/SYSTEM objects was written to
.... registry$nonsys_inv_objs.
.... Use utluiobj.sql after the upgrade to identify any new invalid
.... objects due to the upgrade.
.... USER APPS has 1 INVALID objects.
WARNING: --> Database contains schemas with objects dependent on network
packages.
.... Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.
.... USER APPS has dependent objects.
.

PL/SQL procedure successfully completed.

You can see the warning message related to Network ACLs. This relates to XML DB and fine-grained access control for external network services. Please refer to the Oracle upgrade documentation for more information.
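After the upgrade, such a warning is typically addressed by creating a network ACL with the DBMS_NETWORK_ACL_ADMIN package. A hedged sketch - the ACL file name and the blanket '*' host assignment are illustrative assumptions, not values from this system:

```shell
# Emit the ACL statements into a script to review before running in SQL*Plus as SYS.
cat <<'EOF' > create_acl.sql
-- Sketch: grant APPS network access under 11g fine-grained access control
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'apps_acl.xml',          -- hypothetical ACL name
    description => 'Network access for APPS',
    principal   => 'APPS',
    is_grant    => TRUE,
    privilege   => 'connect');
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(
    acl       => 'apps_acl.xml',
    principal => 'APPS',
    is_grant  => TRUE,
    privilege => 'resolve');
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl  => 'apps_acl.xml',
    host => '*');                           -- tighten to specific hosts in practice
  COMMIT;
END;
/
EOF
echo "Wrote create_acl.sql"
```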

16. SQL> select * from v$timezone_file;

FILENAME VERSION
------------ ----------
timezlrg.dat 4

This value will be 3 if patch 5632264 (for 10.2.0.2) has not been applied.

17. Fine-tune your 10.2.0.2 database before the upgrade - log buffer, buffer cache, shared pool, java pool, and redo log file size with enough groups.
You can disable archivelog mode for better upgrade performance.
18. Shut down the database (from the old Oracle home)
19. Start up the database in upgrade mode from the new Oracle 11g home
20. Upgrade the database by running catupgrd.sql (utlu111s.sql is the post-upgrade status tool, run afterwards)
SQL> startup pfile=init11g.ora upgrade
ORACLE instance started.

Total System Global Area 8142679040 bytes
Fixed Size 1314580 bytes
Variable Size 1452985068 bytes
Database Buffers 5660944384 bytes
Redo Buffers 27435008 bytes
Database mounted.
Database opened.

SQL> spool 11g_upgrade.log
SQL> !ls -ltr $ORACLE_HOME/rdbms/admin/catupgrd.sql
-rw-r--r-- 1 oracle oinstall 4026 Apr 2 2007 /u02/app/oracle/product/11g//rdbms/admin/catupgrd.sql

SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC> The first time this script is run, there should be no error messages
DOC> generated; all normal upgrade error messages are suppressed.
DOC>
DOC> If this script is being re-run after correcting some problem, then
DOC> expect the following error which is not automatically suppressed:
DOC>
DOC> ORA-00001: unique constraint () violated
DOC> possibly in conjunction with
DOC> ORA-06512: at "", line NN
DOC>
DOC> These errors will automatically be suppressed by the Database Upgrade
DOC> Assistant (DBUA) when it re-runs an upgrade.

The upgrade itself took approximately 1 hour.
But after the upgrade there were about 150,000 invalid objects, which took approximately 4 hours to compile.
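The usual way to recompile that backlog is the stock utlrp.sql script, which ships in $ORACLE_HOME/rdbms/admin and runs UTL_RECOMP in parallel; a minimal sketch (home path as used earlier in this post):

```shell
# Sketch: recompile invalid objects after the upgrade with utlrp.sql.
export ORACLE_HOME=/u02/app/oracle/product/11g   # new 11g home, as above
echo "sqlplus / as sysdba @$ORACLE_HOME/rdbms/admin/utlrp.sql"
# Afterwards, list anything still invalid:
#   SELECT owner, COUNT(*) FROM dba_objects WHERE status = 'INVALID' GROUP BY owner;
```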


Will post soon.

Sunday, August 16, 2009

Oracle Data Warehouse Tuning - 25 Tips

Things to know about Tuning Oracle Data Warehouse Databases -
-----------------------------------------------------------

1. Dimensions and fact tables
2. De-normalization
3. RAID levels (design) - specific to DW applications
4. Bigfile tablespaces - a single datafile can grow up to 128 TB
5. Block size - 16K or 32K - depends on the operating system
6. Partitioning options (separately licensed from Oracle) - range, hash, list, and composite partitioning
7. Partition operations - DDL-specific
8. Bitmap indexes - help a lot
9. Function-based indexes
10. Data compression
11. Direct-path load operations
12. Oracle joins - nested loops, merge, and hash joins
13. SQL tuning - needs a good understanding of SQL tuning
14. Parallel operations
15. Materialized views
16. Dimensions
17. Query rewrite
18. AWR reports
19. ADDM
20. SQL Tuning Advisor
21. Reorganization
22. CBO statistics
23. SQL hints - these help a lot in real life
24. Of course, some init.ora parameters - CBO-related, MTS, etc.
25. IOTs (index-organized tables)
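Tips 6, 8, and 10 combine naturally in fact-table DDL; a hedged sketch written to a script file, with made-up table and column names:

```shell
# Write a reviewable DDL sketch; table/column names are hypothetical.
cat <<'EOF' > dw_sketch.sql
-- Sketch: range-partitioned, compressed fact table with a bitmap index
CREATE TABLE sales_fact (
  sale_date   DATE,
  product_id  NUMBER,
  amount      NUMBER
) COMPRESS
PARTITION BY RANGE (sale_date) (
  PARTITION p2008 VALUES LESS THAN (TO_DATE('01-01-2009','DD-MM-YYYY')),
  PARTITION p2009 VALUES LESS THAN (TO_DATE('01-01-2010','DD-MM-YYYY'))
);

-- Bitmap indexes suit low-cardinality columns; on a partitioned table they must be LOCAL
CREATE BITMAP INDEX sales_fact_prod_bx ON sales_fact (product_id) LOCAL;
EOF
echo "Wrote dw_sketch.sql"
```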

RAC Features for Data Warehouse Databases
---------------------------------------

1. Automatic workload management
2. Parallel query options
3. Parallel instance groups
4. DOP (degree of parallelism)
5. Be aware of interconnect traffic
6. Services
7. Partitioning specific to RAC applications
8. SQL tuning
9. Dedicated temp tablespaces
10. TAF (Transparent Application Failover)
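For points 2-4, the classic controls in this release line are the INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP parameters; a hedged init.ora sketch (instance and group names are made up):

```
# init.ora sketch - restrict parallel query slaves to a named group of instances
rac1.instance_groups='dwgrp'
rac2.instance_groups='dwgrp'
*.parallel_instance_group='dwgrp'
# DOP can then be set per table, e.g.: ALTER TABLE sales_fact PARALLEL 8;
```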

Oracle Portal 10.1.2.0.2 HA Implementation - HA

I have already covered the Oracle iAS 10.1.2.0.2 disaster recovery solution for Oracle AS components (IM and Portal) - please see my previous post.

I will now cover the Oracle Portal HA setup.


For 11G Database Support
------------------------
Of course, Oracle Portal 10.1.4.1/10.1.4.2 is now certified with the 11g database. For Portal 10.1.4.1, follow the recommended patches in Doc ID 460362.1.

Apply the 10.1.2.3 patch set (patch 5983622) to upgrade to 10.1.4.2, and upgrade the database to 11.1.0.6.


Architecture Overview -
----------------------
A 2-node Portal (active/active) cluster running on Linux AS 5. Portal runs on Oracle RAC 10g.

2 nodes for IM. Virtual hostname: IM
2 nodes for OID. Virtual hostname: OID
2 nodes for RAC
2 nodes for Oracle Portal. Virtual hostname: Portal
1 node for the LBR
1 node for Linux DNS

All these nodes are configured in DNS for hostname resolution.

Will post soon.....

Monday, August 10, 2009

Oracle 11g Active Data Guard Implementation Using the RMAN Active Database (Network) Duplication Command - HA






Posting a very interesting topic on Oracle 11g. Initially I hit some issues - ORA-01034, ORA-12528, RMAN-06217, and ORA-00845 errors. Unfortunately I was not able to find much detailed information on how to implement this via Google. It took me about 2 hours to implement - the setup is the same as a normal (Oracle 10g) Data Guard implementation. No major difference.

This feature is called Real-time Query: the physical standby can be open read-only while Redo Apply continues, with no suspension of redo apply. On top of that, 11g supports fast incremental backups on the standby through a block change tracking file - it keeps track of all changed blocks in bitmaps, so backup performance is much better. This should be a real benefit for real-time processing.
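Block change tracking - the feature behind those fast incremental backups - is enabled with a single statement; a sketch written to a script file (the tracking-file path is an assumption):

```shell
# Write a reviewable sketch; the tracking-file location is assumed.
cat <<'EOF' > bct.sql
-- Sketch: enable block change tracking (file path assumed)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oracle/oradata/dg1/bct.f';
-- Verify:
SELECT status, filename FROM v$block_change_tracking;
EOF
echo "Wrote bct.sql"
```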





Oracle 11G performance improvement (Dataguard specific)
-------------------------------------------------------
1. Faster Failover - Failover in seconds with Fast-Start Failover.
2. Faster Redo Transport
Optimized async transport for Maximum Performance mode.
Redo transport compression for gap fetching: new COMPRESSION attribute for log_archive_dest_n.
3. Faster Redo Apply
Parallel media recovery optimization.
4. Faster SQL Apply
Internal optimizations.
5. Fast incremental backup on physical standby database
Support for block change tracking.



(This picture is about fast-start failover - that setup needs a third node running the Data Guard observer, plus a DG Broker configuration. I will cover this topic soon.) Oracle uses the same idea here as Microsoft's SQL Server 2005 database mirroring feature!


Most important Prerequisites are
-------------------------------
1. Both the target and destination databases must be on an identical operating system platform.
2. Oracle Net must be aware of both the target and duplicate instances.
3. Both the target and destination databases must have the same SYSDBA password.
4. The target database must be open or mounted.
5. If the target database is open, it must be in archivelog mode.


Prepare both environments - the primary and the standby database. This is the same basic configuration as any standby setup, so I am skipping it.
Once everything is set up,

start up the primary database (in this case, DG). The primary can be either mounted or open.

Start the standby instance (NOMOUNT, i.e. not mounted).

In pre-11g we used to create a standby control file and copy it over to the standby site; we also copied the datafiles to the standby site.

But in 11g these steps are not needed. Everything is automatic, which is why this process is called "active database (network) duplication".
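One piece the post skips is Oracle Net: active duplication needs working entries for both instances, plus a static listener registration so the NOMOUNT standby accepts SYSDBA connections. A hedged sketch using the dg/dg1 names from this demo (host and port are assumptions based on the session prompts):

```
# tnsnames.ora (both sites) - host/port assumed
DG =
  (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Grid1)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=dg)))
DGSTDBY =
  (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Grid1)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=dg1)))

# listener.ora - static registration so the not-yet-mounted standby accepts connections
SID_LIST_LISTENER =
  (SID_LIST=(SID_DESC=(SID_NAME=dg1)(ORACLE_HOME=/u01/app/oracle/oracle11g)))
```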

Demo
----

DG - Primary Database
DG1 - Standby Database

From primary database - DG
--------------------------
SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FOR
---
YES

Active database (network) duplicate (RMAN) command
--------------------------------------------------

RMAN> duplicate target database for standby
2> db_file_name_convert '/dg/','/dg1/'
3> DORECOVER FROM ACTIVE DATABASE
4> spfile
5> parameter_value_convert '/dg/','/dg1/'
6> set log_file_name_convert '/dg/','/dg1/'
7> set fal_client='dgstdby'
8> set fal_server='dg'
9> set log_archive_dest_1='LOCATION=/u01/oracle/dg1/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=dg1'
10> set log_archive_dest_2='SERVICE=dg LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg'
11> set standby_archive_dest='/u01/oracle/oradata/dg1'
12> set db_unique_name='dg1';
13>
Starting Duplicate Db at 12-AUG-09
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=153 device type=DISK

contents of Memory Script:
{
backup as copy reuse
file '/u01/app/oracle/oracle11g/dbs/orapwdg' auxiliary format
'/u01/app/oracle/oracle11g/dbs/orapwdg1' file
'/u01/app/oracle/oracle11g/dbs/spfiledg.ora' auxiliary format
'/u01/app/oracle/oracle11g/dbs/spfiledg1.ora' ;
sql clone "alter system set spfile= ''/u01/app/oracle/oracle11g/dbs/spfiledg1.ora''";
}
executing Memory Script

Starting backup at 12-AUG-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=133 device type=DISK
Finished backup at 12-AUG-09

sql statement: alter system set spfile= ''/u01/app/oracle/oracle11g/dbs/spfiledg1.ora''

contents of Memory Script:
{
sql clone "alter system set audit_file_dest =
''/u01/app/oracle/admin/dg1/adump'' comment=
'''' scope=spfile";
sql clone "alter system set control_files =
''/u01/oracle/oradata/dg1/control01.ctl'', ''/u01/oracle/oradata/dg1/control02.ctl'', ''/u01/oracle/oradata/dg1/control03.ctl'' comment=
'''' scope=spfile";
sql clone "alter system set log_file_name_convert =
''/dg/'', ''/dg1/'' comment=
'''' scope=spfile";
sql clone "alter system set fal_client =
''dgstdby'' comment=
'''' scope=spfile";
sql clone "alter system set fal_server =
''dg'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_dest_1 =
''LOCATION=/u01/oracle/dg1/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=dg1'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_dest_2 =
''SERVICE=dg LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg'' comment=
'''' scope=spfile";
sql clone "alter system set standby_archive_dest =
''/u01/oracle/oradata/dg1'' comment=
'''' scope=spfile";
sql clone "alter system set db_unique_name =
''dg1'' comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount ;
}
executing Memory Script

sql statement: alter system set audit_file_dest = ''/u01/app/oracle/admin/dg1/adump'' comment= '''' scope=spfile

sql statement: alter system set control_files = ''/u01/oracle/oradata/dg1/control01.ctl'', ''/u01/oracle/oradata/dg1/control02.ctl'', ''/u01/oracle/oradata/dg1/control03.ctl'' comment= '''' scope=spfile

sql statement: alter system set log_file_name_convert = ''/dg/'', ''/dg1/'' comment= '''' scope=spfile

sql statement: alter system set fal_client = ''dgstdby'' comment= '''' scope=spfile

sql statement: alter system set fal_server = ''dg'' comment= '''' scope=spfile

sql statement: alter system set log_archive_dest_1 = ''LOCATION=/u01/oracle/dg1/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=dg1'' comment= '''' scope=spfile

sql statement: alter system set log_archive_dest_2 = ''SERVICE=dg LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg'' comment= '''' scope=spfile

sql statement: alter system set standby_archive_dest = ''/u01/oracle/oradata/dg1'' comment= '''' scope=spfile

sql statement: alter system set db_unique_name = ''dg1'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 1255473152 bytes

Fixed Size 1299624 bytes
Variable Size 721423192 bytes
Database Buffers 520093696 bytes
Redo Buffers 12656640 bytes

contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '/u01/oracle/oradata/dg1/control01.ctl';
restore clone controlfile to '/u01/oracle/oradata/dg1/control02.ctl' from
'/u01/oracle/oradata/dg1/control01.ctl';
restore clone controlfile to '/u01/oracle/oradata/dg1/control03.ctl' from
'/u01/oracle/oradata/dg1/control01.ctl';
sql clone 'alter database mount standby database';
}
executing Memory Script

Starting backup at 12-AUG-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/u01/app/oracle/oracle11g/dbs/snapcf_dg.f tag=TAG20090812T030007 RECID=1 STAMP=694666809
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 12-AUG-09

Starting restore at 12-AUG-09
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=154 device type=DISK

channel clone_default: skipped, AUTOBACKUP already found
channel ORA_AUX_DISK_1: skipped, AUTOBACKUP already found
channel ORA_DISK_1: copied control file copy
Finished restore at 12-AUG-09

Starting restore at 12-AUG-09
using channel ORA_AUX_DISK_1

channel clone_default: skipped, AUTOBACKUP already found
channel ORA_AUX_DISK_1: skipped, AUTOBACKUP already found
channel ORA_DISK_1: copied control file copy
Finished restore at 12-AUG-09

sql statement: alter database mount standby database

contents of Memory Script:
{
set newname for tempfile 1 to
"/u01/oracle/oradata/dg1/temp01.dbf";
switch clone tempfile all;
set newname for datafile 1 to
"/u01/oracle/oradata/dg1/system01.dbf";
set newname for datafile 2 to
"/u01/oracle/oradata/dg1/sysaux01.dbf";
set newname for datafile 3 to
"/u01/oracle/oradata/dg1/undotbs01.dbf";
set newname for datafile 4 to
"/u01/oracle/oradata/dg1/users01.dbf";
backup as copy reuse
datafile 1 auxiliary format
"/u01/oracle/oradata/dg1/system01.dbf" datafile
2 auxiliary format
"/u01/oracle/oradata/dg1/sysaux01.dbf" datafile
3 auxiliary format
"/u01/oracle/oradata/dg1/undotbs01.dbf" datafile
4 auxiliary format
"/u01/oracle/oradata/dg1/users01.dbf" ;
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /u01/oracle/oradata/dg1/temp01.dbf in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 12-AUG-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/oracle/oradata/dg/system01.dbf
output file name=/u01/oracle/oradata/dg1/system01.dbf tag=TAG20090812T030032 RECID=0 STAMP=0
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/u01/oracle/oradata/dg/sysaux01.dbf
output file name=/u01/oracle/oradata/dg1/sysaux01.dbf tag=TAG20090812T030032 RECID=0 STAMP=0
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u01/oracle/oradata/dg/undotbs01.dbf
output file name=/u01/oracle/oradata/dg1/undotbs01.dbf tag=TAG20090812T030032 RECID=0 STAMP=0
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/oracle/oradata/dg/users01.dbf
output file name=/u01/oracle/oradata/dg1/users01.dbf tag=TAG20090812T030032 RECID=0 STAMP=0
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 12-AUG-09

sql statement: alter system archive log current

contents of Memory Script:
{
backup as copy reuse
archivelog like "/u01/oracle/arch/1_4_694653014.dbf" auxiliary format
"/u01/oracle/dg1/1_4_694653014.dbf" ;
catalog clone archivelog "/u01/oracle/dg1/1_4_694653014.dbf";
switch clone datafile all;
}
executing Memory Script

Starting backup at 12-AUG-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=4 RECID=3 STAMP=694666920
output file name=/u01/oracle/dg1/1_4_694653014.dbf RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 12-AUG-09

cataloged archived log
archived log file name=/u01/oracle/dg1/1_4_694653014.dbf RECID=1 STAMP=694666922

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=694666922 file name=/u01/oracle/oradata/dg1/system01.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=2 STAMP=694666922 file name=/u01/oracle/oradata/dg1/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=3 STAMP=694666922 file name=/u01/oracle/oradata/dg1/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=4 STAMP=694666922 file name=/u01/oracle/oradata/dg1/users01.dbf

contents of Memory Script:
{
set until scn 585861;
recover
standby
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 12-AUG-09
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/oracle/dg1/1_4_694653014.dbf
archived log file name=/u01/oracle/dg1/1_4_694653014.dbf thread=1 sequence=4
media recovery complete, elapsed time: 00:00:00
Finished recover at 12-AUG-09
Finished Duplicate Db at 12-AUG-09

Recovery Manager complete.

[oracle@Grid1 oradata]$ sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Wed Aug 12 03:03:05 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect sys/sys2@dg1 as sysdba
Connected.
SQL> select status from v$instance;

STATUS
------------
MOUNTED

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@Grid1 oradata]$ ps -ef|grep pmon
oracle 12811 1 0 02:57 ? 00:00:00 ora_pmon_dg
oracle 13649 1 0 03:00 ? 00:00:00 ora_pmon_dg1
oracle 15014 7640 0 03:03 pts/4 00:00:00 grep pmon
[oracle@Grid1 oradata]$ sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Wed Aug 12 03:03:44 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect / as sysdba
Connected to an idle instance.
SQL> connect sys/sys2@dg1 as sysdba
Connected.
SQL> select status from v$instance;

STATUS
------------
MOUNTED

Active DataGuard Process
------------------------

SQL> alter database open;

Database altered.

SQL> select DATABASE_ROLE,OPEN_MODE,PROTECTION_MODE from v$database;

DATABASE_ROLE OPEN_MODE PROTECTION_MODE
---------------- ---------- --------------------
PRIMARY READ WRITE MAXIMUM PERFORMANCE

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0



SQL> alter database close;
alter database open
*
ERROR at line 1:
ORA-01154: database busy. Open, close, mount, and dismount not allowed now


SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
MRP0 WAIT_FOR_LOG 1 5 0 0

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/oracle/dg1/
Oldest online log sequence 0
Next log sequence to archive 0
Current log sequence 0

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/oracle/dg1/
Oldest online log sequence 13
Next log sequence to archive 0
Current log sequence 15


SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH CLOSING 1 13 1 2
ARCH CLOSING 1 14 1 2
ARCH CONNECTED 0 0 0 0
ARCH CLOSING 1 12 1 267
MRP0 WAIT_FOR_LOG 1 15 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 15 3 2

9 rows selected.

SQL> select DATABASE_ROLE,OPEN_MODE,PROTECTION_MODE from v$database;

DATABASE_ROLE OPEN_MODE PROTECTION_MODE
---------------- ---------- --------------------
PHYSICAL STANDBY READ ONLY MAXIMUM PERFORMANCE

SQL> select status from v$instance;

STATUS
------------
OPEN

From Node 1 -DG
---------------

SQL> select DATABASE_ROLE,OPEN_MODE,PROTECTION_MODE from v$database;

DATABASE_ROLE OPEN_MODE PROTECTION_MODE
---------------- ---------- --------------------
PRIMARY READ WRITE MAXIMUM PERFORMANCE

SQL> select name from v$database;

NAME
---------
DG

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/oracle/arch
Oldest online log sequence 13
Next log sequence to archive 15
Current log sequence 15
SQL> create table test2_dg as select * from dba_objects;

Table created.

SQL> alter system switch logfile;

System altered.


From Node 2 - DG1
-----------------
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/oracle/dg1/
Oldest online log sequence 13
Next log sequence to archive 0
Current log sequence 15
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH CLOSING 1 13 1 2
ARCH CLOSING 1 14 1 2
ARCH CONNECTED 0 0 0 0
ARCH CLOSING 1 12 1 267
MRP0 WAIT_FOR_LOG 1 15 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 15 3 2

9 rows selected.

SQL> desc test2_dg
ERROR:
ORA-04043: object test2_dg does not exist


SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/oracle/dg1/
Oldest online log sequence 13
Next log sequence to archive 0
Current log sequence 15


SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/oracle/dg1/
Oldest online log sequence 14
Next log sequence to archive 0
Current log sequence 16
SQL> desc test2_dg
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER VARCHAR2(30)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(30)
OBJECT_ID NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(19)
CREATED DATE
LAST_DDL_TIME DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
NAMESPACE NUMBER
EDITION_NAME VARCHAR2(30)

Try to create some objects from DG1 - standby database
-----------------------------------------------------

SQL> create table tet as select * from dba_objects;
create table tet as select * from dba_objects
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-16000: database open for read-only access

SQL> select DATABASE_ROLE, open_mode from v$database;

DATABASE_ROLE OPEN_MODE
---------------- ----------
PHYSICAL STANDBY READ ONLY

EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
---------------------------------------------------------------- ----------- -------------- ----------- ------------
LNS ASYNC dest activation 1 1 100 99.95
LNS ASYNC end of log 5126 4961 500456 97.63
LNS wait on ATTACH 2 0 60 30.11
LNS wait on SENDREQ 179 0 240 1.34
LNS wait on DETACH 4 0 0 0
LNS wait on LGWR 2 0 0 0
LGWR-LNS wait on channel 539 539 602 1.12
LGWR wait for redo copy 104 0 2 .02

8 rows selected.


Hope this helps...

I will cover the Switchover and failover later.

Sunday, August 9, 2009

SAP ECC 6 and MSCS Configuration (Active/Standby) - HA






Sample Picture - Single SAP system in the MSCS Cluster







6 Node cluster configuration - Windows 2003 Enterprise edition.
--------------------------------------------------------------

This is one of the SAP's recommended HA configuration.

Node 1 - Oracle 10g installed - 6 GB RAM, dual Xeon, 16 cores - Oracle binaries on the local F: drive; datafiles/logfiles/controlfiles on the shared O: drive for Oracle datafiles (300 GB)
Node 2 - Oracle 10g installed - same layout as node 1
Node 1 - First MSCS node - running SCS - 4 GB RAM - shared S: drive for the SAP mount point (100 GB)
Node 2 - Second MSCS node - standby SCS - 4 GB RAM - shared S: drive for the SAP mount point
Node 3 - MS Active Directory - primary server - 2 GB RAM - local drive
Node 4 - First SAP DI instance - local F: drive
Node 5 - Second SAP DI instance - local F: drive

You can further enhance this HA architecture for more protection:

An enqueue replication server for HA.
An LBR to load-balance across the available DI instances.
Oracle RAC for HA and scaling.

SAP services run on the first node (SUBSCRIBE), with MSCS running on the second node (ECC6).



Sample Picture - Multiple SAP systems in the MSCS Cluster

Minimum Requirements

1. Shared drives for the Oracle and SAP mount points
2. MSCS installed and fully functional
3. MS AD installed and fully functional
4. DNS configured for all hosts
5. linkd installed and configured if multiple SAP systems are to be supported in the MS cluster; SAP NetWeaver 2004s SR2 or higher ABAP+Java (kernel 7.00) is used
6. Oracle Fail Safe - optional


SAP services moving to the second Node



SAP services completely moved to the second Node




Steps

All nodes are already configured with Linkd shared mount points.
1.Install Oracle binaries on node1
2.Install SAP SCS on the first node - node2 - instance # 00. S Drive
3.Install ASCS on the first node - node2 - instance # 01. S Drive
4.SAP configuration prepare on the first SCS node - node2
5.Configure the second node for the MSCS - node3
6.Install Database instance - O Drive

Test the failover by moving the MSCS groups from node2 to node3. All the services (SCS, ASCS and Oracle) will be moved to the second standby node.

7.Install the first DI instance on node5
8.Install the second DI instance on node6
9.Check the SAP connectivity using the two DIs with the SAP system on the primary (active) cluster.
10.Switch over the SAP system from node2 to node3 and repeat step 9. Since the network name (say sapcluster) and IP also switch to node3, there should not be any problem in connecting.
11.Configure the SAP logon group for the two DI instances
12.For more HA you can configure the enqueue replication server.
13.Complete the TMS and other initial configurations
14.Backup the SAP instance using BRTOOLS
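The failover test described in the steps above can also be driven from the command line with cluster.exe instead of the Cluster Administrator GUI. A dry-run sketch; the group name "SAP Group" and the node name are assumptions:

```shell
# Dry-run sketch: cluster.exe is a Windows tool, so we only echo the commands.
# The group name "SAP Group" and the node name node3 are assumptions.
run() { echo "$@"; }

run cluster group "SAP Group" /moveto:node3   # move the SAP resources to the standby node
run cluster group "SAP Group" /status         # verify the group is online on node3
```

In a real test you would run the echoed commands on a cluster node and then repeat the DI connectivity check.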

This completes the basic overview about the SAP Active/Passive cluster implementation.

Wednesday, August 5, 2009

SAP Performance Tuning

One needs to understand the OS, networking, hardware, the RDBMS and clustering to performance-tune any ERP product. SAP provides very good transaction codes for monitoring and troubleshooting all these components. Familiarity with these components helps in understanding the transaction code descriptions, which in turn helps in managing the SAP system more efficiently.

During the process of analyzing the root cause of a performance issue, first make sure the CPU is not close to 100% usage.

- If the CPU is not close to 100%, monitoring tools like Toad, Foglight and Grid Control will help in resolving the performance issue.
- If the CPU is at 100%, those tools will not help, because

1. It is a very time-consuming process, or
2. The saturated CPU may block these tools from connecting.

In that case, native tools like SQL*Plus, niping, netstat, sar, vmstat, mpstat, top, ps, iostat, msprot, ensmon, dbmon, msmon etc. will help.
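As a quick illustration of the native-tools approach, the idle column of vmstat output already tells you whether the heavier monitoring tools are even worth starting. A small sketch; the sample line is made up:

```shell
# Decide between monitoring tools and native tools from the CPU idle %.
# On Linux vmstat, "id" (idle %) is the 15th column; this sample line is made up.
sample="1 0 0 123456 78901 234567 0 0 10 20 300 400 5 3 92 0"
idle=$(echo "$sample" | awk '{print $15}')

if [ "$idle" -lt 10 ]; then
  echo "CPU close to 100% - stick to native tools"
else
  echo "CPU idle ${idle}% - monitoring tools are usable"
fi
```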

Even though Oracle provides features like ADDM and the SQL Tuning Advisor for performance tuning, it is better to also understand how the optimizer works. In a real case, ADDM and the SQL Tuning Advisor helped me optimize the performance of database batch jobs (reducing the run time from 14 hours to 3 hours), but further manual tuning (using materialized views and SQL hints) reduced the time to 30 minutes.

Certain databases (as in the telecom industry) are very sensitive and need immediate attention for any critical performance issue. Such a situation can be related to a table buffer/lock issue (SAP), a latch/lock/bad-query issue (Oracle), a disk IO queue issue (operating system, > 30%), the message server not coming up in an MSCS environment after an MSCS resource switchover (the message server port not released from the old active host; check with netstat -a), or the MSCS cluster freezing at the shared disk level (cluster level, quorum disks).
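The message-server case above can be confirmed with netstat: the port must be released on the old active host before the server can bind on the new one. A self-contained sketch; port 3600 (for instance number 00) and the sample netstat line are assumptions:

```shell
# Check whether the message server port is still held after an MSCS switchover.
# Port 3600 (instance number 00) is an assumption; a sample netstat line is
# embedded so the sketch is self-contained. Real use: netstat -an | grep :3600
port=3600
sample="tcp        0      0 0.0.0.0:3600            0.0.0.0:*               LISTEN"

if echo "$sample" | grep -q ":${port} .*LISTEN"; then
  echo "port ${port} still held - message server cannot start on the new node"
else
  echo "port ${port} is free"
fi
```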

In general, use the sequence given below to start any troubleshooting.

Level 1 --> OS
Level 2 --> Network
Level 3 --> Oracle
Level 4 --> SAP

So root cause analysis proceeds from level 1 to level 4 (possibly more when additional layers, such as a cluster, are used).

From my experience at one telecommunications client, there were call drops during peak hours. I started troubleshooting from level 1 and found the disk IOQ was > 25%. At level 3 (Oracle) I found one user was rebuilding a huge index, which caused the disk IO to peak. So I had to kill that session (dropping calls would drop the revenue) to bring the IO back to normal, < 10%. This fixed the call drop issue. A disk IOQ > 20-25% would be expensive for any sensitive database; it can be OK (even > 40%) for other, non-sensitive databases. So never allow expensive operations during peak hours, and always monitor those operations.
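The IOQ thresholds from this experience can be captured in a small helper; the exact cut-offs are a rule of thumb from the story above, not a universal standard:

```shell
# Classify disk I/O queue utilisation for a latency-sensitive database.
# Thresholds follow the rule of thumb in the text: < 10% is normal,
# 20-25% is already expensive, anything above that is critical.
ioq_status() {
  pct=$1
  if [ "$pct" -lt 10 ]; then
    echo normal
  elif [ "$pct" -le 25 ]; then
    echo watch
  else
    echo critical
  fi
}

ioq_status 8    # normal
ioq_status 22   # watch
ioq_status 30   # critical
```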

A few useful transaction codes that help troubleshoot the root cause of a performance issue
---------------------------------------------------

Level 1 (OS)
------------

AL11 - SAP Directories
OS02 - OS configuration
OS06/ST06 - OS monitor
OS07 - SAPOSCOL targets (remote OS monitor)
SM50 - Process Overview

Level 2 (Network)
-----------------

OS01 - LAN Check via ping

Level 3 (Oracle)
---------------

DBACOCKPIT - Configuration and maintenance (wonderful Tcode)
DB01 - Oracle Lock monitor
DB02 - Database Performance
DB03 - Parameter changes at the database level
DB12 - Backup logs
DB17 - Database check condition
DBCO - Database connections
ST04N - Database Performance Monitor
DB20 - Table statistics
DB16 - Database checks
DB14 - DBA log display

Level 4 (SAP)
------------

AL12 - Buffer monitoring
ST01 - System Trace
ST02 - SAP Memory Monitor
ST03N - System Load monitor
ST05 - Performance Analysis
ST07 - Application Monitor
ST10 - Table Access Statistics
SM50 - Process Overview
SM12 - Lock entries
SM59 - RFC Connections
SMICM - ICM Monitor
SMMS - Message server monitor
Finally CCMS

Hope this helps.

MS SQL Server 2008 Replication - HA

Very interesting topic. Will post soon...

MS SQL Server 2008 Mirroring - HA

Very interesting topic. Will post soon...

MS SQL Server 2008 - Log Shipping - HA

Very interesting topic. Will post soon...

Oracle 11G RMAN New Features

will post soon...

SAP EP 6 installation

Will post soon...

SAP Solution Manager - MSCS HA Configuration

This setup needs a better understanding of the SAP components. I was not able to publish this in detailed, step-by-step form, but here is a brief introduction; I will provide more details if anyone needs them.

Requirements -

1.Shared disks - for the SAP mount points (//sapcluster/sapmnt), Oracle data files (including archive files, log files and control files) and quorum disks for the MSCS cluster services.
2.MS Active Directory
3.DNS configuration
4.MSCS cluster services
5.Oracle Fail Safe (Optional)
6.Oracle RAC (for better scaling and high availability for the Oracle database)
7.Windows 2003 Server



6-node configuration for the HA setup - no SPOF for most of the SAP components

Configuration and Setup in Brief
--------------------------------
1.Node1 - called Publish. Install and configure MS Active Directory and DNS.
Domain called sapdomain.com.
2.Node2 - called Subscribe. Will be part of the sapdomain.com domain. This will be the first node for HA for the SAP components, running SCS and ASCS.
3.Node3 - called Witness. Will be the second node for SCS and ASCS.
4.Node4 - called Repeng. Will be running the replication enqueue server. Shared disk S will be mounted via NFS.
5.Node6 - called Diag1. Will be running dialog (DI) SAP services. Installed on a local drive.
6.Node7 - called Diag2. Will be running dialog (DI) SAP services. Installed on a local drive.


For more HA, you may separate the Oracle database from the SAP node and use the RAC feature. The dialog instances can be configured with SAP logon groups and may also be used with an LBR for load balancing with HA.

Not complete...

Oracle Fail Safe - HA implementation

Will post soon....

Saturday, June 27, 2009

Oracle AS Guard cloning - Add MT instance using Oracle ASGuard

Oracle AS Guard cloning

Please see my previous thread on how to setup the Oracle AS Guard setup.

In this thread I am posting how to install an MT instance (Portal and Wireless option) and clone the newly installed MT instance to the standby instance.

Even if the primary node is unavailable (not reachable), you can still switch over to the secondary node (manual switchover). I have tested this and it works perfectly fine. OracleAS Guard is quite flexible.

The MT cloning took 10 min and the switchover took 3 min.
In a switchover situation, your cutover time would be approx 3-5 min.

With the help of a DNS switch and/or an LBR, the failover (FO) or switchover (SO) will be very transparent to the users and other applications.

Caution -
-------

Be a little cautious using this option for your Oracle iAS environment as an Active/Passive cluster. Without a proper understanding of this process, you may corrupt the AS Guard setup.

This feature has to be completely tested before using it in a production environment.
Steps
-----
1.Keep the primary and standby instances up and running (both instances running only the infrastructure), i.e. Oracle AS Guard is set up and active for the infra instance.
2.Install the MT (Portal and Wireless option) on the first (primary) instance
3.Clone the newly added MT instance to the standby instance (the DSA port should be the same as the primary DSA). Start asgctl from the MT instance (not from the infra home)
4.If the clone is successful, verify the topology and confirm everything is fine.
5.Now test that the switchover is possible
6.Switch back to the primary node

on Primary node
===============
After the MT install, stop all the MT opmn process

connect to asgctl

connect to primary database

clone the MT instance to the standby instance.

ASGCTL> clone instance mt.ias1.ushasuji.com to rinfra
Generating default policy for this operation

IAS1:7891
Clone Instance

IAS1:7891 (home /u01/oracle/product/mt)
Running configure option in bkp_restore script.
Running node_backup prepare option in bkp_restore script. This may take a few minutes
Running node_backup image_backup option in bkp_restore script. This may take a few minutes
Please run /u01/oracle/product/mt/backup_restore/brHome12228823ba8.sh as root. Enter "Yes" after the script is completed successfully, "No" otherwise. Yes or No
yes

ias1 192.168.1.16:7891 (home /u01/oracle/product/mt)
Copying backup file "/u01/oracle/product/loha/mt/as_mt.ias1.ushasuji.com_2009-06-28_17-04-03.img" from "192.168.1.15" [192.168.1.15] to "192.168.1.16" at "/u01/oracle/product/loha/mt/as_mt.ias1.ushasuji.com_2009-06-28_17-04-03.img"
Unpacking image backup at target host. This may take a few minutes
Running node_restore sys_init option in bkp_restore script.
Please run /u01/oracle/product/mt/backup_restore/brHome122283ed9fe.sh as root. Enter "Yes" after the script is completed successfully, "No" otherwise. Yes or No
yes
Running node_restore inst_register option in bkp_restore script.
Please run /u01/oracle/product/mt/backup_restore/brHome122283ed9fe.sh as root. Enter "Yes" after the script is completed successfully, "No" otherwise. Yes or No
yes
ASGCTL>

Check that everything is copied onto the standby MT home. It may prompt you to run the copy script twice as the root user.

This clone process stops the MR database instance and all the infra/MT opmn processes on node1 and starts them on node2; that is, the current primary node becomes the standby and the old standby becomes the primary. So easy... All is taken care of automatically by Oracle AS Guard.

If there are no errors, you are all set to test the switchover.

From Primary Node
-------------------

Now switch over both the infrastructure and the mid-tier applications to the standby node - rinfra

ASGCTL> switchover topology to rinfra

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
Connecting to the primary database ias.ushasuji.com
Gathering information from the primary database ias.ushasuji.com

ias1:7891 (home /u01/oracle/product/mt)
Shutting down component HTTP_Server
Shutting down component WebCache
Shutting down component OC4J
Shutting down component dcm-daemon
Shutting down component LogLoader

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Running emctl command: "/u01/oracle/product/infra/bin/emctl status iasconsole".
Shutting down component OID
Shutting down component HTTP_Server
Shutting down component OC4J
Shutting down component dcm-daemon
Shutting down component LogLoader

IAS1:7891 (home /u01/oracle/product/mt)
Shutting down each instance in the topology
Shutting down component HTTP_Server
Shutting down component WebCache
Shutting down component OC4J
Shutting down component dcm-daemon
Shutting down component LogLoader

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
Running emctl command: "/u01/oracle/product/infra/bin/emctl status iasconsole".
Shutting down component OID
Shutting down component HTTP_Server
Shutting down component OC4J
Shutting down component dcm-daemon
Shutting down component LogLoader

IAS1:7891 (home /u01/oracle/product/mt)
Synchronizing topology
Synchronizing each instance in the topology to standby topology
Starting backup of topology ""
Backing up and copying data to the standby topology
Backing up each instance in the topology
Starting backup of instance "mt.ias1.ushasuji.com"
Configuring the backup script

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
Starting backup of instance "infra.ias1"
Configuring the backup script
Oracle Application Server Backup/Recovery Tool 10g (10.1.2.0.2)
Copyright (c) 2004, 2005, Oracle. All rights reserved.

Performing configuration ...
Configuration completed successfully !!!
Backing up the instance configuration files. This may take a few minutes

IAS1:7891 (home /u01/oracle/product/mt)
Backing up the instance configuration files. This may take a few minutes
Deleted directory "/u01/oracle/product/mt/dsa/backup".

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
Deleted directory "/u01/oracle/product/infra/dsa/backup".
Oracle Application Server Backup/Recovery Tool 10g (10.1.2.0.2)
Copyright (c) 2004, 2005, Oracle. All rights reserved.

Backing up configuration files ...
Warning(s) during backup - please check /u01/oracle/product/infra/dsa/backup/log/2009-06-28_17-46-19_config_bkp.log
Configuration backup archive is /u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-28_17-46-19.jar
Configuration backup completed with warning(s) logged in
file /u01/oracle/product/infra/dsa/backup/log/2009-06-28_17-46-19_config_bkp.log

IAS1:7891 (home /u01/oracle/product/mt)
Copying backup file "/u01/oracle/product/mt/dsa/backup/config_bkp_2009-06-28_17-46-19.jar" from "IAS1" [192.168.1.15] to "192.168.1.16" at "/u01/oracle/product/mt/dsa/backup/mt.ias1.ushasuji.com/config_bkp_2009-06-28_17-46-19.jar"

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
Copying backup file "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-28_17-46-19.jar" from "IAS1" [192.168.1.15] to "192.168.1.16" at "/u01/oracle/product/infra/dsa/backup/infra.ias1/config_bkp_2009-06-28_17-46-19.jar"
Copying backup catalog file /u01/oracle/product/infra/backup_restore/data/catalog.txt from IAS1 [192.168.1.15] to 192.168.1.16

IAS1:7891 (home /u01/oracle/product/mt)
Copying backup catalog file /u01/oracle/product/mt/backup_restore/data/catalog.txt from IAS1 [192.168.1.15] to 192.168.1.16

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
Completed backup of instance "infra.ias1"

IAS1:7891 (home /u01/oracle/product/mt)
Completed backup of instance "mt.ias1.ushasuji.com"
Starting restore of topology ""
Restoring data to the standby topology
Restoring each instance in the topology

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Copying backup file "/u01/oracle/product/infra/dsa/backup/infra.ias1/config_bkp_2009-06-28_17-46-19.jar" from "192.168.1.16" to "ias1" [192.168.1.16] at "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-28_17-46-19.jar"

ias1:7891 (home /u01/oracle/product/mt)
Copying backup file "/u01/oracle/product/mt/dsa/backup/mt.ias1.ushasuji.com/config_bkp_2009-06-28_17-46-19.jar" from "192.168.1.16" to "ias1" [192.168.1.16] at "/u01/oracle/product/mt/dsa/backup/config_bkp_2009-06-28_17-46-19.jar"

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Deleting backup file "/u01/oracle/product/infra/dsa/backup/infra.ias1/config_bkp_2009-06-28_17-46-19.jar"

ias1:7891 (home /u01/oracle/product/mt)
Deleting backup file "/u01/oracle/product/mt/dsa/backup/config_bkp_2009-06-28_17-46-19.jar"
Starting restore of instance "mt.ias1.ushasuji.com"

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Starting restore of instance "infra.ias1"

ias1:7891 (home /u01/oracle/product/mt)
Configuring the backup script

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Configuring the backup script

ias1:7891 (home /u01/oracle/product/mt)
Restoring the instance configuration files. This may take a few minutes

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Oracle Application Server Backup/Recovery Tool 10g (10.1.2.0.2)
Copyright (c) 2004, 2005, Oracle. All rights reserved.

Performing configuration ...
Configuration completed successfully !!!
Restoring the instance configuration files. This may take a few minutes
Oracle Application Server Backup/Recovery Tool 10g (10.1.2.0.2)
Copyright (c) 2004, 2005, Oracle. All rights reserved.

Restoring configuration files from backup 2009-06-28_17-46-19 ...
Configuration file restore completed successfully !!!
Running opmnctl reload command: "/u01/oracle/product/infra/opmn/bin/opmnctl reload".

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
Starting backup/synchronization of database "ias.ushasuji.com"

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Starting restore/synchronization of database "ias.ushasuji.com"

IAS1:7891
Synchronizing topology completed successfully

IAS1 192.168.1.15:7890
Connecting to the primary database ias.ushasuji.com
Gathering information from the primary database ias.ushasuji.com
Switchover - init.

ias1 192.168.1.16:7890
Switchover - init.

IAS1 192.168.1.15:7890
Switchover - primary preparing.

ias1 192.168.1.16:7890
Switchover - standby preparing.
Stopping Job Queue Scheduler.
Stopping Advanced Queue Time Manager.
Starting managed recovery in the standby database.

IAS1 192.168.1.15:7890
Switchover - primary processing.
Active user sessions detected
The primary database is ready to switchover
Switching over the primary database to the standby role
Shutting down the old primary database
Issuing "shutdown immediate;" to shutdown the database
Starting up the old primary database as the new standby
Issuing "startup nomount;" to start the database
Old primary database now running as the new standby

ias1 192.168.1.16:7890
Switchover - standby processing.
Getting the switchover status from the database.
Switching over the standby database to the primary role.
Shutting down the new primary database.
Issuing "shutdown immediate;" to shutdown the database
Starting up the new primary database.
Issuing "startup open;" to start the database
Switchover - standby finishing.
Enabling the archive destination in the new primary database.
Starting log archiving in the new primary database.

IAS1 192.168.1.15:7890
Switchover - primary finishing.
Deferring the archive destination in the new standby database.
Enabling managed recovery in the new standby database.

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
Running opmnctl reload command: "/u01/oracle/product/infra/opmn/bin/opmnctl reload".
Starting component OID
Starting component dcm-daemon
Configuring the backup script
Oracle Application Server Backup/Recovery Tool 10g (10.1.2.0.2)
Copyright (c) 2004, 2005, Oracle. All rights reserved.

Performing configuration ...
Configuration completed successfully !!!
Executing restore_config -F DCM-resyncforce option in bkp_restore.pl script
Oracle Application Server Backup/Recovery Tool 10g (10.1.2.0.2)
Copyright (c) 2004, 2005, Oracle. All rights reserved.

Resynchronizing instance with DCM repository ...
Running opmnctl reload command: "/u01/oracle/product/infra/opmn/bin/opmnctl reload".
Executing opmnctl startall command

ias1:7891 (home /u01/oracle/product/mt)
Starting component dcm-daemon
Configuring the backup script
Executing restore_config -F DCM-resyncforce option in bkp_restore.pl script
Executing opmnctl startall command

ias1:7891
HA directory exists for instance mt.ias1.ushasuji.com

ias1 192.168.1.16:7890 (home /u01/oracle/product/infra)
HA directory exists for instance infra.ias1

IAS1:7891
HA directory exists for instance mt.ias1.ushasuji.com

IAS1 192.168.1.15:7890 (home /u01/oracle/product/infra)
HA directory exists for instance infra.ias1

ias1 192.168.1.16:7890
Verifying that the topology is symmetrical in both primary and standby configuration
Switchover topology to standby host completed successfully


From node2 (new primary node)
---------------------------

Try to switch over back to the original primary node.

Monday, June 22, 2009

OPMN won't start up after successful installation of Oracle Portal Mid Tier - Release 10.1.2.0.2

After successful installation of Oracle Portal, I stopped all the iAS components using opmnctl - no errors.
But when I tried to start the opmn process during the Portal mid-tier cloning, I got this error:

RCV: Transport endpoint is not connected Communication error with the OPMN server local port. Check the OPMN log files


I was not even able to start the OPMN process.

opmnctl start gave me the same error.

While troubleshooting the issue, I found almost 35 processes linked to the MT opmn process, like:
/u01/oracle/product/mt/opmn/bin/opmn -d

After killing all these processes I was able to start the opmn process.
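The cleanup described above can be scripted. This sketch extracts the PIDs of lingering opmn processes from ps-style output; a sample is embedded so nothing is actually killed here, and in real use you would pipe the PIDs to kill:

```shell
# Find the PIDs of lingering MT opmn processes.
# Sample ps -ef output is embedded so the sketch is self-contained.
# Real use would be: ps -ef | grep '[o]pmn' | awk '{print $2}' | xargs kill
sample="oracle    4321     1  0 10:00 ?  00:00:01 /u01/oracle/product/mt/opmn/bin/opmn -d
oracle    4322  4321  0 10:00 ?  00:00:00 /u01/oracle/product/mt/opmn/bin/opmn -d"

# The [o] trick keeps the grep process itself out of the match when run live.
pids=$(echo "$sample" | grep '[o]pmn' | awk '{print $2}')
echo "$pids"
```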


This could be another bug in Oracle iAS.

Wednesday, June 17, 2009

Oracle Portal 10.1.2.0.2 Cloning

Will post soon

All about LVM's - Small Demo

All about LVM's
-----------------

Because of the wide range of technology today, I feel a DBA should have some hardware and OS knowledge.

I am a big fan of LVMs. I have been working with LVMs (including Veritas Volume Manager and Solstice DiskSuite on Solaris) for the past 10 years. Veritas used to be a little complex in those days, but not any more today.

I had an opportunity to work on Veritas Volume Manager and LVMs (approx 8 years ago, for a client called Auripay at Cambridge). I used to hot-swap the SCSI drives and test the VM groups. It was so interesting to see the VM groups swapping to different nodes. Prasad Putta (MD of Auripay, now VP at Oatsystems) encouraged me and gave me full freedom on this implementation and testing. I got very good exposure to Veritas there.

I have done all my home testing with LVMs only, not just with a regular ext2 FS, because it gives me the flexibility to add more space to the existing volumes as needed, and additionally I see much better system performance.


I just want to share my small (quite simple) Linux LVM experience with other DBAs.

Demo
----------

I created a small device of 8GB in size.

After login, find out the device name; here it is sdd.

You can use the dmesg command to find this:


SCSI device sdd: 16777216 512-byte hdwr sectors (8590 MB)
sdd: cache data unavailable
sdd: assuming drive cache: write through
SCSI device sdd: 16777216 512-byte hdwr sectors (8590 MB)
sdd: cache data unavailable
sdd: assuming drive cache: write through
sdd: unknown partition table
Attached scsi disk sdd at scsi0, channel 0, id 3, lun 0
Fusion MPT SAS Host driver 3.02.62.01rh
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during recovery.
kjournald starting. Commit interval 5 seconds
EXT3-fs: sda3: orphan cleanup on readonly fs
ext3_orphan_cleanup: deleting unreferenced inode 115623
EXT3-fs: sda3: 1 orphan inode deleted
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
SELinux: Disabled at runtime.
SELinux: Unregistering netfilter hooks
inserting floppy driver for 2.6.9-34.EL
Floppy drive(s): fd0 is 1.44M
FDC 0 is a post-1991 82077
vmxnet: module license 'unspecified' taints kernel.
VMware vmxnet virtual NIC driver release 1.0.1 build-29996
ACPI: PCI interrupt 0000:00:11.0[A] -> GSI 10 (level, low) -> IRQ 10
Found vmxnet/PCI at 0x1424, irq 10.
vmxnet: numRxBuffers=(100*24) numTxBuffers=(100*64) driverDataSize=9000
divert: allocating divert_blk for eth0
eth0: vmxnet ether at 0x1424 assigned IRQ 10.
ACPI: PCI interrupt 0000:00:12.0[A] -> GSI 9 (level, low) -> IRQ 9
Found vmxnet/PCI at 0x14a4, irq 9.
vmxnet: numRxBuffers=(100*24) numTxBuffers=(100*64) driverDataSize=9000
divert: allocating divert_blk for eth1
eth1: vmxnet ether at 0x14a4 assigned IRQ 9.
ACPI: PCI interrupt 0000:00:13.0[A] -> GSI 5 (level, low) -> IRQ 5
Found vmxnet/PCI at 0x1824, irq 5.
vmxnet: numRxBuffers=(100*24) numTxBuffers=(100*64) driverDataSize=9000
divert: allocating divert_blk for eth2
eth2: vmxnet ether at 0x1824 assigned IRQ 5.
pcnet32.c:v1.31 29.04.2005 tsbogend@alpha.franken.de
ACPI: PCI interrupt 0000:00:14.0[A] -> GSI 11 (level, low) -> IRQ 11
shpchp: shpc_init : shpc_cap_offset == 0
shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
USB Universal Host Controller Interface driver v2.2
ACPI: PCI interrupt 0000:00:07.2[D] -> GSI 9 (level, low) -> IRQ 9
uhci_hcd 0000:00:07.2: UHCI Host Controller
uhci_hcd 0000:00:07.2: irq 9, io base 00001060
uhci_hcd 0000:00:07.2: new USB bus registered, assigned bus number 1
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 2 ports detected
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
NET: Registered protocol family 10
Disabled Privacy Extensions on device c0378f60(lo)
IPv6 over IPv4 tunneling driver
divert: not allocating divert_blk for non-ethernet device sit0
ip_tables: (C) 2000-2002 Netfilter core team
vmxnet_init_ring: offset=9000 length=9000
vmxnet_init_ring: offset=9000 length=9000
ip_tables: (C) 2000-2002 Netfilter core team
vmxnet_init_ring: offset=9000 length=9000
ACPI: AC Adapter [ACAD] (on-line)
ACPI: Power Button (FF) [PWRF]
eth0: no IPv6 routers present
eth1: no IPv6 routers present
eth2: no IPv6 routers present
EXT3 FS on sda3, internal journal
device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel@redhat.com
cdrom: open failed.
kjournald starting. Commit interval 5 seconds
EXT3 FS on sda1, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
Adding 1534196k swap on /dev/sda2. Priority:-1 extents:1
IA-32 Microcode Update Driver: v1.14
microcode: No new microdata for cpu 0
IA-32 Microcode Update Driver v1.14 unregistered
parport0: PC-style at 0x378 [PCSPP,TRISTATE]
ip_tables: (C) 2000-2002 Netfilter core team
ip_tables: (C) 2000-2002 Netfilter core team
ip_tables: (C) 2000-2002 Netfilter core team
ip_tables: (C) 2000-2002 Netfilter core team
iscsi-sfnet: Loading iscsi_sfnet version 4:0.1.11-2
iscsi-sfnet: Control device major number 254
OCFS2 Node Manager 1.2.3 Wed Jul 26 12:04:10 PDT 2006 (build 56074a7e99f767e0530 6907521c8ea25)
OCFS2 DLM 1.2.3 Wed Jul 26 12:04:10 PDT 2006 (build 05157a797e82010a31dfd4c78548 4fe9)
OCFS2 DLMFS 1.2.3 Wed Jul 26 12:04:11 PDT 2006 (build 05157a797e82010a31dfd4c785 484fe9)
OCFS2 User DLM kernel interface loaded
i2c /dev entries driver
ASM: oracleasmfs mounted with options:
ASM: maxinstances=0
SCSI device sdb: 167772160 512-byte hdwr sectors (85899 MB)
sdb: cache data unavailable
sdb: assuming drive cache: write through
sdb: unknown partition table
SCSI device sdc: 96468992 512-byte hdwr sectors (49392 MB)
sdc: cache data unavailable
sdc: assuming drive cache: write through
sdc: unknown partition table
SCSI device sdd: 16777216 512-byte hdwr sectors (8590 MB)
sdd: cache data unavailable
sdd: assuming drive cache: write through
sdd: unknown partition table
parport0: PC-style at 0x378 [PCSPP,TRISTATE]
lp0: using parport0 (polling).
lp0: console ready
ip_tables: (C) 2000-2002 Netfilter core team
vmxnet_init_ring: offset=9000 length=9000
eth0: no IPv6 routers present
[root@Apps3 ~]# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Now create 2 partitions using fdisk utility
-------------------------------------------

The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044): +4000M

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (488-1044, default 488):
Using default value 488
Last cylinder or +size or +sizeM or +sizeK (488-1044, default 1044):
Using default value 1044

Command (m for help): p

Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 487 3911796 83 Linux
/dev/sdd2 488 1044 4474102+ 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

First, create a physical volume on the first partition
-------------------------------------------------

[root@Apps3 ~]# pvcreate /dev/sdd1 ---> First Partition
Physical volume "/dev/sdd1" successfully created

Next, create a volume group on top of the physical volume (/dev/sdd1) just created
--------------------------------------------------------------------------------
[root@Apps3 ~]# vgcreate volgrp_1 /dev/sdd1
Volume group "volgrp_1" successfully created

See the physical volume structure
---------------------------------

[root@Apps3 ~]# pvdisplay /dev/sdd1
--- Physical volume ---
PV Name /dev/sdd1
VG Name volgrp_1
PV Size 3.73 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 954
Free PE 954
Allocated PE 0
PV UUID v3tkkp-Wsa2-yk3s-lhwG-wlkn-F8g5-2GcfGS

View the Volume group structure just created
-------------------------------------------

[root@Apps3 ~]# vgdisplay volgrp_1
--- Volume group ---
VG Name volgrp_1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.73 GB
PE Size 4.00 MB
Total PE 954
Alloc PE / Size 0 / 0
Free PE / Size 954 / 3.73 GB
VG UUID zUAAMY-9Z7L-4CsQ-HUKK-p37P-TlvR-XzfUKW

[root@Apps3 ~]# fdisk /dev/sdd -l

Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 487 3911796 83 Linux
/dev/sdd2 488 1044 4474102+ 83 Linux

To increase the VG size, just add the partition /dev/sdd2 to the existing volume group
-----------------------------------------------------------------------------------------
[root@Apps3 ~]# pvcreate /dev/sdd2 --> create this before adding
Physical volume "/dev/sdd2" successfully created
[root@Apps3 ~]# vgextend volgrp_1 /dev/sdd2
Volume group "volgrp_1" successfully extended

Now you see the Volume group size increased from 4GB to 8GB
------------------------------------------------------------
[root@Apps3 ~]# vgdisplay volgrp_1
--- Volume group ---
VG Name volgrp_1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 7.99 GB
PE Size 4.00 MB
Total PE 2046
Alloc PE / Size 0 / 0
Free PE / Size 2046 / 7.99 GB
VG UUID zUAAMY-9Z7L-4CsQ-HUKK-p37P-TlvR-XzfUKW

Now remove the physical volume from the extended volume group
-------------------------------------------------------------
[root@Apps3 ~]# vgreduce volgrp_1 /dev/sdd2
Removed "/dev/sdd2" from volume group "volgrp_1"

See the size is reduced to 4GB now
----------------------------------

[root@Apps3 ~]# vgdisplay volgrp_1
--- Volume group ---
VG Name volgrp_1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.73 GB
PE Size 4.00 MB
Total PE 954
Alloc PE / Size 0 / 0
Free PE / Size 954 / 3.73 GB
VG UUID zUAAMY-9Z7L-4CsQ-HUKK-p37P-TlvR-XzfUKW

Create a logical volume (size 200MB) now on top of the volume group volgrp_1
------------------------------------------------------

[root@Apps3 ~]# lvcreate -n logvol_1 --size 200M volgrp_1
Logical volume "logvol_1" created

See the structure of LVM just created
------------------------------------

[root@Apps3 ~]# lvdisplay /dev/volgrp_1/logvol_1
--- Logical volume ---
LV Name /dev/volgrp_1/logvol_1
VG Name volgrp_1
LV UUID jwQVYp-0K5H-iJLW-JO3n-CPn4-nnyI-ibglYZ
LV Write Access read/write
LV Status available
# open 0
LV Size 200.00 MB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0

[root@Apps3 ~]# vgdisplay volgrp_1|grep "Total PE"
Total PE 954

Now you are ready to create any FS on this LVM - here we are creating an ext2 FS
---------------------------------------------------------------------------------
[root@Apps3 ~]# mkfs /dev/volgrp_1/logvol_1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
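The mkfs figures can be tied back to the LV size, and the periodic fsck it mentions can be disabled with tune2fs as noted (the tune2fs invocation is shown as a comment since it needs root):

```shell
# 204800 blocks x 1 KB block size = 200 MB, the full size of logvol_1.
blocks=204800
block_size_kb=1
fs_size_mb=$((blocks * block_size_kb / 1024))
echo "Filesystem size: ${fs_size_mb} MB"
# To disable the 32-mount/180-day automatic check (run as root):
#   tune2fs -c 0 -i 0 /dev/volgrp_1/logvol_1
```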

Mount the logical volume
--------------
[root@Apps3 ~]# mount /dev/volgrp_1/logvol_1 /u01
[root@Apps3 ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 3241580 2404772 672144 79% /
/dev/sda1 396623 13530 362612 4% /boot
none 676380 0 676380 0% /dev/shm
/dev/mapper/volgrp_1-logvol_1
198337 1550 186547 1% /u01


All set now to create any Oracle installation on top of this logical volume.
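If the mount should survive a reboot, an /etc/fstab entry along these lines could be added (a sketch only; the mount options and fsck pass number are assumptions, adjust to taste):

```
/dev/volgrp_1/logvol_1  /u01  ext2  defaults  1 2
```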

Hope this helps

Oracle R12 upgrade

Will briefly cover the upgrade of EBUS R11i to R12.

Oracle R12 and Oracle IAS integration

Will post soon....

Oracle R12 Forms Customization

Will post soon...

Oracle LDAP Replication Setup - HA

Will post soon....

SAP ECC 6 - moving to RAC - HA Setup

Will post soon....

Oracle 11G Active DataGuard and Active Duplicate database setup

Will post soon...

Oracle R12 and MAA configuration

Will post soon....

Oracle R12 and RAC implementation - HA setup

Will post soon....

MS SQL Server and Oracle Architecture comparison

Will compare Oracle and MS SQL Server ....

SAP ECC 6 Cloning

Will post soon....

Oracle EBUS R12 HA Setup 2 node Apps with LBR and PCP configuration

Will post soon....

Oracle IAS DataGuard HA Implementation - Oracle IAS 10G R2 10.1.2.0.2

I searched Google for help on this setup but couldn't find anything, not even a single post. Really surprising.
Anyhow, this configuration has proved very useful, and there are no more worries about IAS configuration corruption issues.

Installation overview:
---------------------

1. Set up the virtual server name for both nodes. I used Red Hat Cluster Suite for this purpose.
2. Install IAS (Infrastructure only) on the first node (ias1)
3. Check that all the OPMN components are up and running
4. Back up the entire node1 configuration
5. Copy the portlists.ini file to the second node
6. Install IAS (Infrastructure only) on the second node using the portlists.ini file
7. Check that all the OPMN components are up and running
8. Back up the entire node2 configuration
9. Start the DG control utility (asgctl) on node1
10. Connect as ias_admin and connect to the primary database (node1)
11. Dump the policies
12. Verify the topologies on the primary and secondary instances
13. Instantiate the standby topology on node2. This backs up the configuration files (using the bkup_restore utility) and prepares both instances for the DG setup. It also puts both databases into ARCHIVELOG mode, then shuts down the secondary database and leaves it in mount state (not opened). ASG does all of this for you.
14. Back up the entire system on node1 and node2
15. Test the switchover to node2
16. Test the switchover back to node1

You have completed the IAS DG setup.

This setup includes only the Infrastructure tier and none of the middle tiers such as BI or Portal/Wireless, but the procedure is the same for all configurations.

NOTE - Be cautious while maintaining this environment. A virtual IP conflict may disrupt the startup of the OPMN components, and some OPMN processes may not start up. I had a bad time with this.


On Node1
---------


ASGCTL> connect asg ias_admin/iasadmin1
Successfully connected to IAS1:7892
ASGCTL> discover topology oidhost=infra.ushasuji.com oidsslport=636 oidpassword=iasadmin1
Discovering topology on host "IAS1" with IP address "192.168.1.15"

IAS1:7892
Connecting to the OID server on host "infra.ushasuji.com" using SSL port "636" and username "orcladmin"
Getting the list of databases from OID
Gathering database information for SID "ias" from host "infra"
Getting the list of instances from OID
Gathering instance information for "infra.infra" from host "infra"
The topology has been discovered. A topology.xml file has been written to each home in the topology.

ASGCTL> dump polices

dump policies

Display the topology information

Below is an example of dump topology:

connect ASG host_foo ias_admin/pass
dump topology



ASGCTL> verify topology with rinfra
Generating default policy for this operation

IAS1:7892
HA directory exists for instance infra.infra

ias2:7892
HA directory exists for instance infra.infra

IAS1:7892
Verifying that the topology is symmetrical in both primary and standby configuration
ASGCTL> set trace on all

ASGCTL> instantiate topology to rinfra.ushasuji.com using policy /u01/oracle/product/infra/dsa/conf/verify_policy.xml

IAS1:7892
Instantiating each instance in the topology to standby topology
HA directory exists for instance infra.infra

ias2:7892
HA directory exists for instance infra.infra

IAS1:7892
Verifying that the topology is symmetrical in both primary and standby configuration

IAS1:7892 (home /u01/oracle/product/infra)
This is primary infrastructure host
Connecting to the primary database ias.ushasuji.com
Gathering information from the primary database ias.ushasuji.com

ias2:7892 (home /u01/oracle/product/infra)
Shutting down each instance in the topology
Shutting down component OID
Shutting down component HTTP_Server
Shutting down component OC4J
Shutting down component dcm-daemon
Shutting down component LogLoader
This is standby infrastructure host
Deleting the standby database ias.ushasuji.com
Shutting down the standby database ias.ushasuji.com
Issuing "shutdown immediate;" to shutdown the database

IAS1:7892 (home /u01/oracle/product/infra)
Creating a standby template

IAS1:7892
Connecting to the primary database ias.ushasuji.com
Gathering information from the primary database ias.ushasuji.com
Creating physical standby database - prepare phase
Setting db and log file name convert
*** The prepare phase was run previously. Redoing. ***

ias2:7892
Setting db and log file name convert

IAS1:7892
Ensuring database "ias.ushasuji.com" is in ARCHIVELOG mode.
Querying primary database for data files.
Creating Standby database parameter file "/u01/oracle/product/infra/dbs/tmp_initias.ora".

ias2:7892
Creating Standby database instance "ias".
Verifying datafile location on standby host.
Updating net service entry for "ias.ushasuji.com" in tnsnames file.
Updating net service listener entry for "ias.ushasuji.com" in listener file.
Updating net service entry for "ias_remote1.ushasuji.com" in tnsnames file.

IAS1:7892
Successfully completed Prepare task for Create Physical Standby.
Creating physical standby database - copy phase
*** The copy phase was run previously. Redoing. ***

ias2:7892
Checking if standby database is running

IAS1:7892
Querying primary database for data files.
Shutting down the primary database
This operation requires the database to be shutdown. Do you want to continue? Yes or No
yes
Issuing "shutdown immediate;" to shutdown the database
Issuing "startup mount ;" to start the database
Creating Standby database control file "/u01/oracle/product/infra/dbs/tmp_ias.ctl".
Shutting down the primary database
Issuing "shutdown immediate;" to shutdown the database
Copying database datafiles to the standby host
Issuing "startup open ;" to start the database
Successfully completed Copy task for Create Physical Standby.
Creating physical standby database - finish phase
Create Physical Standby:Finish - Init.

ias2:7892
Create Physical Standby:Finish - Init.

IAS1:7892
Create Physical Standby:Finish - Prepare primary.
Saving redo log information for standby server

ias2:7892
Create Physical Standby:Finish - Configure standby.
Creating directories for dump and trace
Connecting to standby database
Creating a spfile for standby database
Starting the standby database
Issuing "startup nomount ;" to start the database
Creating standby redo log
Adding log archive destination to the parameter file
Starting managed recovery
Making sure that log is being applied to standby database

IAS1:7892
Create Physical Standby:Finish - Configure primary.
Verifying access to standby database
Adding log archive destination in the parameter file
Performing a log switch

ias2:7892
Verifying log application

IAS1:7892
Successfully completed Finish task for Create Physical Standby.

IAS1:7892 (home /u01/oracle/product/infra)
Synchronizing topology
Synchronizing each instance in the topology to standby topology
Starting backup of topology ""
Backing up and copying data to the standby topology
Backing up each instance in the topology
Starting backup of instance "infra.infra"
Configuring the backup script
Backing up the instance configuration files. This may take a few minutes
Deleted directory "/u01/oracle/product/infra/dsa/backup".
Copying backup file "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-13_00-38-35.jar" from "IAS1" [192.168.1.15] to "192.168.1.26" at "/u01/oracle/product/infra/dsa/backup/infra.infra/config_bkp_2009-06-13_00-38-35.jar"
Copying backup catalog file /u01/oracle/product/infra/backup_restore/data/catalog.txt from IAS1 [192.168.1.15] to 192.168.1.26
Completed backup of instance "infra.infra"
Starting restore of topology ""
Restoring data to the standby topology
Restoring each instance in the topology

ias2:7892 (home /u01/oracle/product/infra)
Copying backup file "/u01/oracle/product/infra/dsa/backup/infra.infra/config_bkp_2009-06-13_00-38-35.jar" from "192.168.1.26" to "ias2" [192.168.1.16] at "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-13_00-38-35.jar"
Deleting backup file "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-13_00-38-35.jar"
Starting restore of instance "infra.infra"
Configuring the backup script
Restoring the instance configuration files. This may take a few minutes

IAS1:7892 (home /u01/oracle/product/infra)
Starting backup/synchronization of database "ias.ushasuji.com"

ias2:7892 (home /u01/oracle/product/infra)
Starting restore/synchronization of database "ias.ushasuji.com"
Synchronizing topology completed successfully

IAS1:7892
Synchronizing topology completed successfully
ASGCTL>


Switch Over to Node2
----------------------


ASGCTL> connect asg ias_admin/iasadmin1
Successfully connected to IAS1:7892
ASGCTL> set primary database sys/sys2@ias
Checking connection to database ias
ASGCTL> switchover topology to rinfra.ushasuji.com
Generating default policy for this operation

IAS1:7892
Switchover each instance in the topology to standby topology

IAS1:7892 (home /u01/oracle/product/infra)
Connecting to the primary database ias.ushasuji.com
Gathering information from the primary database ias.ushasuji.com

ias2:7892 (home /u01/oracle/product/infra)
Shutting down each instance in the topology
Shutting down component OID
Shutting down component HTTP_Server
Shutting down component OC4J
Shutting down component dcm-daemon
Shutting down component LogLoader

IAS1:7892 (home /u01/oracle/product/infra)
Shutting down each instance in the topology
Shutting down component OID
Shutting down component HTTP_Server
Shutting down component OC4J
Shutting down component dcm-daemon
Shutting down component LogLoader
Synchronizing topology
Synchronizing each instance in the topology to standby topology
Starting backup of topology ""
Backing up and copying data to the standby topology
Backing up each instance in the topology
Starting backup of instance "infra.infra"
Configuring the backup script
Backing up the instance configuration files. This may take a few minutes
Deleted directory "/u01/oracle/product/infra/dsa/backup".
Copying backup file "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-13_01-03-34.jar" from "IAS1" [192.168.1.15] to "192.168.1.26" at "/u01/oracle/product/infra/dsa/backup/infra.infra/config_bkp_2009-06-13_01-03-34.jar"
Copying backup catalog file /u01/oracle/product/infra/backup_restore/data/catalog.txt from IAS1 [192.168.1.15] to 192.168.1.26
Completed backup of instance "infra.infra"
Starting restore of topology ""
Restoring data to the standby topology
Restoring each instance in the topology

ias2:7892 (home /u01/oracle/product/infra)
Copying backup file "/u01/oracle/product/infra/dsa/backup/infra.infra/config_bkp_2009-06-13_01-03-34.jar" from "192.168.1.26" to "ias2" [192.168.1.16] at "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-13_01-03-34.jar"
Deleting backup file "/u01/oracle/product/infra/dsa/backup/config_bkp_2009-06-13_01-03-34.jar"
Starting restore of instance "infra.infra"
Configuring the backup script
Restoring the instance configuration files. This may take a few minutes

IAS1:7892 (home /u01/oracle/product/infra)
Starting backup/synchronization of database "ias.ushasuji.com"

ias2:7892 (home /u01/oracle/product/infra)
Starting restore/synchronization of database "ias.ushasuji.com"
Synchronizing topology completed successfully

IAS1:7892
Synchronizing topology completed successfully

IAS1:7892 (home /u01/oracle/product/infra)
Creating a standby template

IAS1:7892
Connecting to the primary database ias.ushasuji.com
Gathering information from the primary database ias.ushasuji.com
Switching over standby database
Switchover - init.

ias2:7892
Switchover - init.

IAS1:7892
Switchover - primary preparing.
Creating standby redo logs in the primary database

ias2:7892
Switchover - standby preparing.
Stopping Job Queue Scheduler.
Stopping Advanced Queue Time Manager.
Starting managed recovery in the standby database.

IAS1:7892
Switchover - primary processing.
Recycling the primary database because switchover status is SESSIONS ACTIVE
Shutting down the primary database ias.ushasuji.com
Issuing "shutdown immediate;" to shutdown the database
Starting the primary database ias.ushasuji.com
Issuing "startup restrict ;" to start the database
Stopping Job Queue Scheduler in the primary database
Stopping Advanced Queue Time Manager in the primary database
The primary database is ready to switchover
Switching over the primary database to the standby role
Shutting down the old primary database
Issuing "shutdown immediate;" to shutdown the database
Starting up the old primary database as the new standby
Issuing "startup nomount ;" to start the database

ias2:7892
Switchover - standby processing.
Getting the switchover status from the database.
Switching over the standby database to the primary role.
Shutting down the new primary database.
Issuing "shutdown immediate;" to shutdown the database
Starting up the new primary database.
Issuing "startup open ;" to start the database
Switchover - standby finishing.
Enabling the archive destination in the new primary database.
Starting log archiving in the new primary database.

IAS1:7892
Switchover - primary finishing.
Deferring the archive destination in the new standby database.
Enabling managed recovery in the new standby database.

ias2:7892 (home /u01/oracle/product/infra)
Starting each instance in the topology
Starting component OID
Starting component dcm-daemon
Configuring the backup script
Executing restore_config -F DCM-resyncforce option in bkp_restore.pl script
Executing opmnctl startall command

ias2:7892
HA directory exists for instance infra.infra

IAS1:7892
HA directory exists for instance infra.infra

ias2:7892
Verifying that the topology is symmetrical in both primary and standby configuration
ASGCTL>


Verify the IAS components on node 1 - all processes will be down except the DSA process
-----------------------------------

[oracle@IAS1 ~]$ $ORACLE_HOME/opmn/bin/opmnctl status

Processes in Instance: infra.infra
-------------------+--------------------+---------+---------
ias-component | process-type | pid | status
-------------------+--------------------+---------+---------
DSA | DSA | 24993 | Alive
LogLoader | logloaderd | N/A | Down
dcm-daemon | dcm-daemon | N/A | Down
OC4J | OC4J_SECURITY | N/A | Down
HTTP_Server | HTTP_Server | N/A | Down
OID | OID | N/A | Down

Verify the IAS components on node 2
----------------------------------

[oracle@ias2 ~]$ /u01/oracle/product/infra/opmn/bin/opmnctl status

Processes in Instance: infra.infra
-------------------+--------------------+---------+---------
ias-component | process-type | pid | status
-------------------+--------------------+---------+---------
DSA | DSA | 31526 | Alive
LogLoader | logloaderd | N/A | Down
dcm-daemon | dcm-daemon | 12489 | Alive
OC4J | OC4J_SECURITY | 22594 | Alive
HTTP_Server | HTTP_Server | 22596 | Alive
OID | OID | 12385 | Alive


Once the switchover is complete, you can either fail over or switch back to Node1.
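To script this kind of post-switchover health check, the opmnctl status listing can be parsed. A minimal sketch, using the node-2 output above as embedded sample data (in practice you would pipe the live output of `opmnctl status` instead):

```shell
# Count OPMN components reported Alive vs Down in an opmnctl status listing.
# Sample data copied from the node-2 status above; normally this would be
# the live output of $ORACLE_HOME/opmn/bin/opmnctl status.
status='DSA | DSA | 31526 | Alive
LogLoader | logloaderd | N/A | Down
dcm-daemon | dcm-daemon | 12489 | Alive
OC4J | OC4J_SECURITY | 22594 | Alive
HTTP_Server | HTTP_Server | 22596 | Alive
OID | OID | 12385 | Alive'
alive=$(printf '%s\n' "$status" | grep -c 'Alive$')
down=$(printf '%s\n' "$status" | grep -c 'Down$')
echo "Alive: ${alive}, Down: ${down}"
```

On the new primary after switchover, everything except LogLoader should report Alive, as in the sample above.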


Hope this helps.