Thursday, July 30, 2009

Configuring OMF in Oracle 10g

Configuring Oracle 10g for OMF

To use OMF you must set the db_create_file_dest parameter. Once set, tablespace creation becomes super-easy:

SQL> alter system set db_create_file_dest='c:\oracle\oradata\diogenes';

System altered.

SQL> create tablespace test;

Tablespace created.

Now, let's look at the result of this command by querying the dba_data_files view.

FILE_NAME TABLESPACE BYTES
-------------------------------------------------- --------------- ------------
C:\ORACLE\ORADATA\DIOGENES\ORA_TEST_YNJ2K200.DBF TEST 104,857,600


Here we see that OMF created the file as follows:

* File location – c:\oracle\oradata\diogenes

* File name – ora_test_ynj2k200.dbf

* File size – 100 megabytes

The default size is 100 megabytes, but it can be overridden at the command line by specifying a SIZE in the datafile clause without naming a file; if you specify an explicit file name and location, you bypass OMF altogether.
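For example, with db_create_file_dest still set, a larger OMF datafile can be requested by size alone (test2 is an illustrative name):

SQL> create tablespace test2 datafile size 200m;

Tablespace created.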

Oracle also records messages about these OMF operations in the alert log. To find the alert log, check the background_dump_dest directory:

SQL> show parameter background_dump

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest string C:\oracle\admin\diogenes\bdump

Now that we know the location of the alert log, we go to that directory, issue a "dir" command, and see the alert log file, named diogenesALRT.LOG (Figure 1).

Figure 1 – Locating your Oracle alert log

Now, when we check the last few lines of the alert log we see that Oracle 10g has logged our OMF operations:

Wed Jul 31 12:02:30 2002
ALTER SYSTEM SET db_create_file_dest='c:\oracle\oradata\diogenes' SCOPE=BOTH;
Wed Jul 31 12:02:42 2002
create tablespace test
Wed Jul 31 12:02:47 2002
Created Oracle managed file C:\ORACLE\ORADATA\DIOGENES\ORA_TEST_YNJ2K200.DBF
Completed: create tablespace test
Wed Jul 31 12:08:26 2002
drop tablespace test
Wed Jul 31 12:08:26 2002
Deleted Oracle managed file C:\ORACLE\ORADATA\DIOGENES\ORA_TEST_YNJ2K200.DBF
Completed: drop tablespace test

Using OMF with online redo logs

Oracle 10g has also introduced the ability to use OMF with online redo log files. This feature is especially useful and removes the tedium from multiplexing and sizing the redo logs. This is done by setting the db_create_online_log_dest_1 through db_create_online_log_dest_5 parameters. The one through five notation allows you to specify up to five multiplexed copies of the online redo log file.

Because the redo logs are allocated at Oracle database creation time, these parameters must be set in the init.ora file prior to creating the database. When multiplexing, we also want to segregate the online redo logs onto separate disks as protection against disk failure. In this UNIX example, the mount points u01, u02, u03 and u04 all map to different disk spindles.

Sample init.ora file

db_create_online_log_dest_1 = '/u01/oracle/oradata/diogenes'
db_create_online_log_dest_2 = '/u02/oracle/oradata/diogenes'
db_create_online_log_dest_3 = '/u03/oracle/oradata/diogenes'
db_create_online_log_dest_4 = '/u04/oracle/oradata/diogenes'

Now Oracle database creation syntax is greatly simplified. Before, we had to specify the size and location of the redo logs at Oracle database creation time:

create database
"diogenes"
maxinstances 1
maxlogfiles 16
maxloghistory 226
maxlogmembers 2
maxdatafiles 30
noarchivelog
character set "US7ASCII"
SET TIME_ZONE = 'PST'
datafile
'c:\oracle\oradata\system01.dbf' size 246M
logfile
group 1 'c:\oracle\oradata\log01.dbf' size 50K,
group 2 'c:\oracle\oradata\log02.dbf' size 50K,
group 3 'c:\oracle\oradata\log03.dbf' size 50K
;


Now, OMF takes care of the details:

create database
"diogenes"
maxinstances 1
maxlogfiles 16
maxloghistory 226
maxlogmembers 2
maxdatafiles 30
noarchivelog
character set "US7ASCII"
SET TIME_ZONE = 'PST'
datafile
'c:\oracle\oradata\system01.dbf' size 246M
logfile
group 1,
group 2,
group 3
;


Conclusions about OMF


In sum, OMF is great for vendor-packaged applications, where scripts can be shipped to all Oracle customers without regard for their specific file configurations. The downside is that seasoned Oracle database professionals tend not to like OMF's uniform file sizes and obtuse file names.




Configuring Logminer in Oracle for DDL & DML

Tracking DDL Statements

LogMiner automatically builds its own internal dictionary from the source dictionary that you specify when you start LogMiner (either a flat file dictionary, a dictionary in the redo logs, or an online catalog).
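For illustration, here is one way to add a redo log file and start LogMiner with a flat file dictionary (both paths are examples only):

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => -
2 '/u01/oradata/arch_1_123.arc', OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => -
2 '/u01/oradata/dictionary.ora');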

If your source dictionary is a flat file dictionary or a dictionary in the redo logs, you can use the DDL_DICT_TRACKING option to direct LogMiner to track data definition language (DDL) statements. DDL tracking is disabled by default. To enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start LogMiner. For example:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
2 DBMS_LOGMNR.DDL_DICT_TRACKING);

With this option set, LogMiner applies any DDL statements seen in the redo logs to its internal dictionary. For example, to see all the DDLs executed by user SYS, you could issue the following query:

SQL> SELECT USERNAME, SQL_REDO
2 FROM V$LOGMNR_CONTENTS
3 WHERE USERNAME = 'SYS' AND OPERATION = 'DDL';

The information returned might be similar to the following, although the actual output and its formatting will differ on your system.

USERNAME SQL_REDO
SYS ALTER TABLE SCOTT.ADDRESS ADD CODE NUMBER;
SYS CREATE USER KATHY IDENTIFIED BY VALUES 'E4C8B920449B4C32' DEFAULT
TABLESPACE TS1;

Keep the following in mind when you use the DDL_DICT_TRACKING option:

* The DDL_DICT_TRACKING option is not valid with the DICT_FROM_ONLINE_CATALOG option.
* The DDL_DICT_TRACKING option requires that the database be open.

The ability to track DDL statements helps you monitor schema evolution because SQL statements used to change the logical structure of a table (because of DDL operations such as adding or dropping of columns) can be reconstructed. In addition, data manipulation language (DML) operations performed on new tables created after the dictionary was extracted can also be shown.

Only Committed Transactions

When you use the COMMITTED_DATA_ONLY option, only rows belonging to committed transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter out rolled back transactions, transactions that are in progress, and internal operations.

To enable this option, you specify it when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
2 DBMS_LOGMNR.COMMITTED_DATA_ONLY);

When you specify the COMMITTED_DATA_ONLY option, LogMiner groups together all DML operations that belong to the same transaction. Transactions are returned in the order in which they were committed.

If long-running transactions are present in the redo logs being analyzed, use of this option may cause an "Out of Memory" error.

The default is for LogMiner to show rows corresponding to all transactions and to return them in the order in which they are encountered in the redo logs.

For example, suppose you start LogMiner without specifying COMMITTED_DATA_ONLY and you execute the following query:

SQL> SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
2 USERNAME AS usr,
3 SQL_REDO AS SQL_REDO
4 FROM V$LOGMNR_CONTENTS;

The output would be as follows. Both committed and uncommitted transactions are returned and rows from different transactions are interwoven.

XID USR SQL_REDO
1.5.123 SCOTT SET TRANSACTION READ WRITE;
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
VALUES (8782, 'Frost');
1.6.124 KATHY SET TRANSACTION READ WRITE;
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (8839, 'Cummings', '415-321-1234');
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (7934, 'Yeats', '033-334-1234');
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP" ("EMPNO","ENAME")
VALUES (8566, 'Browning');
1.6.124 KATHY COMMIT;
1.7.234 GOUTAM SET TRANSACTION READ WRITE;
1.5.123 SCOTT COMMIT;
1.7.234 GOUTAM INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (8499, 'Emerson', '202-334-1234');

Wednesday, July 29, 2009

Quick Switchover with Physical Standby Database

Once the standby database is set up using Data Guard and works properly, you may want to test switchover, or perform switchover to reduce primary database downtime during OS upgrades or hardware upgrades.

A switchover allows the primary database to switch roles with its standby database. There is no data loss during a switchover. You can switch back to the original Primary database later by performing another switchover.

In case of primary database failure, you will need to perform failover to transition the standby database to the primary role. After a failover, the original primary database can no longer participate in the Data Guard configuration. So if the original Primary database is still accessible, you should always consider a switchover first.

This document only covers switchover involving a physical standby database. In this example, the original primary database is called PRIM and the original standby database is called STAN.

I. Before Switchover:

1. As I always recommend, test the Switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following command on both the Primary and Standby databases to find out:
SQL>select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.

In order to apply redo data to the standby database as soon as it is received, use Real-time apply.
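As an additional sanity check, you can query SWITCHOVER_STATUS in V$DATABASE on the primary; it normally reports TO STANDBY (or SESSIONS ACTIVE) when the database is ready to change roles:

SQL>select switchover_status from v$database;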

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL>connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role.
Open another prompt and connect via SQL*Plus:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g Release 1, you will have to shut down and restart the new primary database STAN:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new Primary database STAN:
SQL>ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM.
SQL>ALTER SYSTEM SWITCH LOGFILE;

Reference:
1. Oracle Data Guard Concepts and Administration 10g Release 2 (10.2), B14239-04;
2. Switchover and Failover Best Practices: Oracle Data Guard 10g release 2 by Oracle

Resizing / Recreating Online Redo Log Files

One of the best ways I have found to resize or recreate online redo log files while keeping the current sequence is to do it online. In this example, we will resize all online redo logs from 100MB to 250MB while the database is running, using SQL*Plus to drop and recreate them in stages.

Before looking at the tasks involved to perform the resize, let's look at the current online redo log groups and their sizes:

SQL> SELECT a.group#, a.member, b.bytes
2 FROM v$logfile a, v$log b WHERE a.group# = b.group#;

GROUP# MEMBER BYTES
---------- ---------------------------------------- ------------
1 /u03/app/oradata/ORA920/redo_g01a.log 104,857,600
1 /u04/app/oradata/ORA920/redo_g01b.log 104,857,600
1 /u05/app/oradata/ORA920/redo_g01c.log 104,857,600
2 /u03/app/oradata/ORA920/redo_g02a.log 104,857,600
2 /u04/app/oradata/ORA920/redo_g02b.log 104,857,600
2 /u05/app/oradata/ORA920/redo_g02c.log 104,857,600
3 /u03/app/oradata/ORA920/redo_g03a.log 104,857,600
3 /u04/app/oradata/ORA920/redo_g03b.log 104,857,600
3 /u05/app/oradata/ORA920/redo_g03c.log 104,857,600

9 rows selected.

Now let's take a look at the steps involved to resize / recreate all online redo log groups:


  1. Make the last redo log CURRENT

    Force a log switch until the last redo log is marked "CURRENT" by issuing the following command:

    SQL> select group#, status from v$log;

    GROUP# STATUS
    ---------- ----------------
    1 CURRENT
    2 INACTIVE
    3 INACTIVE

    SQL> alter system switch logfile;

    SQL> alter system switch logfile;

    SQL> select group#, status from v$log;

    GROUP# STATUS
    ---------- ----------------
    1 INACTIVE
    2 INACTIVE
    3 CURRENT

  2. Drop first redo log

    After making the last online redo log file the CURRENT one, drop the first online redo log group (if the drop fails because the group is still active, see the note after step 5):

    SQL> alter database drop logfile group 1;

    Database altered.
  3. Re-create dropped online redo log group

    Re-create the dropped redo log group with different size (if desired):

    SQL> alter database add logfile group 1 (
    2 '/u03/app/oradata/ORA920/redo_g01a.log',
    3 '/u04/app/oradata/ORA920/redo_g01b.log',
    4 '/u05/app/oradata/ORA920/redo_g01c.log') size 250m reuse;

    Database altered.

  4. Force another log switch

    After re-creating the online redo log group, force a log switch. The online redo log group just created should become the "CURRENT" one:

    SQL> select group#, status from v$log;

    GROUP# STATUS
    ---------- ----------------
    1 UNUSED
    2 INACTIVE
    3 CURRENT

    SQL> alter system switch logfile;

    SQL> select group#, status from v$log;

    GROUP# STATUS
    ---------- ----------------
    1 CURRENT
    2 INACTIVE
    3 ACTIVE

  5. Loop back to Step 2 until all logs are rebuilt

    After re-creating an online redo log group, continue to re-create (or resize) all online redo log groups until all of them are rebuilt.
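Note: if the drop in step 2 fails because the group is still marked ACTIVE, force a checkpoint and re-check v$log before retrying:

SQL> alter system checkpoint;

SQL> select group#, status from v$log;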


After rebuilding (resizing) all online redo log groups, here is a snapshot of all physical files:

SQL> SELECT a.group#, a.member, b.bytes
2 FROM v$logfile a, v$log b WHERE a.group# = b.group#;

GROUP# MEMBER BYTES
---------- ---------------------------------------- ------------
1 /u03/app/oradata/ORA920/redo_g01a.log 262,144,000
1 /u04/app/oradata/ORA920/redo_g01b.log 262,144,000
1 /u05/app/oradata/ORA920/redo_g01c.log 262,144,000
2 /u03/app/oradata/ORA920/redo_g02a.log 262,144,000
2 /u04/app/oradata/ORA920/redo_g02b.log 262,144,000
2 /u05/app/oradata/ORA920/redo_g02c.log 262,144,000
3 /u03/app/oradata/ORA920/redo_g03a.log 262,144,000
3 /u04/app/oradata/ORA920/redo_g03b.log 262,144,000
3 /u05/app/oradata/ORA920/redo_g03c.log 262,144,000

9 rows selected.


Tuesday, March 24, 2009

FINDING DEADLOCK INFORMATION

col username form A15
col sid form 9990
col type form A4
col lmode form 990
col request form 990
col id1 form 9999990
col id2 form 9999990
break on id1 skip 1 dup
spool check_lock.lis
SELECT sn.username, m.sid, m.type,
       DECODE(m.lmode, 0, 'None', 1, 'Null', 2, 'Row Share', 3, 'Row Excl.',
              4, 'Share', 5, 'S/Row Excl.', 6, 'Exclusive',
              lmode, LTRIM(TO_CHAR(lmode,'990'))) lmode,
       DECODE(m.request, 0, 'None', 1, 'Null', 2, 'Row Share', 3, 'Row Excl.',
              4, 'Share', 5, 'S/Row Excl.', 6, 'Exclusive',
              request, LTRIM(TO_CHAR(request,'990'))) request,
       m.id1, m.id2
FROM   v$session sn, v$lock m
WHERE  (sn.sid = m.sid AND m.request != 0)
   OR  (sn.sid = m.sid AND m.request = 0 AND lmode != 4
        AND (id1, id2) IN (SELECT s.id1, s.id2
                           FROM   v$lock s
                           WHERE  request != 0
                           AND    s.id1 = m.id1
                           AND    s.id2 = m.id2))
ORDER BY id1, id2, m.request;
spool off
clear breaks
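Once the blocking session is identified, it can be terminated; the sid and serial# below are illustrative:

SQL> alter system kill session '123,4567';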

WRITING TO A TEXT FILE FROM PL/SQL

create or replace procedure Cursos_Loading(p_nothing IN integer) is
  v_filehandle utl_file.file_type;
begin
  v_filehandle := utl_file.fopen('JAGAT', 'ABC.TXT', 'w', 4000);
  for r in 1..10 loop
    utl_file.put_line(v_filehandle, 'LINE # : ' || r);
  end loop;
  utl_file.fclose(v_filehandle);
end Cursos_Loading;
/
EXEC Cursos_Loading(10);
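Note that the first argument to utl_file.fopen is a directory object name (or, on older releases, a directory listed in utl_file_dir). A minimal setup sketch, with an assumed path:

SQL> create or replace directory JAGAT as 'C:\JAGAT';
SQL> grant read, write on directory JAGAT to scott;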

Dynamic Script to Kill Inactive Sessions

select 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''';'
from v$session
where username not in ('SYS','SYSTEM')
  and trunc(logon_time) <= '&DT'
  and status = 'INACTIVE';

---- FOR LOCKING SESSIONS
select 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''';'
from v$session
where sid in (select session_id
              from dba_ddl_locks
              where owner not in ('SYS','SYSTEM'))
/
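Each row returned is a ready-to-run command; spool the output and execute it. A sample generated line (sid and serial# illustrative):

ALTER SYSTEM KILL SESSION '123,4567';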

Generate SQL*Loader Control File for Any Table

set verify off
set heading off
set embedded on
set recsep off
set pages 500
set lines 80
set feedback off
define &&1 = ''
spool c:\your_path_name\&&2..ctl
select 'load data' || chr(10) ||
       'infile ''' || 'C:\your_path_name\' || upper('&2') || '.CSV''' || chr(10) ||
       'into table ' || upper('&1') || '.' || upper('&2') || chr(10) ||
       'replace' || chr(10) ||
       'fields terminated by '','' ' || chr(10) ||
       'trailing nullcols'
from all_tables
where owner = upper('&1') and table_name = upper('&2')
/
select decode(a.column_id, '1', '(') || a.column_name ||
       (case when a.column_id < max(b.column_id) then ',' else ')' end)
from all_tab_columns a, all_tab_columns b
where a.owner = upper('&1') and a.table_name = upper('&2')
  and b.owner = upper('&1') and b.table_name = upper('&2')
group by a.column_name, a.column_id
order by a.column_id
/
spool off
set verify on
set heading on
set embedded off
set recsep wrapped
set pages 64
set lines 80
set feedback 6
undefine 1
undefine 2
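Saved as a script (say gen_ctl.sql -- the name is my assumption), it is run with the owner and table name as arguments:

SQL> @gen_ctl DEVCPOS ITEMMASTER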

Invoking EXPDP from PL/SQL

One can invoke Data Pump from PL/SQL -- this can be handy for scheduling a daily or weekly export with DBMS_SCHEDULER.
DECLARE
  hand NUMBER;
BEGIN
  hand := dbms_datapump.open(operation => 'EXPORT',
                             job_mode  => 'FULL',
                             job_name  => 'FULLEXPJOB',
                             version   => 'COMPATIBLE');
  dbms_datapump.add_file(handle => hand, filename => 'expdp_plsql.log',
                         directory => 'JAGAT',  filetype => 3);  -- 3 = log file
  dbms_datapump.add_file(handle => hand, filename => 'expdp_plsql.dmp',
                         directory => 'DMPDIR', filetype => 1);  -- 1 = dump file
  dbms_datapump.start_job(hand);  -- the job must be started explicitly
END;
/
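To schedule it, one option is to wrap the block above in a stored procedure and create a DBMS_SCHEDULER job; the job and procedure names below are assumptions:

BEGIN
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_EXPDP',        -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'RUN_FULL_EXPDP',       -- procedure wrapping the export block above
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',  -- every day at 02:00
    enabled         => TRUE);
END;
/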

EXPDP/IMPDP

CREATE DIRECTORY DIR AS '/opt/oracle';
GRANT read, write ON DIRECTORY DIR TO scott;
C:\> expdp DEVCPOS/DEVCPOS DIRECTORY=DIR DUMPFILE=DEVCPOS.DMP
C:\> impdp SYSTEM/SYSTEM DIRECTORY=DIR DUMPFILE=DEVCPOS.DMP
C:\> impdp QACPOS/QACPOS DIRECTORY=DIR
C:\> expdp scott/tiger schemas=SCOTT directory=SCOTT_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
C:\> impdp TEST/TEST@ORCL DIRECTORY=TSTDIR DUMPFILE=BACKUPSCOTT.DMP SCHEMAS=SCOTT REMAP_SCHEMA=SCOTT:TEST

oradim

1: oradim -new -sid PMHIST -startmode auto -PFILE D:\oracle\product\10.2.0\admin\PMHIST\pfile\initPMHIST.ora
2: oradim -new -sid PMOBILE -startmode auto -PFILE D:\oracle\product\10.2.0\admin\PMOBILE\pfile\initPMOBILE.ora
3: oradim -new -sid JAGAT -intpwd password -startmode a -pfile C:\oraclexe\app\oracle\admin\Jagat\pfile\initJAGATemp.ora
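To remove a service created this way (instance name illustrative):

oradim -delete -sid JAGAT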

RMAN: Cloning

Let's assume a source database named SRCDB and a clone (auxiliary) database named GEMINI. A UNIX-like environment is assumed, but the procedure can be implemented on Windows as well; just beware of the particulars of the Oracle implementation on a Windows platform (orapwd file name, service creation, path format).
1. Create a password file for the Cloned (GEMINI) instance:
orapwd file=/u01/app/oracle/product/9.2.0.1.0/dbs/orapwGEMINI password=password entries=10
2. Configure tnsnames.ora and listener.ora
Properly identify the database in tnsnames.ora and manually register the instance in listener.ora; both files are located in the $ORACLE_HOME/network/admin directory.
2.a Manually register the database against the listener (listener.ora):
(SID_DESC =
  (ORACLE_HOME = /u01/app/oracle/product/9.2.0.1.0)
  (SID_NAME = GEMINI)
)

2.b Add the target GEMINI to tnsnames.ora:
GEMINI =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myhost.mydomain.com)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = GEMINI))
  )

2.c Reload the listener
lsnrctl reload
3. Create a new init.ora for the cloned database.
Next create an init.ora file for the cloned database. If the same paths cannot be used on the target host, either because it is the same host as the source or because those paths do not exist on the target, then DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT must be defined:
DB_NAME=GEMINI
CONTROL_FILES=(/u02/oradata/GEMINI/control01.ctl,
               /u02/oradata/GEMINI/control02.ctl,
               /u02/oradata/GEMINI/control03.ctl)
# Convert file names to allow for a different directory structure.
DB_FILE_NAME_CONVERT=(/u02/oradata/SRCDB/,/u02/oradata/GEMINI/)
LOG_FILE_NAME_CONVERT=(/u01/oradata/SRCDB/,/u01/oradata/GEMINI/)
# The block size and compatible parameters must match those of the source database.
DB_BLOCK_SIZE=8192
COMPATIBLE=9.2.0.0.0

4. Connect to the cloned instance
ORACLE_SID=GEMINI; export ORACLE_SID
sqlplus /nolog
conn / as sysdba
5. Create an SPFILE based on the init.ora
CREATE SPFILE FROM PFILE='/u01/app/oracle/admin/GEMINI/pfile/init.ora';
6. Start the database in NOMOUNT mode:
STARTUP FORCE NOMOUNT;
7. Connect to the TARGET, CATALOG and AUXILIARY databases.
Using RMAN, three connections are opened: one to the source database (SRCDB), one to the catalog database (RCAT), and one to the cloned database (GEMINI):
ORACLE_SID=GEMINI; export ORACLE_SID
rman TARGET sys/password@SRCDB CATALOG rman/rman@RCAT AUXILIARY /

8. Complete or incomplete clone (recover)
From RMAN, duplicate the database using one of the following commands:

8.a Clone the database by means of a complete recovery:
DUPLICATE TARGET DATABASE TO GEMINI;

8.b Clone the database up to a defined point in time in the past by means of an incomplete recovery:
DUPLICATE TARGET DATABASE TO GEMINI UNTIL TIME 'SYSDATE-2';

9. Process finished.
Once the process is finished, the newly created GEMINI database is ready to be used as an independent cloned database.
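A quick check that you are connected to the new clone (DUPLICATE assigns the clone its own DBID):

SQL> select name, dbid from v$database;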

Steps for creating External Tables

1: Create Directory
CREATE DIRECTORY JAGAT_DIR AS 'C:\BALA';
GRANT READ ON DIRECTORY JAGAT_DIR TO PUBLIC;

2: Create a text file demo.txt in JAGAT_DIR:
7369,SMITH,CLERK,20
7499,ALLEN,SALESMAN,30
7521,WARD,SALESMAN,30
7566,JONES,MANAGER,20
7654,MARTIN,SALESMAN,30

3: Create the table:
CREATE TABLE ext_tab (
  empno  CHAR(4),
  ename  CHAR(20),
  job    CHAR(20),
  deptno CHAR(2))
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY JAGAT_DIR
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    BADFILE 'bad_%a_%p.bad'
    LOGFILE 'log_%a_%p.log'
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    (empno, ename, job, deptno))
  LOCATION ('demo.txt')
)
PARALLEL
REJECT LIMIT 0
NOMONITORING;
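Once created, the flat file can be queried like any other table:

SQL> select * from ext_tab;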

Trigger for grant

The trigger below prevents object grants to PUBLIC:

CREATE OR REPLACE TRIGGER ddl_trig
BEFORE GRANT ON DATABASE
DECLARE g_list dbms_standard.ora_name_list_t;
n PLS_INTEGER;
BEGIN
n := ora_grantee(g_list);
FOR i IN 1..n LOOP
IF g_list(i) = 'PUBLIC' THEN
RAISE_APPLICATION_ERROR(-20997,'Public Grants Not Allowed');
END IF;
END LOOP;
END;
/
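With the trigger in place, a grant to PUBLIC should now fail with something like:

SQL> grant select on scott.emp to public;
...
ORA-20997: Public Grants Not Allowed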

Creating Inter Schema Foreign Key

1: Requires the REFERENCES privilege on the referenced table:
GRANT REFERENCES (employee_id), UPDATE (employee_id, salary, commission_pct)
ON hr.employees
TO JAGAT;

2: Create the table with the foreign key:
CREATE TABLE dependent (
  dependno   NUMBER,
  dependname VARCHAR2(10),
  employee   NUMBER CONSTRAINT in_emp REFERENCES hr.employees(employee_id));

Extracting Metadata

For metadata of all tables in a particular schema:
select dbms_metadata.get_ddl('TABLE', u.table_name) from user_tables u;

For metadata of a single table:
select dbms_metadata.get_ddl('TABLE','XYZ','JAGAT') from dual; -> where XYZ is the table name and JAGAT is the owner.

For metadata of all indexes in a particular schema:
select dbms_metadata.get_ddl('INDEX', u.index_name) from user_indexes u;

For metadata of a single index:
select dbms_metadata.get_ddl('INDEX','XYZ','JAGAT') from dual; -> where XYZ is the index name and JAGAT is the owner.
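DBMS_METADATA.GET_DDL returns a CLOB, so widen the SQL*Plus display before running any of these:

SQL> set long 100000
SQL> set pagesize 0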

Getting User Script:
select 'create user ' || u.username || ' identified ' ||
       decode(password, null, 'EXTERNALLY',
              ' by values ' || '''' || password || '''') || chr(10) ||
       'default tablespace ' || default_tablespace || chr(10) ||
       'temporary tablespace ' || temporary_tablespace || chr(10) ||
       ' profile ' || profile || chr(10) ||
       'quota ' || decode(q.max_bytes, -1, 'UNLIMITED', null, 'UNLIMITED', q.max_bytes) ||
       ' on ' || default_tablespace ||
       decode(account_status, 'LOCKED', ' account lock',
              'EXPIRED', ' password expire',
              'EXPIRED & LOCKED', ' account lock password expire',
              null) || ';'
from dba_users u, dba_ts_quotas q
-- Comment this clause out to include system & default users
where u.username not in ('SYS','SYSTEM','SCOTT','DBSNMP','OUTLN','WKPROXY','WMSYS',
      'ORDSYS','ORDPLUGINS','MDSYS','CTXSYS','XDB','ANONYMOUS','OWNER','WKSYS',
      'ODM_MTR','ODM','OLAPSYS','HR','OE','PM','SH','QS_ADM','QS','QS_WS','QS_ES',
      'QS_OS','QS_CBADM','QS_CB','QS_CS','PERFSTAT')
  and u.username = q.username(+)
  and u.default_tablespace = q.tablespace_name(+);

Getting Tablespace Script:

select 'create tablespace ' || df.tablespace_name || chr(10) ||
       ' datafile ''' || df.file_name || ''' size ' || df.bytes ||
       decode(autoextensible, 'N', null,
              chr(10) || ' autoextend on maxsize ' || maxbytes) || chr(10) ||
       'default storage ( initial ' || initial_extent ||
       decode(next_extent, null, null, ' next ' || next_extent) ||
       ' minextents ' || min_extents ||
       ' maxextents ' || decode(max_extents, '2147483645', 'unlimited', max_extents) ||
       ') ;'
from dba_data_files df, dba_tablespaces t
where df.tablespace_name = t.tablespace_name;

File copy from PL/SQL

BEGIN
  dbms_file_transfer.copy_file(
    source_directory_object      => 'DIR',
    source_file_name             => 'SAMPLE.TXT',
    destination_directory_object => 'BALA',
    destination_file_name        => 'JAGAT.TXT');
END;
/
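Both arguments are directory objects, which must exist beforehand (the paths here are illustrative). Also note that copy_file only handles binary files whose size is a multiple of 512 bytes:

SQL> create directory DIR as '/u01/src';
SQL> create directory BALA as '/u01/dst';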

Manually Creating a Physical Standby Database


II. On the Primary Database Side:
1. Enable forced logging on your primary database:

SQL> ALTER DATABASE FORCE LOGGING;
2. Create a password file if it doesn’t exist.

1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;
2) If it doesn’t exist, use the following command to create one:
- On Windows:
$ cd <ORACLE_HOME>\database
$ orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: Replace <ORACLE_HOME> with the actual Oracle home path, and xxxxxxxx with the password for the SYS user.)
- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)
3. Configure a Standby Redo log.

1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;
BYTES
----------
52428800
52428800
52428800
2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
3) Create standby Redo log groups.
My primary database had 3 log file groups originally and I created 3 standby redo log groups using the following commands:
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;
4. Enable Archiving on Primary.

If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;
5. Set Primary Database Initialization Parameters

Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.
1) Create a pfile from the spfile for the primary database:
- On Windows:
SQL>create pfile='<ORACLE_HOME>\database\pfilePRIM.ora' from spfile;
- On UNIX:
SQL>create pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora' from spfile;
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)
2) Edit this pfile to add the new primary and standby role parameters: (Here the file paths are from a windows system. For UNIX system, specify the path accordingly)
db_name=PRIM
db_unique_name=PRIM
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1=
'LOCATION=F:\Oracle\flash_recovery_area\PRIM\ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_2=
'SERVICE=STAN LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=AUTO
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE'
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG'
(Note: For DB_FILE_NAME_CONVERT, specify the location of the standby DB datafiles followed by the primary location;
for LOG_FILE_NAME_CONVERT, specify the location of the standby DB online redo log files followed by the primary location.)
6. Create spfile from pfile, and restart primary database using the new spfile.

Data Guard must use SPFILE. Create the SPFILE and restart database.
- On Windows:

SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIM.ora';
SQL>create spfile from pfile='<ORACLE_HOME>\database\pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup;
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)

- On Unix:

SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
SQL>create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup;
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)
III. On the Standby Database Side:
1. Create a copy of Primary database data files on the Standby Server:

On Primary DB:
SQL>shutdown immediate;
On Standby Server (While the Primary database is shut down):
1) Create directory for data files, for example, on windows, E:\oracle\product\10.2.0\oradata\STAN\DATAFILE.
On Unix, create the directory accordingly.
2) Copy the data files and temp files over.
3) Create directory (multiplexing) for online logs, for example, on Windows, E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG and F:\Oracle\flash_recovery_area\STAN\ONLINELOG.
On Unix, create the directories accordingly.
4) Copy the online logs over.
2. Create a Control File for the standby database:

On Primary DB, create a control file for the standby to use:
SQL>startup mount;
SQL>alter database create standby controlfile as 'STAN.ctl';
SQL>ALTER DATABASE OPEN;
3. Copy the Primary DB pfile to Standby server and rename/edit the file.

1) Copy pfilePRIM.ora from Primary server to Standby server, to database folder on Windows or dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTAN.ora and modify the file as follows (the file paths here are from a Windows system; for UNIX, specify the paths accordingly):

*.audit_file_dest='E:\oracle\product\10.2.0\admin\STAN\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STAN\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STAN\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STAN\udump'
*.compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STAN\CONTROLFILE\STAN.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STAN\CONTROLFILE\STAN.CTL'
db_name='PRIM'
db_unique_name=STAN
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1=
'LOCATION=F:\Oracle\flash_recovery_area\STAN\ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_2=
'SERVICE=PRIM LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIM
FAL_CLIENT=STAN
remote_login_passwordfile='EXCLUSIVE'
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE','E:\oracle\product\10.2.0\oradata\STAN\DATAFILE'
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_FILE_MANAGEMENT=AUTO
(Note: Not all the parameter entries are listed here.
For DB_FILE_NAME_CONVERT - Specify the location of the primary DB datafiles followed by the standby location;
For LOG_FILE_NAME_CONVERT - Specify the location of the primary DB online redo log files followed by the standby location)
4. On Standby server, create all required directories for dump and archived log destination:

Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.
5. Copy the standby control file 'STAN.ctl' from the primary server to the standby control file destinations.

6. Copy the Primary password file to standby and rename it to pwdSTAN.ora.

On Windows copy it to the \database folder, and on UNIX to the /dbs directory, under the Oracle home; then rename the password file.
7. On Windows, create a Windows service (optional):

$oradim -NEW -SID STAN -STARTMODE manual
8. Configure listeners for the primary and standby databases.

1) On the Primary system: use Oracle Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
2) On Standby server: use Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
9. Create Oracle Net service names.

1) On Primary system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
2) On Standby system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
10. On Standby server, setup the environment variables to point to the Standby database.

Set up ORACLE_HOME and ORACLE_SID.
11. Start up nomount the standby database and generate a spfile.

- On Windows:
SQL>startup nomount pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
SQL>create spfile from pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup mount;
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)
- On Unix:
SQL>startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';
SQL>create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup mount;
(Note: specify your Oracle home path in place of <ORACLE_HOME>.)
12. Start Redo apply

1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
13. Verify the standby database is performing properly:

1) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;
2) On Primary, force a logfile switch:
SQL>alter system switch logfile;
3) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;
14. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable the real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;
15. To create multiple standby databases, repeat this procedure.

IV. Maintenance:
1. Check the alert logs of the Primary and Standby databases frequently to monitor database operations in a Data Guard environment.
2. Clean up the archived logs on the Primary and Standby servers.
I scheduled a weekly hot whole-database backup of my primary database that also backs up and deletes the archived logs on the Primary.
For the standby database, I run RMAN once per week to back up and delete the archived logs:
$rman target /@STAN
RMAN>backup archivelog all delete input;
3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won’t be shipped to the standby server.
Refer to section II.2, step 2 to update/recreate password file for the Standby database.
Reference: Oracle Data Guard Concepts and Administration 10g Release 2 (10.2)