Wednesday, November 1, 2023
Here are the steps to configure an Azure DevOps (ADO) Pipeline:
1. Sign in to your Azure DevOps account and navigate to your project.
2. Go to the Pipelines section and click on "New pipeline" or "Create pipeline" (depending on the version of Azure DevOps you are using).
3. Select your source code repository where your application code is stored, such as GitHub, Azure Repos, Bitbucket, etc.
4. Choose a template for your pipeline. Templates provide a starting point for configuring a pipeline based on common scenarios like building a .NET application, deploying to Azure, etc.
5. Customize the pipeline configuration according to your requirements. This includes specifying the branch or branches you want to build from, defining triggers for automatic builds (such as when changes are pushed to specific branches), and choosing the type of build agent you want to use.
6. Configure the build steps/tasks by adding tasks to your pipeline stages. These tasks can include actions like restoring dependencies with package managers, compiling code, running tests, and creating artifacts.
7. Set up any additional settings or variables required for your build process. For example, you may need environment-specific configurations or authentication keys that need to be securely stored in Azure Key Vault.
8. Define deployment stages if needed. This involves specifying where and how you want to deploy your application once it is built successfully.
9. Configure continuous integration (CI) and continuous deployment (CD) options if desired. CI automatically triggers builds whenever changes are pushed to the source code repository, while CD automates deployments once a build completes successfully.
10. Review and save your pipeline configuration settings.
11. Run a test build of your newly created pipeline by selecting "Run" or "Queue", depending on the version of Azure DevOps you are using.
12. Monitor the logs during the build and resolve any issues that arise.
13. Once everything works as expected, your pipeline is ready for continuous integration and deployment.
These steps outline the general process of configuring an Azure DevOps Pipeline. The specific details and options may vary depending on your project requirements and the tools you are using.
Thursday, July 30, 2009
Configuring OMF in Oracle 10g
To use OMF you must set the db_create_file_dest parameter. Once set, tablespace creation becomes super-easy:
SQL> alter system set db_create_file_dest='c:\oracle\oradata\diogenes\';
System altered.
SQL> create tablespace test;
Tablespace created.
Now, let's look at the result of this command by querying the dba_data_files view.
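A query along the following lines produces the listing shown below (just a sketch; the column formatting commands are optional):

SQL> col file_name format a50
SQL> col tablespace_name heading 'TABLESPACE' format a15
SQL> col bytes format 999,999,999
SQL> select file_name, tablespace_name, bytes
  2  from dba_data_files
  3  where tablespace_name = 'TEST';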
FILE_NAME TABLESPACE BYTES
-------------------------------------------------- --------------- ------------
C:\ORACLE\ORADATA\DIOGENES\ORA_TEST_YNJ2K200.DBF TEST 104,857,600
Here we see that OMF created the file as follows:
* File location – c:\oracle\oradata\diogenes
* File name – ora_test_ynj2k200.dbf
* File size – 100 megabytes
The datafile is created with a default size of 100 megabytes. You can override the default by adding a SIZE clause to the DATAFILE clause; if you instead specify an explicit file name and location in the DATAFILE clause, the file is no longer Oracle-managed.
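For example (a sketch, assuming db_create_file_dest is still set as above):

SQL> create tablespace test2 datafile size 200m;

Because no file name is given, the datafile is still Oracle-managed; only its size differs from the 100 megabyte default.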
We also see that Oracle has enhanced the Oracle 10g alert log to display messages about our operation. To see the alert log, we must go to the directory specified by the background_dump_dest parameter:
SQL> show parameter background_dump
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest string C:\oracle\admin\diogenes\bdump
Now that we know the location of the alert log, we go to that directory, issue a “dir” command, and we can see the alert log file, named diogenesALRT.LOG (Figure 1).
Figure 1 – Locating your Oracle alert log
Now, when we check the last few lines of the alert log we see that Oracle 10g has logged our OMF operations:
Wed Jul 31 12:02:30 2002
ALTER SYSTEM SET db_create_file_dest='c:\oracle\oradata\diogenes' SCOPE=BOTH;
Wed Jul 31 12:02:42 2002
create tablespace test
Wed Jul 31 12:02:47 2002
Created Oracle managed file C:\ORACLE\ORADATA\DIOGENES\ORA_TEST_YNJ2K200.DBF
Completed: create tablespace test
Wed Jul 31 12:08:26 2002
drop tablespace test
Wed Jul 31 12:08:26 2002
Deleted Oracle managed file C:\ORACLE\ORADATA\DIOGENES\ORA_TEST_YNJ2K200.DBF
Completed: drop tablespace test
Using OMF with online redo logs
Oracle 10g has also introduced the ability to use OMF with online redo log files. This feature is especially useful and removes the tedium from multiplexing and sizing the redo logs. This is done by setting the db_create_online_log_dest_1 through db_create_online_log_dest_5 parameters. The one through five notation allows you to specify up to five multiplexed copies of the online redo log file.
Because the redo logs are allocated at Oracle database creation time, these parameters must be set in the init.ora file prior to creating the database. When multiplexing, we also want to segregate the online redo logs onto separate disks as protection against disk failure. In this UNIX example, the mount points u01, u02, u03 and u04 all map to different disk spindles.
Sample init.ora file
db_create_online_log_dest_1 = '/u01/oracle/oradata/diogenes'
db_create_online_log_dest_2 = '/u02/oracle/oradata/diogenes'
db_create_online_log_dest_3 = '/u03/oracle/oradata/diogenes'
db_create_online_log_dest_4 = '/u04/oracle/oradata/diogenes'
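Once the instance is started, the settings can be verified with a quick sketch like this:

SQL> show parameter db_create_online_log_dest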
Now Oracle database creation syntax is greatly simplified. Before, we had to specify the size and location of the redo logs at Oracle database creation time:
create database
"diogenes"
maxinstances 1
maxlogfiles 16
maxloghistory 226
maxlogmembers 2
maxdatafiles 30
noarchivelog
character set "US7ASCII"
SET TIME_ZONE = 'PST'
datafile
'c:\oracle\oradata\system01.dbf' size 246M
logfile
group 1 'c:\oracle\oradata\log01.dbf' size 50K,
group 2 'c:\oracle\oradata\log02.dbf' size 50K,
group 3 'c:\oracle\oradata\log03.dbf' size 50K
;
Now, OMF takes care of the details:
create database
"diogenes"
maxinstances 1
maxlogfiles 16
maxloghistory 226
maxlogmembers 2
maxdatafiles 30
noarchivelog
character set "US7ASCII"
SET TIME_ZONE = 'PST'
datafile
'c:\oracle\oradata\system01.dbf' size 246M
logfile
group 1,
group 2,
group 3
;
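OMF also simplifies adding a redo log group after the database exists. A minimal sketch, assuming the db_create_online_log_dest_n parameters above are still in effect:

SQL> alter database add logfile;

Oracle creates one member of the new group in each configured destination, using the 100 megabyte OMF default size.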
Conclusions about OMF
In sum, OMF is great for vendor-based applications, where scripts can be shipped to all Oracle customers without regard for their specific file configurations. The downside to OMF is that seasoned Oracle database professionals do not like the uniform file sizes and obtuse file names.
For more details, see the "Easy Oracle Series," a set of books especially designed by Oracle experts to get you started fast with Oracle database technology.
Configuring LogMiner in Oracle for DDL & DML
LogMiner automatically builds its own internal dictionary from the source dictionary that you specify when you start LogMiner (either a flat file dictionary, a dictionary in the redo logs, or an online catalog).
If your source dictionary is a flat file dictionary or a dictionary in the redo logs, you can use the DDL_DICT_TRACKING option to direct LogMiner to track data definition language (DDL) statements. DDL tracking is disabled by default. To enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start LogMiner. For example:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
2 DBMS_LOGMNR.DDL_DICT_TRACKING);
With this option set, LogMiner applies any DDL statements seen in the redo logs to its internal dictionary. For example, to see all the DDLs executed by user SYS, you could issue the following query:
SQL> SELECT USERNAME, SQL_REDO
2 FROM V$LOGMNR_CONTENTS
3 WHERE USERNAME = 'SYS' AND OPERATION = 'DDL';
The information returned might be similar to the following, although the actual information and how it is displayed will be different on your screen.
USERNAME SQL_REDO
SYS ALTER TABLE SCOTT.ADDRESS ADD CODE NUMBER;
SYS CREATE USER KATHY IDENTIFIED BY VALUES 'E4C8B920449B4C32' DEFAULT
TABLESPACE TS1;
Keep the following in mind when you use the DDL_DICT_TRACKING option:
* The DDL_DICT_TRACKING option is not valid with the DICT_FROM_ONLINE_CATALOG option.
* The DDL_DICT_TRACKING option requires that the database be open.
The ability to track DDL statements helps you monitor schema evolution because SQL statements used to change the logical structure of a table (because of DDL operations such as adding or dropping of columns) can be reconstructed. In addition, data manipulation language (DML) operations performed on new tables created after the dictionary was extracted can also be shown.
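Putting the pieces together, a session that uses a dictionary in the redo logs along with DDL tracking might look like the following sketch (the archived log file names are placeholders):

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_100.arc', -
  2  OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_101.arc', -
  2  OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING);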
Only Committed Transactions
When you use the COMMITTED_DATA_ONLY option, only rows belonging to committed transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter out rolled back transactions, transactions that are in progress, and internal operations.
To enable this option, you specify it when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
2 DBMS_LOGMNR.COMMITTED_DATA_ONLY);
When you specify the COMMITTED_DATA_ONLY option, LogMiner groups together all DML operations that belong to the same transaction. Transactions are returned in the order in which they were committed.
If long-running transactions are present in the redo logs being analyzed, use of this option may cause an "Out of Memory" error.
The default is for LogMiner to show rows corresponding to all transactions and to return them in the order in which they are encountered in the redo logs.
For example, suppose you start LogMiner without specifying COMMITTED_DATA_ONLY and you execute the following query:
SQL> SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
2 USERNAME AS "USER",
3 SQL_REDO AS SQL_REDO
4 FROM V$LOGMNR_CONTENTS;
The output would be as follows. Both committed and uncommitted transactions are returned and rows from different transactions are interwoven.
XID USER SQL_REDO
1.5.123 SCOTT SET TRANSACTION READ WRITE;
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
VALUES (8782, 'Frost');
1.6.124 KATHY SET TRANSACTION READ WRITE;
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (8839, 'Cummings', '415-321-1234');
1.6.124 KATHY INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (7934, 'Yeats', '033-334-1234');
1.5.123 SCOTT INSERT INTO "SCOTT"."EMP" ("EMPNO","ENAME")
VALUES (8566, 'Browning');
1.6.124 KATHY COMMIT;
1.7.234 GOUTAM SET TRANSACTION READ WRITE;
1.5.123 SCOTT COMMIT;
1.7.234 GOUTAM INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
VALUES (8499, 'Emerson', '202-334-1234');
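Whichever option you choose, end the LogMiner session once you are finished querying V$LOGMNR_CONTENTS:

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();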
Wednesday, July 29, 2009
Quick Switchover with Physical Standby Database
Once the standby database is set up using Data Guard and working properly, you may want to test switchover, or perform a switchover to reduce primary database downtime during OS or hardware upgrades.
A switchover allows the primary database to switch roles with its standby database. There is no data loss during a switchover. You can switch back to the original Primary database later by performing another switchover.
In case of primary database failure, you will need to perform failover to transition the standby database to the primary role. After a failover, the original primary database can no longer participate in the Data Guard configuration. So if the original Primary database is still accessible, you should always consider a switchover first.
This document only covers switchover involving a physical standby database. In this example, the original primary database is called PRIM and the original standby database is called STAN.
I. Before Switchover
1. As I always recommend, test the switchover first on your testing systems before working on Production.
2. Verify the primary database instance is open and the standby database instance is mounted.
3. Verify there are no active users connected to the databases.
4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following commands on Primary database and Standby database to find out:
SQL>select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.
In order to apply redo data to the standby database as soon as it is received, use Real-time apply.
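On the standby, real-time apply can be enabled with a command like the following sketch (it assumes standby redo logs have been configured):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;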
II. Quick Switchover Steps
1. Initiate the switchover on the primary database PRIM:
SQL>connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
2. After step 1 finishes, switch the original physical standby database STAN to the primary role.
Open another prompt and connect to SQLPLUS:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
3. Immediately after issuing command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;
4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new Primary database STAN:
SQL>ALTER DATABASE OPEN;
STAN is now transitioned to the primary database role.
5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM.
SQL>ALTER SYSTEM SWITCH LOGFILE;
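To confirm that redo is now being shipped and applied in the other direction, the same check used in the preparation steps can be repeated on the new standby PRIM (a sketch):

SQL> select sequence#, applied from v$archived_log order by sequence#;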
References:
1. Oracle Data Guard Concepts and Administration, 10g Release 2 (10.2), B14239-04.
2. Switchover and Failover Best Practices: Oracle Data Guard 10g Release 2, Oracle Corporation.
Resizing / Recreating Online Redo Log Files
Before looking at the tasks involved to perform the resize, let's look at the current online redo log groups and their sizes:
SQL> SELECT a.group#, a.member, b.bytes
2 FROM v$logfile a, v$log b WHERE a.group# = b.group#;
GROUP# MEMBER BYTES
---------- ---------------------------------------- ------------
1 /u03/app/oradata/ORA920/redo_g01a.log 104,857,600
1 /u04/app/oradata/ORA920/redo_g01b.log 104,857,600
1 /u05/app/oradata/ORA920/redo_g01c.log 104,857,600
2 /u03/app/oradata/ORA920/redo_g02a.log 104,857,600
2 /u04/app/oradata/ORA920/redo_g02b.log 104,857,600
2 /u05/app/oradata/ORA920/redo_g02c.log 104,857,600
3 /u03/app/oradata/ORA920/redo_g03a.log 104,857,600
3 /u04/app/oradata/ORA920/redo_g03b.log 104,857,600
3 /u05/app/oradata/ORA920/redo_g03c.log 104,857,600
9 rows selected.
Now let's take a look at the steps involved to resize / recreate all online redo log groups:
- Make the last redo log CURRENT
Force a log switch until the last redo log is marked "CURRENT" by issuing the following command:
SQL> select group#, status from v$log;
GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 INACTIVE
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select group#, status from v$log;
GROUP# STATUS
---------- ----------------
1 INACTIVE
2 INACTIVE
3 CURRENT
- Drop first redo log
After making the last online redo log file the CURRENT one, drop the first online redo log:
SQL> alter database drop logfile group 1;
Database altered.
- Re-create dropped online redo log group
Re-create the dropped redo log group with different size (if desired):
SQL> alter database add logfile group 1 (
2 '/u03/app/oradata/ORA920/redo_g01a.log',
3 '/u04/app/oradata/ORA920/redo_g01b.log',
4 '/u05/app/oradata/ORA920/redo_g01c.log') size 250m reuse;
Database altered.
- Force another log switch
After re-creating the online redo log group, force a log switch. The online redo log group just created should become the "CURRENT" one:
SQL> select group#, status from v$log;
GROUP# STATUS
---------- ----------------
1 UNUSED
2 INACTIVE
3 CURRENT
SQL> alter system switch logfile;
SQL> select group#, status from v$log;
GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 ACTIVE
- Loop back and repeat the drop / re-create steps until all logs are rebuilt
After re-creating an online redo log group, continue to re-create (or resize) all online redo log groups until all of them are rebuilt.
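For example, the second pass might look like the following sketch (same file layout as above; wait until group 2 reports INACTIVE before dropping it):

SQL> alter database drop logfile group 2;
SQL> alter database add logfile group 2 (
  2  '/u03/app/oradata/ORA920/redo_g02a.log',
  3  '/u04/app/oradata/ORA920/redo_g02b.log',
  4  '/u05/app/oradata/ORA920/redo_g02c.log') size 250m reuse;
SQL> alter system switch logfile;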
After rebuilding (resizing) all online redo log groups, here is a snapshot of all physical files:
SQL> SELECT a.group#, a.member, b.bytes
2 FROM v$logfile a, v$log b WHERE a.group# = b.group#;
GROUP# MEMBER BYTES
---------- ---------------------------------------- ------------
1 /u03/app/oradata/ORA920/redo_g01a.log 262,144,000
1 /u04/app/oradata/ORA920/redo_g01b.log 262,144,000
1 /u05/app/oradata/ORA920/redo_g01c.log 262,144,000
2 /u03/app/oradata/ORA920/redo_g02a.log 262,144,000
2 /u04/app/oradata/ORA920/redo_g02b.log 262,144,000
2 /u05/app/oradata/ORA920/redo_g02c.log 262,144,000
3 /u03/app/oradata/ORA920/redo_g03a.log 262,144,000
3 /u04/app/oradata/ORA920/redo_g03b.log 262,144,000
3 /u05/app/oradata/ORA920/redo_g03c.log 262,144,000
9 rows selected.
Tuesday, March 24, 2009
FINDING DEADLOCK INFORMATION
col sid form 9990
col type form A4
col lmode form 990
col request form 990
col id1 form 9999990
col id2 form 9999990
break on id1 skip 1 dup
spool check_lock.lis
SELECT sn.username, m.sid, m.type,
       DECODE(m.lmode, 0, 'None', 1, 'Null', 2, 'Row Share', 3, 'Row Excl.',
                       4, 'Share', 5, 'S/Row Excl.', 6, 'Exclusive',
                       lmode, LTRIM(TO_CHAR(lmode,'990'))) lmode,
       DECODE(m.request, 0, 'None', 1, 'Null', 2, 'Row Share', 3, 'Row Excl.',
                         4, 'Share', 5, 'S/Row Excl.', 6, 'Exclusive',
                         request, LTRIM(TO_CHAR(request,'990'))) request,
       m.id1, m.id2
FROM   v$session sn, v$lock m
WHERE  (sn.sid = m.sid AND m.request != 0)
   OR  (sn.sid = m.sid AND m.request = 0 AND lmode != 4
        AND (id1, id2) IN (SELECT s.id1, s.id2
                           FROM   v$lock s
                           WHERE  request != 0
                           AND    s.id1 = m.id1
                           AND    s.id2 = m.id2))
ORDER BY id1, id2, m.request;
spool off
clear breaks
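Once the blocking session has been identified from the listing (the row holding the lock that the other sessions are requesting), it can be terminated if that is appropriate. A sketch, where the sid and serial# values are placeholders:

SQL> select sid, serial# from v$session where sid = 42;
SQL> alter system kill session '42,1011';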