Oracle DBA – A lifelong learning experience

Why do you need to resetlogs after a cold backup restore


I posted a routine back in 2010 on how to take a cold backup locally to disk and then restore it. Last week I was asked in a comment 'why did you have to open the database using resetlogs?' A very good question, I thought, so I proceeded to back up and recover just as that post showed, and now I know why.

Because Oracle will not let you do otherwise

Let me run through the example again and I will add a bit of commentary.

The original blog entry was https://jhdba.wordpress.com/2010/03/22/recovering-from-a-cold-backup-using-rman/

The basic set of commands was:

startup nomount
run
 {
 allocate channel c1 device type disk format '/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_%d_S_%s_P_%p_T_%t';
 restore controlfile from '/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_MOMPRE1A_S_37_P_1_T_713884663';
 }
alter database mount;
run
 {
 allocate channel c1 device type disk format '/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_%d_S_%s_P_%p_T_%t';
 allocate channel c2 device type disk format '/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_%d_S_%s_P_%p_T_%t';
 restore database tag='rman_backfulcold_MOMPREPERTEST';
 }
-- YOU DO NOT NEED TO RECOVER DATABASE, otherwise it rolls forward to the current time
alter database open resetlogs;

As you can see I open the database with a RESETLOGS, and yet it is a completely clean restore of a cold backup, so in theory I should be able to do a NORESETLOGS.

Let’s go through a similar example very quickly.

 

startup mount
run
 {
 allocate channel c1 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 allocate channel c2 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 backup full
 tag rman_backfulcold_TST11204
 (database include current controlfile);
 }

 

List the backups to see what we have got

RMAN> List backup summary;

903     B  F  A DISK        2015-10-15:13:18:22 1       1       NO         RMAN_BACKFULCOLD_TST11204
904     B  F  A DISK        2015-10-15:13:18:29 1       1       NO         TAG20151015T131828


RMAN> list backupset 903;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
903     Full    12.40G     DISK        00:12:49     2015-10-15:13:18:22
        BP Key: 903   Status: AVAILABLE  Compressed: NO  Tag: RMAN_BACKFULCOLD_TST11204
        Piece Name: /testmaster_data/KEEP_UNTIL_JOHN/backup_db_TST11204_S_914_P_1_T_893163933
  List of Datafiles in backup set 903
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  2       Full 10931985470038 2015-10-15:13:04:20 +DATA/tst11204/datafile/sysaux.257.851337533
  3       Full 10931985470038 2015-10-15:13:04:20 +DATA/tst11204/datafile/undotbs1.258.851337533
  4       Full 10931985470038 2015-10-15:13:04:20 +DATA/tst11204/datafile/users.259.851337533

RMAN> list backupset 904;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
904     Full    11.23M     DISK        00:00:01     2015-10-15:13:18:29
        BP Key: 904   Status: AVAILABLE  Compressed: NO  Tag: TAG20151015T131828
        Piece Name: +FRA/tst11204/autobackup/2015_10_15/s_893163860.3783.893164709
  Control File Included: Ckp SCN: 10931985470038   Ckp time: 2015-10-15:13:04:20
  SPFILE Included: Modification time: 2015-10-15:13:05:29
  SPFILE db_unique_name: TST11204
 

I now restore the controlfile from autobackup, startup mount and restore from the cold backup

RMAN> run
 {
 allocate channel c1 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 restore controlfile from autobackup;
 }2> 3> 4> 5>
allocated channel: c1
 channel c1: SID=10 device type=DISK
Starting restore at 2015-10-15:15:19:00
recovery area destination: +FRA
 database name (or database unique name) used for search: TST11204
 channel c1: AUTOBACKUP +FRA/TST11204/AUTOBACKUP/2015_10_15/s_893170083.1419.893171427 found in the recovery area
 AUTOBACKUP search with format "%F" not attempted because DBID was not set
 channel c1: restoring control file from AUTOBACKUP +FRA/TST11204/AUTOBACKUP/2015_10_15/s_893170083.1419.893171427
 channel c1: control file restore from AUTOBACKUP complete
 output file name=+DATA/tst11204/controlfile/current.260.851337899
 Finished restore at 2015-10-15:15:19:03
 released channel: c1
RMAN> startup mount;
database is already started
 database mounted
RMAN>
 run
 {
 allocate channel c1 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 allocate channel c2 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 restore database;
 }
 RMAN> 2> 3> 4> 5> 6>

I now try to open the database using NORESETLOGS

RMAN> alter database open noresetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "resetlogs, ;"
RMAN-01008: the bad identifier was: noresetlogs
RMAN-01007: at line 1 column 21 file: standard input

RMAN>  alter database open resetlogs;

database opened

The default for ALTER DATABASE OPEN is READ WRITE NORESETLOGS, and yet the command is not even allowed when you have recovered using a backup controlfile. You may think a mistyped command caused the failure above, but I and other DBAs typed it in several times and tried every permutation that exists. It is only when you type in RESETLOGS that the database takes the command as being valid.

I did wonder whether the autobackup was the problem so I tried a direct recovery of the controlfile from the backup piece

RMAN> restore controlfile from '+FRA/tst11204/autobackup/2015_10_15/s_893163860.3783.893164709';
RMAN> restore database ......
RMAN>
RMAN> alter database open norestlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "resetlogs, ;"
RMAN-01008: the bad identifier was: norestlogs
RMAN-01007: at line 1 column 21 file: standard input

RMAN> alter database open resetlogs;

database opened

RMAN>

Pretty much the same result – Oracle will not allow a NORESETLOGS to happen. So why is this? From the documentation on the ALTER DATABASE OPEN command:

RESETLOGS | NORESETLOGS
========================
This clause determines whether Oracle Database resets the current log sequence number to 1, archives any unarchived logs (including the current log), and discards any redo information that was not applied during recovery, ensuring that it will never be applied. Oracle Database uses NORESETLOGS automatically except in the following specific situations, which require a setting for this clause:

You must specify RESETLOGS:

After performing incomplete media recovery or media recovery using a backup controlfile
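The same restriction applies outside RMAN. As a minimal sketch (assuming the same restore with a backup controlfile has just been done; I have not captured the output here), the SQL*Plus equivalent would be:

SQL> alter database open noresetlogs;   -- parses, but I would expect it to fail at run time demanding RESETLOGS
SQL> alter database open resetlogs;     -- succeeds

The only difference is that SQL*Plus accepts the NORESETLOGS keyword syntactically, whereas the RMAN parser above rejected it outright.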

There is a note on Metalink, 1077022.1, which shows how you can get around it, and which I reproduce below. To be honest I am not sure when you would need it, but here it is anyway.

      RMAN> restore controlfile......
      RMAN> mount database;
      RMAN> restore database;
  • Copy the online redo logs to the desired location for the new database.
  • Log in to SQL*Plus and generate the controlfile trace script (note that the database is mounted from RMAN after restoring the controlfile):
      SQL> alter database backup controlfile to trace
            NORESETLOGS as '/tmp/ctl.sql';
      SQL> SHUTDOWN IMMEDIATE
  • Edit the controlfile script if required, for example to change the location of the online redo logs you copied.
  • STARTUP NOMOUNT the database and run the create controlfile script:
         SQL> STARTUP NOMOUNT
         SQL> @/tmp/ctl.sql
  • Recover the database and open it normally:
         SQL> RECOVER DATABASE;
         SQL> ALTER DATABASE OPEN;
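For anyone who has not seen one for a while, the NORESETLOGS trace script generated above looks broadly like the sketch below. It is heavily cut down and the file names and character set are made up for illustration – the real thing is written out by the database itself, so you never have to type it:

CREATE CONTROLFILE REUSE DATABASE "TST11204" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 1024
  LOGFILE
    GROUP 1 '+DATA/tst11204/onlinelog/group_1.log' SIZE 50M,
    GROUP 2 '+DATA/tst11204/onlinelog/group_2.log' SIZE 50M
  DATAFILE
    '+DATA/tst11204/datafile/system.256.851337533',
    '+DATA/tst11204/datafile/sysaux.257.851337533',
    '+DATA/tst11204/datafile/undotbs1.258.851337533',
    '+DATA/tst11204/datafile/users.259.851337533'
  CHARACTER SET AL32UTF8;

Because the script specifies NORESETLOGS and points at the existing online redo logs, the subsequent RECOVER DATABASE and ALTER DATABASE OPEN can complete without a resetlogs.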

It is years since I ran a create controlfile script – the duplicate database option from RMAN has removed the need for that most of the time.

Another note, referring to ORA-01610 (Doc ID 19007.1), also reiterates that when recovery has used a backup controlfile a RESETLOGS is mandatory.

 



Startup PDB databases automatically when a container is started – good idea?


I posted a note on the Oracle-L mailing list about pluggable databases and why they are not opened automatically by default when the container database is opened. The post is below.

I am trying to get my head around the thing about how pluggable databases react after the container database is restarted.

Pre 12.1.0.2 it was necessary to put a startup trigger in to run a ‘alter pluggable database all open;’ command to move them from mounted to open.

Now 12.1.0.2 allows you to save a state in advance using ‘alter pluggable database xxx save state’ which does seem a step forward

However, why would the default not be to start all the pluggable databases (or services, as they are seen) rather than leave them in a mounted state? Obviously Oracle have thought about this and changed the trigger method, maybe due to customer feedback, but I wonder why they have not gone the whole hog and started the services automatically.

I would much prefer the default to be up and running, rather than relying on the fact that I have saved the state previously.
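For reference, the two mechanisms mentioned above look roughly like this (the trigger name is my own):

-- pre-12.1.0.2 approach: a startup trigger in the CDB
CREATE OR REPLACE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/

-- 12.1.0.2 onwards: open the PDBs once and preserve that state
ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL SAVE STATE;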

I did get some interesting and very helpful responses. Jared Still made a couple of good points. The first being that the opening time for all the pluggable databases might be very long if you had 300 of them. That blew my mind a little and I must admit that I had considered scenarios where you might have half a dozen maximum, not into the hundreds.

I did a little test on a virtual 2 CPU, 16Gb server, already loaded with 6 running non container databases. I created 11 pluggables (I have created a new word there) from an existing one – each one took less than 2 minutes

create pluggable database JH3PDB from JH2PDB;
create pluggable database JH4PDB from JH2PDB;
create pluggable database JH5PDB from JH2PDB;
create pluggable database JH6PDB from JH2PDB;
create pluggable database JH7PDB from JH2PDB;
create pluggable database JH8PDB from JH2PDB;
create pluggable database JH9PDB from JH2PDB;
create pluggable database JH10PDB from JH2PDB;
create pluggable database JH11PDB from JH2PDB;
create pluggable database JH12PDB from JH2PDB;
create pluggable database JH13PDB from JH2PDB;

 

They all default to the MOUNTED state so I then opened them all

SQL> alter pluggable database all open

Elapsed: 00:06:23.54

SQL> select con_id,dbid,NAME,OPEN_MODE from v$pdbs;
   CON_ID       DBID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2   99081426 PDB$SEED                      READ ONLY
         3 3520566857 JHPDB                         READ WRITE
         4 1404400467 JH1PDB                         READ WRITE
         5 1704082268 JH2PDB                         READ WRITE
         6 3352486718 JH3PDB                         READ WRITE
         7 2191215773 JH4PDB                         READ WRITE
         8 3937728224 JH5PDB                         READ WRITE
         9 731805302 JH6PDB                         READ WRITE
       10 1651785020 JH7PDB                         READ WRITE
       11 769231648 JH8PDB                         READ WRITE
       12 3682346625 JH9PDB                         READ WRITE
       13 2206923020 JH10PDB                       READ WRITE
       14 281114237 JH11PDB                       READ WRITE
       15 2251469696 JH12PDB                       READ WRITE
       16 260312931 JH13PDB                       READ WRITE

 

So 6:30 to open 11 PDBs might lead to a very long time for a very large number. That really answered the question I had asked. However there were more valuable nuggets to come.

Stefan Koehler pointed to an OTN community post where he advocated that the new (12.1.0.2) 'save state' for PDBs should also be extended to PDB services, so that the service is started when the PDB is opened rather than having to use a custom script or a trigger. That seems a very reasonable proposal to me and will get my vote.

Jared had an enhancement idea: instead of having a saved state, which I must admit is a bit messy, why not a PDB control table with a START_MODE column?

Possible values

– NEVER:  Never open the pdb at startup

– ALWAYS: Always start this pdb.

– ON_DEMAND:  Start this pdb when someone tries to connect to it.

And then some mechanism to override this.

‘startup open cdb nomount nopdb’ for instance.
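Purely to illustrate Jared's suggestion (no such table exists in any Oracle release – this is just how I picture it), the control table might look something like:

CREATE TABLE pdb_startup_control (
  pdb_name   VARCHAR2(128) PRIMARY KEY,
  start_mode VARCHAR2(10) DEFAULT 'ALWAYS'
             CHECK (start_mode IN ('NEVER','ALWAYS','ON_DEMAND'))
);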

It does sound an interesting idea, especially the ON_DEMAND option. I would have thought that if you were thinking along those lines a logical extension might be to automatically close PDBs when they have not been used for a while, again controllable by a table.

 


Creating standby database inc DG Broker and 12c changes


I thought I would refresh my knowledge of creating a standby database and at the same time include some Data Guard Broker configuration, which also throws in some changes that came along with 12c.

Overview

Database name QUICKIE, host server1, ASM disk

Database name STAN, host server2, ASM disk

Create a standby database STAN using ACTIVE DUPLICATE from the source database QUICKIE

 

QUICKIE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = QUICKIE)
)
)

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

server2 – listener.ora – note I have selected 1524 as that port is not currently in use and I do not want to interfere with any existing databases

 

LISTENERCLONE =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
)

SID_LIST_LISTENERCLONE =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = STAN)
(ORACLE_HOME = /app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = STAN)
)
)

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF

server2 – tnsnames.ora

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

LISTENERCLONE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
(CONNECT_DATA =
(SERVICE_NAME = STAN)
)
)

 

  1. Start clone listener on server2

lsnrctl start LISTENERCLONE

Then check that the STAN alias resolves (the output below is from a tnsping of the STAN alias):

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 18-MAY-2015 09:19:27

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:

/app/oracle/product/12.1.0.2/grid/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias

Attempting to contact (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1524)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = STAN)))

OK (10 msec)

 

  1. Create a pfile on server2 – $ORACLE_HOME/dbs/initSTAN.ora

 

db_unique_name=STAN

compatible='12.1.0.2'

db_name='QUICKIE'

local_listener='server2:1524'

 

 

  1. Create password file for STAN (use SOURCE DB SYS password)

 

orapwd file=orapwQUICKIE password=pI7KU4ai

 

or copy the source passwd file

Create standby logs on the primary database if they do not exist already:

alter database add standby logfile thread 1 group 4 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 5 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 6 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 7 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;
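A quick sanity check on the primary that the groups exist (a sketch, sizes in MB):

select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log;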

 

 

  1. startup database in nomount on standby server
[oracle@server2][STAN] $sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 22 15:09:28 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount

ORACLE instance started.

 

  1. login to RMAN console on source server via
rman target sys/pI7KU4ai auxiliary sys/pI7KU4ai@STAN

connected to target database: QUICKIE (DBID=4212874924)

connected to auxiliary database: QUICKIE (not mounted)

 

  1. run restore script

 

run {
allocate channel ch1 type disk;
allocate channel ch2 type disk;
allocate auxiliary channel aux1 type disk;
allocate auxiliary channel aux2 type disk;
duplicate target database for standby from active database
SPFILE
set db_unique_name='STAN'
set audit_file_dest='/app/oracle/admin/STAN/adump'
set cluster_database='FALSE'
set control_files='+DATA/STAN/control01.ctl','+FRA/STAN/control02.ctl'
set db_file_name_convert='QUICKIE','STAN'
set local_listener='server2:1522'
set log_file_name_convert='QUICKIE','STAN'
set undo_tablespace='UNDOTBS1'
set audit_trail='DB'
nofilenamecheck;
}

 

If you get the error 'RMAN-05537: DUPLICATE without TARGET connection when auxiliary instance is started with spfile cannot use SPFILE clause', then either remove the SPFILE parameter from the RMAN duplicate command above or start the STAN database with a parameter file rather than a spfile.

 

In 12c it seems to create a spfile after starting with an init.ora file unless you use the syntax

startup nomount pfile='/app/oracle/product/12.1.0.2/dbhome_1/dbs/spfileSTAN.ora'

 

I also got an error around DB_UNIQUE_NAME, which is new in 12c. This is because the standby existed previously (as I was re-testing my instructions for this document) and a HAS/CRS resource already existed for the database name.

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of Duplicate Db command at 07/22/2015 15:45:57

RMAN-05501: aborting duplication of target database

RMAN-03015: error occurred in stored script Memory Script

RMAN-03009: failure of sql command on clone_default channel at 07/22/2015 15:45:57

RMAN-11003: failure during parse/execution of SQL statement: alter system set db_unique_name = 'STAN' comment= '' scope=spfile

ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

 

The fix is to remove that resource

srvctl remove database -d STAN

 

 

  1. Set parameters back on primary and standby and restart the databases to ensure that they are picked up
alter system reset log_archive_dest_1;

alter system reset log_archive_dest_2;

Set parameter on standby

alter system set local_listener='server2:1522' scope=both;

 

 

  1. Create the Data Guard Broker configuration at this point. Run from the primary, although it can be done on either side.

 

Sometimes it is best to stop/start the DG Broker if it has already been created – after deleting the dr files as well

alter system set dg_broker_start=FALSE;

alter system set dg_broker_start=TRUE;

dgmgrl /

create configuration 'DGConfig1' as primary database is 'QUICKIE' connect identifier is QUICKIE;

Configuration "DGConfig1" created with primary database "QUICKIE"

add database 'STAN' as connect identifier is 'STAN' maintained as physical;

Database "STAN" added

edit database 'QUICKIE' set property 'DGConnectIdentifier'='QUICKIE';

edit database 'STAN' set property 'DGConnectIdentifier'='STAN';

The next 2 commands are required if you are not using Port 1521.

Assuming you are not using Oracle Restart (which is now deprecated anyway), you also require static entries to be defined in your listener.ora files – STAN_DGMGRL and QUICKIE_DGMGRL in this case.

edit database 'STAN' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=STAN_DGMGRL)(INSTANCE_NAME=STAN)(SERVER=DEDICATED)))';

edit database 'QUICKIE' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server1)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=QUICKIE_DGMGRL)(INSTANCE_NAME=QUICKIE)(SERVER=DEDICATED)))';
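For completeness, the static registration on server2 would be an extra SID_DESC in the SID_LIST shown earlier, along these lines (server1 needs the matching QUICKIE_DGMGRL entry):

(SID_DESC =
  (GLOBAL_DBNAME = STAN_DGMGRL)
  (ORACLE_HOME = /app/oracle/product/12.1.0.2/dbhome_1)
  (SID_NAME = STAN)
)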

The below entries seemed to be picked up by default but it is worth checking and correcting with the commands below if necessary.

edit database 'QUICKIE' set property 'StandbyFileManagement'='AUTO';

edit database 'QUICKIE' set property 'DbFileNameConvert'='STAN,QUICKIE';

edit database 'QUICKIE' set property 'LogFileNameConvert'='STAN,QUICKIE';

edit database 'STAN' set property 'StandbyFileManagement'='AUTO';

edit database 'STAN' set property 'DbFileNameConvert'='QUICKIE,STAN';

edit database 'STAN' set property 'LogFileNameConvert'='QUICKIE,STAN';

 

Now to start the broker

 

enable configuration

show configuration

show database verbose 'QUICKIE'

show database verbose 'STAN'

validate database verbose 'QUICKIE'

validate database verbose 'STAN'

 

 

Let’s try a switchover. You need to have SUCCESS as the final line of show configuration before any switchover will work. You need to be connected as SYS/password in DGMGRL, not using DGMGRL /. The latter uses OS authentication and the former is database authentication.
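So the connection for the switchover is along these lines (password illustrative):

dgmgrl sys/password@QUICKIE
DGMGRL> show configuration;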

DGMGRL> switchover to 'STAN';

Performing switchover NOW, please wait...

Operation requires a connection to instance "STAN" on database "STAN"

Connecting to instance "STAN"...

Connected as SYSDBA.

New primary database "STAN" is opening...

Oracle Clusterware is restarting database "QUICKIE" ...

Switchover succeeded, new primary is "STAN"


However when switching back the new primary QUICKIE opened but STAN hung

New primary database "QUICKIE" is opening...

Oracle Clusterware is restarting database "STAN" ...

shut down instance "STAN" of database "STAN"

start up instance "STAN" of database "STAN"

 

The database startup hung and eventually timed out. This is an issue around Oracle Restart, which is now deprecated anyway.

On the primary we can see a configuration for QUICKIE but there is not one on the standby for STAN

$srvctl config database -d STAN

PRCD-1120 : The resource for database STAN could not be found.

PRCR-1001 : Resource ora.stan.db does not exist

 

srvctl add database -d STAN -oraclehome '/app/oracle/product/12.1.0.2/dbhome_1' -role 'PHYSICAL_STANDBY'

 

Re-run the switchover and all should be well.

Migrating tablespaces across Endian platforms


This is a set of posts about migrating a database from one endian platform to another.

The long-term intention is to move a large (10Tb) 11.1.0.7 database on HP-UX to an OEL Linux server with minimum outage so that will include a database upgrade as well.

This first post is about migrating a self-contained set of schemas using transportable tablespace.

The HP-UX 11.1.0.7 database is called HPUXDB and it was created through dbca with the sample schemas installed. The target database is 11.2.0.4 on OEL 5.8.

I already have an 11.2.0.4 installation on the OEL 5.8 server and the target database is called L18

Let’s check the endian versions

 

SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME        ENDIAN_FORMAT
          4 HP-UX IA (64-bit)    Big
         13 Linux x86 64-bit     Little

 

Set the tablespace EXAMPLE read only

 

 alter tablespace example read only;

Tablespace altered.

select file_name from dba_data_files;

+DATA/hpuxdb/datafile/example.352.886506805

EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'EXAMPLE', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SELECT * FROM transport_set_violations;

no rows selected

Export the tablespace metadata using the transport_tablespaces keyword:

 

expdp directory=data_pump_dir transport_tablespaces=example dumpfile=hpux.dmp logfile=hpux.log

Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01": /******** AS SYSDBA

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table SYS.SYS_EXPORT_TRANSPORTABLE_01 successfully
loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is: /app/oracle/admin/HPUXDB/dpdump/hpux.dmp

******************************************************************************

Datafiles required for transportable tablespace EXAMPLE: +DATA/hpuxdb/datafile/example.352.886506805

Job SYS_EXPORT_TRANSPORTABLE_01 successfully completed at 13:47:55

sftp the export dump file to the target server – I am using the data_pump_dir directory again.

You also need to copy the datafile(s) of the tablespaces you are migrating.

Mine is on ASM – switch to the ASM database and run asmcmd. I am copying out to /tmp and then copying it across to the target server on /tmp for now

 

ASMCMD> cp '+DATA/hpuxdb/datafile/example.352.886506805' '/tmp/example.dbf'

copying +DATA/hpuxdb/datafile/example.352.886506805 -> /tmp/example.dbf

ASMCMD>

 

sftp> cd /tmp

sftp> put /tmp/example.dbf /tmp/example.dbf

Uploading /tmp/example.dbf to /tmp/example.dbf

/tmp/example.dbf                                                         100% 100MB 11.1MB/s 11.8MB/s   00:09

Max throughput: 13.9MB/s

 

Create users who own objects in the tablespace to be migrated. Note that their default tablespace (EXAMPLE) does not exist as yet.

create user sh identified by sh;

grant connect,resource to sh;
grant unlimited tablespace to sh;
grant create materialized view to sh;

create user oe identified by oe;
grant connect,resource to oe;
grant unlimited tablespace to oe;

create user hr identified by hr;
grant connect,resource to hr;
grant unlimited tablespace to hr;

create user ix identified by ix;
grant connect,resource to ix;
grant unlimited tablespace to ix;

create user pm identified by pm;
grant connect,resource to pm;
grant unlimited tablespace to pm;

create user bi identified by bi;
grant connect,resource to bi;
grant unlimited tablespace to bi;

 

CONVERT TABLESPACE EXAMPLE
TO PLATFORM 'Linux x86 64-bit'
FORMAT='/tmp/%U';

 

$rmant

Recovery Manager: Release 11.1.0.7.0 - Production on Mon Aug 3 13:54:42 2015

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: HPUXDB (DBID=825022061)
using target database control file instead of recovery catalog

 

RMAN> CONVERT TABLESPACE EXAMPLE
TO PLATFORM 'Linux x86 64-bit'
FORMAT='/tmp/%U';

Starting conversion at source at 2015-08-03:13:56:18
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00005 name=+DATA/hpuxdb/datafile/example.352.886506805
converted datafile=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
Finished conversion at source at 2015-08-03:13:56:21
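As an aside, the conversion does not have to happen on the source: RMAN can convert on the destination platform instead. A sketch, assuming the unconverted copy (/tmp/example.dbf) had been transferred to the Linux server first:

RMAN> CONVERT DATAFILE '/tmp/example.dbf'
FROM PLATFORM 'HP-UX IA (64-bit)'
FORMAT '/tmp/%U';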

sftp> put /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2
Uploading /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2 to /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2

Then run the import on the target, pointing at the converted datafile:

impdp directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles='/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2'

Import: Release 11.2.0.4.0 - Production on Mon Aug 3 13:24:06 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management and OLAP options
Master table SYS.SYS_IMPORT_TRANSPORTABLE_01 successfully loaded/unloaded

Starting SYS.SYS_IMPORT_TRANSPORTABLE_01 /******** AS SYSDBA directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_02qdm380

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT

ORA-39083: Object type OBJECT_GRANT failed to create with error:

ORA-01917: user or role 'BI' does not exist

Failing sql is:

GRANT SELECT ON OE.CUSTOMERS TO BI;

ORA-39083: Object type OBJECT_GRANT failed to create with error:

ORA-01917: user or role 'BI' does not exist

Failing sql is:

GRANT SELECT ON OE.WAREHOUSES TO BI;

ORA-31685: Object type MATERIALIZED_VIEW:SH.CAL_MONTH_SALES_MV; failed due to insufficient privileges. Failing sql is:

CREATE MATERIALIZED VIEW SH.CAL_MONTH_SALES_MV (CALENDAR_MONTH_DESC, DOLLARS) USING (CAL_MONTH_SALES_MV(9, 'HPUXDB', 2, 0, 0, SH.TIMES, '2015-07-31 11:55:08', 8, 70899, '2015-07-31 11:55:09', '', 1, '0208', 822277, 0, NULL, 1, "SH", "SALES", '2015-07-31 11:55:08', 33032, 70841, '2015-07-31 11:55:09', '', 1, '88', 822277, 0, NULL), 1183809, 9, ('1950-01-01 12:00:00', 21,

I dropped the users in the example schema and dropped the example tablespace, re-created the users along with a BI user, and granted CREATE MATERIALIZED VIEW to SH. Then I re-ran the import.

 

impdp directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles='/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2'

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, Automatic Storage Management and OLAP options

Master table SYS.SYS_IMPORT_TRANSPORTABLE_01 successfully loaded/unloaded

Starting SYS.SYS_IMPORT_TRANSPORTABLE_01/******** AS SYSDBA directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type TRANSPORTABLE_EXPORT/INDEX

Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT

Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/COMMENT

Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT

Processing object type TRANSPORTABLE_EXPORT/TRIGGER

Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX

Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX

Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

Job SYS.SYS_IMPORT_TRANSPORTABLE_01 completed at Mon Aug 3 14:04:42 2015 elapsed 0 00:00:52

 

Post Migration

alter tablespace example read write;

Alter the users created earlier to have EXAMPLE as their default tablespace. The imported objects are in EXAMPLE, but new ones will go to USERS or whichever the default tablespace is currently set to.
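Something along these lines for each of the owners created above:

alter user sh default tablespace example;
alter user oe default tablespace example;
alter user hr default tablespace example;
alter user ix default tablespace example;
alter user pm default tablespace example;
alter user bi default tablespace example;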

Remove all dmp files and interim files that were created (in /tmp on both servers in my demo).

Migrating directly from ASM to ASM

For ease in the example above I migrated out from ASM to a filesystem. Here I will demonstrate a copy going from ASM to ASM.

 

create user test identified by xxxxxxxxx default tablespace example1;

User created.

grant connect, resource to test;

Grant succeeded.

grant unlimited tablespace to test;

Grant succeeded.

connect test/xxxxxxxx

Connected.

create table example1_data tablespace example1 as select * from all_objects;

Table created.

select segment_name, tablespace_name from user_segments;

SEGMENT_NAME TABLESPACE_NAME

——————————

EXAMPLE1_DATA EXAMPLE1

 

EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'EXAMPLE1', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SELECT * FROM transport_set_violations;

no rows selected

alter tablespace example1 read only;

Now export the metadata

expdp …..

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:

/app/oracle/admin/HPUXDB/dpdump/hpux.dmp

******************************************************************************

Datafiles required for transportable tablespace EXAMPLE1:

+DATA/hpuxdb/datafile/example1.1542.887621739

Job SYS.SYS_EXPORT_TRANSPORTABLE_01 successfully completed at 09:48:33

 

Now move the datafile to the target ASM environment. You can copy the dmp file in the same manner if you wish

ASMCMD [+] > cp --port 1522 sys@server.+ASM:+DATA/hpuxdb/datafile/example1.1542.887621739 +DATA/linuxdb/example1

Enter password: **********

copying server:+DATA/hpuxdb/datafile/example1.1542.887621739 -> +DATA/linuxdb/example1

 

The only problem with copying from ASM to ASM is that the physical file does not end up in the directory you copied it to. It is actually stored in the +DATA/ASM/DATAFILE directory (rather than the LINUXDB directory), with just an alias left at the destination:

 

ASMCMD [+data/linuxdb] > ls -l
Type           Redund  Striped  Time             Sys  Block_Size  Blocks     Bytes     Space  Name
                                                 Y                                            CONTROLFILE/
                                                 Y                                            DATAFILE/
                                                 Y                                            ONLINELOG/
                                                 Y                                            PARAMETERFILE/
                                                 Y                                            TEMPFILE/
DATAFILE       UNPROT  COARSE   AUG 13 13:00:00  N          8192    6401  52436992  53477376  example1 => +DATA/ASM/DATAFILE/example1.362.887637415
PARAMETERFILE  UNPROT  COARSE   AUG 12 22:00:00  N           512       5      2560   1048576  spfileLINUXDB.ora => +DATA/LINUXDB/PARAMETERFILE/spfile.331.886765295

Move it to the correct folder using the cp command within asmcmd, then rm it from the original folder.
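A sketch of that tidy-up, using the system-generated name from the listing above (exact names will differ). Copy the file to where you want it, then remove the stray copy via its alias – asmcmd rm on an alias drops the underlying file as well (rmalias would remove only the alias):

ASMCMD> cp +DATA/ASM/DATAFILE/example1.362.887637415 +DATA/LINUXDB/DATAFILE/example1.dbf
ASMCMD> rm +data/linuxdb/example1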


Using DBMS_PARALLEL_EXECUTE


DBMS_PARALLEL_EXECUTE

We have a number of updates to partitioned tables that are run from within PL/SQL blocks, which include either an execute immediate 'alter session enable parallel dml' or an execute immediate 'alter session force parallel dml' in the same PL/SQL block. It appears that the alter session is not having any effect, as we are ending up with non-parallel plans. When the same statements are run outside PL/SQL, either in SQL*Plus or SQL Developer sessions, the updates are given a parallel plan. We have a simple test pack that we have used to prove that this anomaly occurs at 11.1.0.7 (the version of the affected DB) and at 12.1.0.2 (to show that it is not an issue with just that version).
It appears that the optimizer is not aware that the alter session has been performed. We have also tried performing the alter session statement outside of the PL/SQL block, i.e. in the native SQL*Plus environment; that also does not result in a parallel plan.

Let me show a test case

Firstly we tried anonymous pl/sql block with an execute immediate for setting force dml for the session:

create table target_table
(
c1 number(6),
c2 varchar2(1024)
)
partition by range (c1)
(
partition p1 values less than (2),
partition p2 values less than (3),
partition p3 values less than (100)
);

create unique index target_table_pk on target_table (c1, c2) local;

alter table target_table add constraint target_table_pk primary key (c1, c2) using index;

create table source_table
(
c1 number(6),
c2 varchar2(1024)
);

insert /*+append */ into source_table (select distinct 2, owner||object_type||object_name from dba_objects);

commit;

select count(*) from source_table;

begin
execute immediate 'alter session force parallel dml';
insert /*+append parallel */ into target_table
select * from source_table;
end;
/

 

This load generates a serial plan

-----------------------------------------------------------------------------------
| Id | Operation         | Name         | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |             |       |       |   143 (100)|         |
|   1 | LOAD AS SELECT   |             |       |       |           |         |
|   2 |   TABLE ACCESS FULL| SOURCE_TABLE | 80198 |   40M|   143   (1)| 00:00:02 |
-----------------------------------------------------------------------------------

To see what the plan should look like if parallel dml was being used in a sqlplus session:

 

truncate table target_table;

alter session force parallel dml;

INSERT /*+append sqlplus*/ INTO TARGET_TABLE SELECT * FROM SOURCE_TABLE;

-------------------------------------------------------------------------------------------------------------------------

| Id | Operation                   | Name         | Rows | Bytes | Cost (%CPU)| Time     |   TQ |IN-OUT| PQ Distrib |

-------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT           |             |       |       |     5 (100)|         |       |     |           |
|   1 | PX COORDINATOR             |             |       |       |           |         |       |     |           |
|   2 |   PX SEND QC (RANDOM)       | :TQ10002     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | P->S | QC (RAND) |
|   3 |   INDEX MAINTENANCE       | TARGET_TABLE |       |       |           |         | Q1,02 | PCWP |           |
|   4 |     PX RECEIVE            |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | PCWP |           |
|   5 |     PX SEND RANGE         | :TQ10001     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | P->P | RANGE     |
|   6 |       LOAD AS SELECT       |              |       |       |           |         | Q1,01 | PCWP |           |
|   7 |       PX RECEIVE           |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | PCWP |           |
|   8 |         PX SEND RANDOM LOCAL| :TQ10000     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | P->P | RANDOM LOCA|
|   9 |         PX BLOCK ITERATOR |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWC |           |
|* 10 |           TABLE ACCESS FULL | SOURCE_TABLE | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWP |           |

 

truncate table target_table;

 

 

Oracle offered us two pieces of advice

  • Use a PARALLEL_ENABLE clause through a function
  • Use the DBMS_PARALLEL_EXECUTE package to achieve parallelism. (this is only available 11.2 onwards)

They also referred us to BUG 12734028 – PDML ONLY FROM PL/SQL DOES NOT WORK CORRECTLY
How To Enable Parallel Query For A Function? ( Doc ID 1093773.1 )

We did try the first option, the function, but that failed and we did not take it any further, concentrating instead on the DBMS_PARALLEL_EXECUTE package.

So the rest of this blog is around how our testing went and what results we achieved.

Starting with the same source_table contents, and an empty target table, a task needs to be created:

 

BEGIN
DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);
END;
/

The task name could also be set manually, but that does not lend itself to being proceduralised, as the name needs to be unique.
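If you do want a fixed name it is just a case of passing one in (the name here is made up):

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'SOURCE_TO_TARGET_LOAD');
END;
/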

To determine the identifier for the task:

 

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE status = 'CREATED';

 

TASK_NAME                                CHUNK_TYPE   STATUS

—————————————- ———— ——————-

TASK$_141                               UNDELARED   CREATED

 

Now the source_table data set must be split into chunks in order to set up discrete subsets of data that will be handled by the subordinate tasks. This demo will split the table by rowid, but it can also be split using block counts or using the values contained in a specific column in the table.

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_141',-
table_owner => 'SYS',-
table_name => 'SOURCE_TABLE',-
by_row => TRUE,-
chunk_size => 20000)

 

 

Note that there are three procedures that can be used to create chunks

PROCEDURE CREATE_CHUNKS_BY_NUMBER_COL

PROCEDURE CREATE_CHUNKS_BY_ROWID

PROCEDURE CREATE_CHUNKS_BY_SQL
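For comparison, chunking on a numeric column would look something like the sketch below, using the C1 column of the test table. Note that with this procedure chunk_size is a range of column values per chunk rather than a row count:

BEGIN
  DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
      task_name    => 'TASK$_141',
      table_owner  => 'SYS',
      table_name   => 'SOURCE_TABLE',
      table_column => 'C1',
      chunk_size   => 20000);
END;
/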

 

As the chunk size is set to 20000, and the table contains just over 80000 rows, one would expect 5 chunks to be created:

col CHUNK_ID form 999

col TASK_NAME form a10

col START_ROWID form a20

col end_rowid form a20

set pages 60

select CHUNK_ID, TASK_NAME, STATUS, START_ROWID, END_ROWID from user_parallel_execute_chunks where TASK_NAME = 'TASK$_141' order by 1;

(Later on we will count the rows in each chunk to see how even the split actually is.)

 

CHUNK_ID TASK_NAME               STATUS               START_ROWID       END_ROWID

———- ———————— ——————– —————— ——————

142 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtoAAA AAA2sTAABAAAbtvCcP

143 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtwAAA AAA2sTAABAAAbt3CcP

144 TASK$_141               UNASSIGNED           AAA2sTAABAAAbt4AAA AAA2sTAABAAAbt/CcP

145 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSAAAA AAA2sTAABAAAdSHCcP

146 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSIAAA AAA2sTAABAAAdSPCcP

147 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSQAAA AAA2sTAABAAAdSXCcP

148 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSYAAA AAA2sTAABAAAdSfCcP

149 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSgAAA AAA2sTAABAAAdSnCcP

150 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSoAAA AAA2sTAABAAAdSvCcP

151 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSwAAA AAA2sTAABAAAdS3CcP

152 TASK$_141               UNASSIGNED           AAA2sTAABAAAdS4AAA AAA2sTAABAAAdS/CcP

153 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTAAAA AAA2sTAABAAAdTHCcP

154 TASK$_141               UNASSIGNED          AAA2sTAABAAAdTIAAA AAA2sTAABAAAdTPCcP

155 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTQAAA AAA2sTAABAAAdTXCcP

156 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTYAAA AAA2sTAABAAAdTfCcP

157 TASK$_141                UNASSIGNED           AAA2sTAABAAAdTgAAA AAA2sTAABAAAdTnCcP

158 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUAAAA AAA2sTAABAAAdUxCcP

159 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUyAAA AAA2sTAABAAAdVjCcP

160 TASK$_141               UNASSIGNED           AAA2sTAABAAAdVkAAA AAA2sTAABAAAdV/CcP

161 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWAAAA AAA2sTAABAAAdWxCcP

162 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWyAAA AAA2sTAABAAAdXjCcP

163 TASK$_141               UNASSIGNED           AAA2sTAABAAAdXkAAA AAA2sTAABAAAdX/CcP

164 TASK$_141               UNASSIGNED           AAA2sTAABAAAdYAAAA AAA2sTAABAAAdYxCcP

165 TASK$_141                UNASSIGNED           AAA2sTAABAAAdYyAAA AAA2sTAABAAAdZjCcP

166 TASK$_141               UNASSIGNED           AAA2sTAABAAAdZkAAA AAA2sTAABAAAdZ/CcP

167 TASK$_141               UNASSIGNED           AAA2sTAABAAAdaAAAA AAA2sTAABAAAdaxCcP

168 TASK$_141               UNASSIGNED           AAA2sTAABAAAdayAAA AAA2sTAABAAAdbjCcP

169 TASK$_141               UNASSIGNED           AAA2sTAABAAAdbkAAA AAA2sTAABAAAdb/CcP

 

28 rows selected.

Tests were run changing the chunk_size to 40000 and still 28 chunks were created.

Looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE task_name = 'TASK$_141';

TASK_NAME                               CHUNK_TYPE   STATUS

---------------------------------------- ------------ -------------------

TASK$_141                               ROWID_RANGE CHUNKED

It details that the task has been split into discrete ranges of data, and shows how these ranges were determined.

To execute the insert into the target table we create an anonymous PL/SQL block and declare the SQL that is to be run, with an additional predicate restricting it to a range of rowids:

 

The rowid ranges passed in are those recorded in user_parallel_execute_chunks.

 

DECLARE
v_sql VARCHAR2(1024);
BEGIN
v_sql := 'insert /*+PARALLEL APPEND */ into target_table
select * from source_table
where rowid between :start_id and :end_id';

dbms_parallel_execute.run_task(task_name      => 'TASK$_141',
                               sql_stmt       => v_sql,
                               language_flag  => DBMS_SQL.NATIVE,
                               parallel_level => 5);
END;
/

PL/SQL procedure successfully completed.

Checking the contents of the target table:

select count(*) from target_table;

COUNT(*)

----------

86353

Now looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE task_name = 'TASK$_141';

TASK_NAME                               STATUS

---------------------------------------- -------------------

TASK$_141                               FINISHED

 

Looking at the user_parallel_execute_chunks entries after completion:

select CHUNK_ID, JOB_NAME, START_TS, END_TS

from user_parallel_execute_chunks

where TASK_NAME = 'TASK$_141' order by 2,3;

CHUNK_ID JOB_NAME             START_TS                       END_TS

---------- -------------------- ------------------------------ -------------------------------

142 TASK$_7410_1         30-JUL-15 02.34.31.672484 PM   30-JUL-15 02.34.31.872794 PM

144 TASK$_7410_1         30-JUL-15 02.34.31.877091 PM   30-JUL-15 02.34.32.204513 PM

146 TASK$_7410_1         30-JUL-15 02.34.32.209950 PM   30-JUL-15 02.34.32.331349 PM

148 TASK$_7410_1         30-JUL-15 02.34.32.335192 PM   30-JUL-15 02.34.32.528391 PM

150 TASK$_7410_1         30-JUL-15 02.34.32.533488 PM   30-JUL-15 02.34.32.570243 PM

152 TASK$_7410_1         30-JUL-15 02.34.32.575450 PM   30-JUL-15 02.34.32.702353 PM

154 TASK$_7410_1         30-JUL-15 02.34.32.710860 PM   30-JUL-15 02.34.32.817684 PM

156 TASK$_7410_1         30-JUL-15 02.34.32.828963 PM   30-JUL-15 02.34.32.888834 PM

158 TASK$_7410_1         30-JUL-15 02.34.32.898458 PM   30-JUL-15 02.34.33.493985 PM

160 TASK$_7410_1         30-JUL-15 02.34.33.499254 PM   30-JUL-15 02.34.33.944356 PM

162 TASK$_7410_1         30-JUL-15 02.34.33.953509 PM   30-JUL-15 02.34.34.366352 PM

164 TASK$_7410_1         30-JUL-15 02.34.34.368668 PM   30-JUL-15 02.34.34.911471 PM

166 TASK$_7410_1         30-JUL-15 02.34.34.915205 PM   30-JUL-15 02.34.35.524515 PM

168 TASK$_7410_1         30-JUL-15 02.34.35.527515 PM   30-JUL-15 02.34.35.889198 PM

169 TASK$_7410_1         30-JUL-15 02.34.35.889872 PM   30-JUL-15 02.34.35.890412 PM

143 TASK$_7410_2         30-JUL-15 02.34.31.677235 PM   30-JUL-15 02.34.32.129181 PM

145 TASK$_7410_2         30-JUL-15 02.34.32.135013 PM   30-JUL-15 02.34.32.304761 PM

147 TASK$_7410_2         30-JUL-15 02.34.32.310140 PM   30-JUL-15 02.34.32.485545 PM

149 TASK$_7410_2         30-JUL-15 02.34.32.495971 PM   30-JUL-15 02.34.32.550955 PM

151 TASK$_7410_2         30-JUL-15 02.34.32.558335 PM   30-JUL-15 02.34.32.629274 PM

153 TASK$_7410_2         30-JUL-15 02.34.32.644917 PM   30-JUL-15 02.34.32.764337 PM

155 TASK$_7410_2         30-JUL-15 02.34.32.773029 PM   30-JUL-15 02.34.32.857794 PM

157 TASK$_7410_2         30-JUL-15 02.34.32.864875 PM   30-JUL-15 02.34.32.908799 PM

159 TASK$_7410_2         30-JUL-15 02.34.32.913982 PM   30-JUL-15 02.34.33.669704 PM

161 TASK$_7410_2         30-JUL-15 02.34.33.672077 PM   30-JUL-15 02.34.34.128170 PM

163 TASK$_7410_2         30-JUL-15 02.34.34.140102 PM   30-JUL-15 02.34.34.624627 PM

165 TASK$_7410_2         30-JUL-15 02.34.34.628145 PM   30-JUL-15 02.34.35.431037 PM

167 TASK$_7410_2         30-JUL-15 02.34.35.433282 PM   30-JUL-15 02.34.35.885741 PM
28 rows selected.

 

From these details there appear to be only two jobs processing the sub-tasks, even though a parallel_level of 5 was specified. Why is that? Not enough data to justify breaking it up further? Back-end resourcing?
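One thing worth checking (which I have not gone back and done) is whether the job queue was the limiting factor, as run_task hands the chunks out to scheduler jobs:

select name, value from v$parameter where name = 'job_queue_processes';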

Looking at the query in a tkprof listing of one of the jobs' trace files, you can see that each job is executing the same serial plan that was generated with the original PL/SQL; the parallelism comes only from the two jobs running their chunks at the same time.

 

insert /*+PARALLEL APPEND */ into target_table
           select * from
source_table
             where rowid between :start_id and :end_id


call     count       cpu   elapsed       disk     query   current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       1     0.00       0.00         0         0         0           0
Execute     6     0.26       3.03         4       599       8128       17849
Fetch        0     0.00       0.00         0         0         0           0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total       7     0.26       3.03         4       599       8128       17849

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
         0         0         0 LOAD AS SELECT (cr=59 pr=0 pw=8 time=92689 us)
     1240       1240       1240   FILTER (cr=13 pr=0 pw=0 time=501 us)
     1240       1240       1240   TABLE ACCESS BY ROWID RANGE SOURCE_TABLE (cr=13 pr=0 pw=0 time=368 us cost=42 size=122892 card=228)

 

I think that I could do more testing in this area and try and determine answers to the following questions

 

Why is the parallel parameter not appearing to work?

Does it depend on data volumes or back-end resources?

 

What I was interested in seeing was what was the breakdown of chunks by row count – was it an even split?

select a.chunk_id, count(*)

from user_parallel_execute_chunks a,

source_table b

where a.TASK_NAME = 'TASK$_107'

and b.rowid between a.START_ROWID and a.END_ROWID

group by a.chunk_id

order by 1;

  CHUNK_ID   COUNT(*)
---------- ----------
       108       1407
       109       1233
       110       1217
       111       1193
       112       1192
       113       1274
       114       1213
       115       1589
       116       1191
       117       1226
       118       1190
       119       1201
       120       1259
       121       1273
       122       1528
       123       1176
       124      15874
       125       4316
       126      15933
       127       4283
       128       9055

Reasonably even, with most chunks containing around 1,200 rows each, but two chunks had about 15K each, so not that well chunked.

Repeating the process but chunking by blocks this time produced the following results

 

 

SQL> select task_name, chunk_id, dbms_rowid.rowid_block_number(start_rowid) Start_Id,

2 dbms_rowid.rowid_block_number(end_rowid) End_Id,

3 dbms_rowid.rowid_block_number(end_rowid) - dbms_rowid.rowid_block_number(start_rowid) num_blocks

4 from user_parallel_execute_chunks order by task_name, chunk_id;

 

TASK_NAME             CHUNK_ID   START_ID     END_ID NUM_BLOCKS

——————– ———- ———- ———- ———-

 

TASK$_133                   134     70096     70103         7

TASK$_133                   135     70104     70111         7

TASK$_133                   136     70112     70119         7

TASK$_133                   137     70120     70127         7

TASK$_133                   138     70128     70135         7

TASK$_133                   139     70136     70143         7

TASK$_133                   140     73344     73351        7

TASK$_133                   141     73352     73359         7

TASK$_133                   142     73360     73367         7

TASK$_133                   143     73368     73375         7

TASK$_133                   144     73376     73383        7

TASK$_133                   145     73384     73391         7

TASK$_133                   146     73392     73399         7

TASK$_133                   147     73400     73407         7

TASK$_133                   148     73408     73415         7

TASK$_133                   149     73416     73423         7

TASK$_133                   150     73472     73521         49

TASK$_133                   151     73522     73571         49

TASK$_133                   152     73572     73599         27

TASK$_133                   153     73600     73649         49

TASK$_133                   154     73650     73699         49

TASK$_133                   155     73700     73727         27

TASK$_133                   156     73728      73777         49

TASK$_133                   157     73778     73827         49

TASK$_133                   158     73828     73855         27

 

So again not very well split. Perhaps there needs to be more data to make it worthwhile.

 

Error handling:  

 

Finally we can have a look at how it handles errors. To save the reader time the simple answer is ‘not very well’.

Let’s force a duplicate key error to show how the package handles errors. The original 80000+ rows are left in the target_table, and the same set of data will be reinserted – which will cause a unique constraint violation for the PK:

 

BEGIN

DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);

END;

/

 

PL/SQL procedure successfully completed.

 

 

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

where status = 'CREATED';

 

TASK_NAME                               CHUNK_TYPE   STATUS

—————————————- ———— ——————-

TASK$_170                               UNDELARED   CREATED

 

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_170',-
table_owner => 'SYS',-
table_name => 'SOURCE_TABLE',-
by_row => TRUE,-
chunk_size => 40000)

DECLARE
v_sql VARCHAR2(1024);
BEGIN
v_sql := 'insert /*+PARALLEL APPEND */ into target_table
select * from source_table
where rowid between :start_id and :end_id';

dbms_parallel_execute.run_task(task_name      => 'TASK$_170',
                               sql_stmt       => v_sql,
                               language_flag  => DBMS_SQL.NATIVE,
                               parallel_level => 10);
END;
/

 

PL/SQL procedure successfully completed.

 

select count(*) from target_table;

 

COUNT(*)

———-

86353

No error is thrown by the run_task call, but the number of rows has not increased. Checking the status of the task shows that there was indeed an error:

 

select task_name, status from user_parallel_execute_tasks where task_name = 'TASK$_170';

 

TASK_NAME                               STATUS

—————————————- ——————-

TASK$_170                               FINISHED_WITH_ERROR

The error details are given in the user_parallel_execute_chunks view:

 

select TASK_NAME, ERROR_MESSAGE, STATUS

from user_parallel_execute_chunks

where TASK_NAME = 'TASK$_170' order by 2;

 

TASK_NAME       ERROR_MESSAGE                                               STATUS

————— ———————————————————— —————————————-

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170      ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170                                                                   PROCESSED

TASK$_170                                                                   PROCESSED

 

28 rows selected.
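One partially redeeming feature, not shown in the run above, is that a task which ends up FINISHED_WITH_ERROR can be retried without redoing the chunks that completed. A minimal sketch, assuming the cause of the ORA-00001 has been dealt with first (for example by clearing down target_table):

BEGIN
  -- re-run only the chunks that are not already in a PROCESSED state
  DBMS_PARALLEL_EXECUTE.resume_task(task_name => 'TASK$_170');

  -- tidy up once everything has gone through cleanly
  IF DBMS_PARALLEL_EXECUTE.task_status('TASK$_170') = DBMS_PARALLEL_EXECUTE.finished THEN
    DBMS_PARALLEL_EXECUTE.drop_task('TASK$_170');
  END IF;
END;
/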



Using SYSBACKUP in 12c with a media manager layer


I think most large sites that have multiple support teams are aware of how the phrase “Segregation of Duties” is impacting the DBA world. The basic principle, that one user should not be able to, for instance, add a user, grant it privileges, let the user run scripts and then drop the user and remove all log files, is a sound one and cannot be argued with.

With the release of 12c Oracle added three new users to perform administrative tasks. Each user has a corresponding privilege with the same name as the user, which is a bit confusing.

SYSBACKUP – for RMAN backup and recovery work

SYSDG –  to manage DataGuard operations

SYSKM – to manage activities involving ‘key management’, such as the wallets and keystores used by Transparent Data Encryption

I have no real experience of key management so cannot comment on that. I do fail to see which type of user would be allowed to manage a DG setup and yet not be allowed to perform other DBA work on the databases; however, it probably does mean that any requirement to log in as ‘sysdba’ is now reduced, which can only be a good thing.

 

The SYSBACKUP user is a really good idea and has been a long time coming.

The privileges it has, along with select on many SYS views, are:

STARTUP
SHUTDOWN
ALTER DATABASE
ALTER SYSTEM
ALTER SESSION
ALTER TABLESPACE
CREATE CONTROLFILE
CREATE ANY DIRECTORY
CREATE ANY TABLE
CREATE ANY CLUSTER
CREATE PFILE
CREATE RESTORE POINT (including GUARANTEED restore points)
CREATE SESSION
CREATE SPFILE
DROP DATABASE
DROP TABLESPACE
DROP RESTORE POINT (including GUARANTEED restore points)
FLASHBACK DATABASE
RESUMABLE
UNLIMITED TABLESPACE
SELECT ANY DICTIONARY
SELECT ANY TRANSACTION
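As a quick illustration of how it is meant to be used (the user name here is invented for the sketch), a dedicated backup account would be created, granted the privilege and then used for the RMAN connection instead of sysdba:

create user backup_admin identified by "SomePassword1";
grant sysbackup to backup_admin;

-- from the shell; note the quoting needed around the AS SYSBACKUP clause
-- $ rman target '"backup_admin/SomePassword1 as sysbackup"'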

One aspect I was keen to look at was whether we could amend the connect string we use in our Media Manager Layer – Commvault Simpana – away from having to connect using a ‘user/password as sysdba’ string.

Unfortunately at the moment there is no way of changing the connect string to use the SYSBACKUP user. Simpana Version 11 will be released sometime later this year and will be able to interact with the SYSBACKUP user; however, I am unclear as to whether the requirement to connect as SYSDBA will be removed or not.

I am not aware of how other MMLs such as Networker, Netbackup or Data Protector have been updated to include the 12c changes and I am keen to find out.


Identifying database link usage


As part of ongoing security reviews I wanted to determine if all database links on production systems were in use. That is not very easy to do, and this article lists some of the options I have considered for getting that information, together with how it is now possible from 11gR2 onwards.

The first option was to look and see if auditing can be used. The manual states “You can audit statements that refer to tables, views, sequences, standalone stored procedures or functions, and packages, but not individual procedures within packages. (See “Auditing Functions, Procedures, Packages, and Triggers” for more information about auditing these types of objects.)

You cannot directly audit statements that reference clusters, database links, indexes, or synonyms. However, you can indirectly audit access to these schema objects, by auditing the operations that affect the base table.”

So you could audit activities on a base table that a database link might utilise, probably via a synonym. However that would show all table usage but it would be very difficult to break it down to see if a database link had been involved.

On the assumption that the code has a call to “@db_link_name” you could probably trawl ASH data or v$sql to see if a reference is available. It would be more likely that a synonym would be in use and as we have said above, we cannot audit synonym usage but you could maybe find it in v$sql. Again very work intensive with no guaranteed return.
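For what it is worth, the sort of crude trawl I have in mind would look like this (the link name here is purely for illustration):

select sql_id, parsing_schema_name, sql_text
from   v$sql
where  upper(sql_text) like '%@SNAPTM1%';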

There has been an enhancement request in MoS since 2006 – search for Bug 5098260

Jared Still posted a routine, although he does not claim to be the original author, which shows a link actually being used. However, in reality that is not a good way of capturing information across many systems unless you enable an excessive amount of tracing or monitoring everywhere. I have demoed its usage below and it does work.

I’ve created a DB link from SNAPCL1A to SNAPTM1. First I opened the DB link:

 

select sysdate from dual@snaptm1;
SYSDATE
---------
22-SEP-15

 

I can see my DB link being opened in v$dblink (in my own session):

select a.db_link, u.username, logged_on, open_cursors, in_transaction, update_sent

from v$dblink a, all_users u

where a.owner_id = u.user_id;
DB_LINK                        USERNAME                       LOG OPEN_CURSORS IN_ UPD
------------------------------ ------------------------------ --- ------------ --- ---
SNAPTM1                        SYS                            YES            0 YES NO

The following script can be used to see open DB link sessions (on both databases). It can be executed from any session and it will only show open DB links (that have not been committed, rolled back or manually closed/terminated on the origin database):

col origin for a30

col "GTXID" for a30

col lsession for a10

col username for a20

col waiting for a50

Select /*+ ORDERED */

substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10)      "ORIGIN",

substr(g.K2GTITID_ORA,1,35) "GTXID",

substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,

s2.username,

decode(bitand(ksuseidl,11),

1,'ACTIVE',

0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),

2,'SNIPED',

3,'SNIPED',

'KILLED'

) "State",

substr(w.event,1,30) "WAITING"

from  x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2

where  g.K2GTDXCB =t.ktcxbxba

and   g.K2GTDSES=t.ktcxbses

and  s.addr=g.K2GTDSES

and  w.sid=s.indx

and s2.sid = w.sid;

 

On the origin database:

 

ORIGIN                         GTXID                          LSESSION   USERNAME             State    WAITING
------------------------------ ------------------------------ ---------- -------------------- -------- --------------------------------------------------
teora01x-3762                  SNAPCL1A.cc76ea8a.7.32.983     125.5      SYS                  INACTIVE SQL*Net message from client

 

On the destination database:

 

ORIGIN                         GTXID                          LSESSION   USERNAME             State    WAITING
------------------------------ ------------------------------ ---------- -------------------- -------- --------------------------------------------------
teora01x-4065                  SNAPCL1A.cc76ea8a.7.32.983     133.599    SYSTEM               INACTIVE SQL*Net message from client

 

Now, I rollback my session on the origin database:

 

SQL> rollback;
Rollback complete.

If I query the v$dblink view, I still see my link there, but the transaction is closed now:

 

select a.db_link, u.username, logged_on, open_cursors, in_transaction, update_sent

from v$dblink a, all_users u

where a.owner_id = u.user_id;

 

DB_LINK                        USERNAME             LOG OPEN_CURSORS IN_ UPD
------------------------------ -------------------- --- ------------ --- ---
SNAPTM1                        SYS                  YES            0 NO  NO

The script will not return anything at this point:

SQL> Select /*+ ORDERED */

substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10)      "ORIGIN",

substr(g.K2GTITID_ORA,1,35) "GTXID",

substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,

s2.username,

decode(bitand(ksuseidl,11),

1,'ACTIVE',

0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),

2,'SNIPED',

3,'SNIPED',

'KILLED'

) "State",

substr(w.event,1,30) "WAITING"

from  x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2

where  g.K2GTDXCB =t.ktcxbxba

and   g.K2GTDSES=t.ktcxbses

and  s.addr=g.K2GTDSES

and  w.sid=s.indx

and s2.sid = w.sid;

no rows selected 

However, since at least 11.2.0.3 Oracle have provided a better means of identifying database link usage after the event and not just while it is happening.

Databases TST11204 and QUICKIE are on different servers – both have database links to each other.

TST11204

create user dblinktest identified by Easter2012 ;

grant create session, create database link to dblinktest;

SQL> connect dblinktest/xxxxxxxxxx

Connected.

SQL>  select * from test@quickie;

C1 C2

---------- -----

5 five

create database link TST11204 connect to dblinkrecd identified by xxxxxx using 'TST11204';

select * from test@TST11204

At this point we have made a connection – let’s see what we can find out about it. I would advise using the timestamp# column of aud$ to reduce the volume of data that has to be searched.

SQL> select userid, terminal, comment$text from sys.aud$ where comment$text like 'DBLINK%';

 

USERID  TERMINAL        COMMENT$TEXT
--------------------------------------------------------------------------------
                        DBLINKRECD DBLINK_INFO: (SOURCE_GLOBAL_NAME=QUICKIE.25386385)

This information is recorded in both the source and target databases.

It will return the source of a database link session. Specifically, it returns a string of the form:

SOURCE_GLOBAL_NAME=dblink_src_global_name, DBLINK_NAME=dblink_name, SOURCE_AUDIT_SESSIONID=dblink_src_audit_sessionid

where:

dblink_src_global_name is the unique global name of the source database

dblink_name is the name of the database link on the source database

dblink_src_audit_sessionid is the audit session ID of the session on the source database that initiated the connection to the remote database using dblink_name.
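Two assumptions are worth spelling out because they are not shown above: the AUD$ record only appears if the audit trail is written to the database and connections are being audited, and the same string can also be read live from the USERENV context. A rough sketch:

-- prerequisite (an assumption on my part, not shown in the demo):
-- audit_trail set to DB and connections audited
audit session;

-- evaluated at the far end of the link, DBLINK_INFO describes the incoming
-- connection in the same format as the AUD$ record above
select sys_context('USERENV', 'DBLINK_INFO') from dual@quickie;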

So hopefully that might help in identifying if a database link is still in use or when it was last used and it can be used as another part of your security toolkit.

 


Manage a PSU into a CDB having several PDBS


I thought it would be a good idea to show how to apply a PSU into the CDB and PDBs that came with 12c. I start off with a quick reminder of how 11g worked and then move on to examples from 12c.

11G reminder

get the latest version of opatch

check for conflicts

opatch prereq  CheckConflictAgainstOHWithDetail -ph ./

start the downtime

Stop all the databases in the home (one node at a time for RAC)

apply the patch

opatch apply

start all databases in the home

load SQL into the databases

@catbundle.sql psu apply

end of downtime (but remember to do the standby)

Example of 12c PSU process

Update opatch to latest version

Download and apply patch 6880880 to the oracle home

Check for conflicts with one-offs

Run the prereq check for conflicts:

[CDB2] oracle@localhost:~/12cPSU/19303936

$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Oracle Interim Patch Installer version 12.1.0.1.8
Copyright (c) 2015, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version   : 12.1.0.1.8
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2015-09-22_12-12-54PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

No conflicts should be reported.

Stop all dbs in the home (one node at a time for RAC)

If you are using a Data Guard Physical Standby database, you must install this patch on both the primary database and the physical standby database, as described by My Oracle Support Document 278641.1.

If this is a RAC environment, install the PSU patch using the OPatch rolling (no downtime) installation method as the PSU patch is rolling RAC installable. Refer to My Oracle Support Document 244241.1 Rolling Patch – OPatch Support for RAC.

If this is not a RAC environment, shut down all instances and listeners associated with the Oracle home that you are updating.

Apply the patch

[CDB2] oracle@localhost:~/12cPSU/19303936

$ opatch apply

Normal output here – nothing has changed

Start all dbs in the home

Start the CDB:

[setsid] to CDB
startup

If this is multitenant then start any PDBs that are not open:

select name, open_mode from v$pdbs order by 1;
NAME                                    OPEN_MODE
---------------------------------------- ------------------------------
PDB$SEED                                 READ ONLY
PDB1                                     MOUNTED
PDB2                                     MOUNTED
alter pluggable database [all|<<name>>] open;

Start the listener(s) if it was stopped too.

Load SQL into all dbs

Prior to 12c you needed to connect to all databases individually and run catbundle.sql psu apply. Now in 12c, you only need to run datapatch -verbose. This will connect to the CDB$ROOT, PDB$SEED and all **open** PDBs and run the SQL updates:

cd $ORACLE_HOME/OPatch
[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ ./datapatch -verbose

 

SQL Patching tool version 12.2.0.0.0 on Tue Sep 22 12:49:21 2015
Copyright (c) 2014, Oracle. All rights reserved.

 

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_12889.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
Determining current state...done

 

Current state of SQL patches:
Bundle series PSU:
ID 1 in the binary registry and not installed in any PDB

 

Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
   Nothing to roll back
   The following patches will be applied:
     19303936 (Database Patch Set Update : 12.1.0.2.1 (19303936))

 

Installing patches...
Patch installation complete. Total patches installed: 3

 

Validating logfiles...
Patch 19303936 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_CDBROOT_2015Sep22_12_50_10.log (no errors)
Patch 19303936 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDBSEED_2015Sep22_12_50_14.log (no errors)
Patch 19303936 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDB1_2015Sep22_12_50_15.log (no errors)
SQL Patching tool complete on Tue Sep 22 12:50:19 2015

Note that PDB2 was not picked up; that is because I left it MOUNTED and not open.

So what happens now if I try and open it?

[CDB2] oracle@localhost:/oradata/diag/rdbms/cdb2/CDB2/trace
$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 22 13:13:30 2015

 

Copyright (c) 1982, 2014, Oracle. All rights reserved.

 

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                      READ ONLY NO
         3 PDB2                           MOUNTED
         4 PDB1                           READ WRITE NO

 

  SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

  no rows selected

SQL> alter pluggable database pdb2 open;

 

Warning: PDB altered with errors.

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
         3 PDB2                           READ WRITE YES
         4 PDB1                           READ WRITE NO
SQL> col time for a20
SQL> col name for a20
SQL> col cause for a20 wrap
SQL> col status for a20
SQL> col message for a60 wrap
SQL> set lines 200
SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS where cause='SQL Patch' order by name;

 

TIME                 NAME                 CAUSE               STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.14.51.4 PDB2                 SQL Patch           PENDING             PSU bundle patch 1: Installed in Database Patch Set Update :
18558 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB. 

So it tells us that PDB2 does not have the PSU installed.

The fix here is to rerun datapatch now that PDB2 is open:

cd /u01/app/oracle/product/12.1.0.2/OPatch
[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ ./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Tue Sep 22 13:19:15 2015
Copyright (c) 2014, Oracle. All rights reserved.

 

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_14499.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
Determining current state...done

 

Current state of SQL patches:
Bundle series PSU:
ID 1 in the binary registry and ID 1 in PDB CDB$ROOT, ID 1 in PDB PDB$SEED, ID 1 in PDB PDB1

 

Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
   Nothing to roll back
   Nothing to apply
For the following PDBs: PDB2
   Nothing to roll back
   The following patches will be applied:
     19303936 (Database Patch Set Update : 12.1.0.2.1 (19303936))

 

Installing patches...
Patch installation complete. Total patches installed: 1

 

Validating logfiles...
Patch 19303936 apply (pdb PDB2): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDB2_2015Sep22_13_19_47.log (no errors)
SQL Patching tool complete on Tue Sep 22 13:19:49 2015

This time it only patched PDB2 and skipped over the others.

Now what does the database think?

[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 22 13:21:11 2015

 

Copyright (c) 1982, 2014, Oracle. All rights reserved.

 


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
         3 PDB2                           READ WRITE YES
         4 PDB1                           READ WRITE NO
SQL> col time for a20
SQL> col name for a20
SQL> col cause for a20 wrap
SQL> col status for a20
SQL> col message for a60 wrap
SQL> set lines 200
SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

 

TIME                 NAME                 CAUSE               STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.14.51.4 PDB2                 SQL Patch           PENDING             PSU bundle patch 1: Installed in Database Patch Set Update :
18558 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB.

No change – the violation is still showing as PENDING, so I need to bounce PDB2:

SQL> alter pluggable database pdb2 close;

 

Pluggable database altered.

 

SQL> alter pluggable database pdb2 open;

 

Pluggable database altered.

SQL> show pdbs

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
       3 PDB2                           READ WRITE NO
         4 PDB1                           READ WRITE NO

 

SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

 

TIME                 NAME                 CAUSE                STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.23.24.0 PDB2                 SQL Patch           RESOLVED            PSU bundle patch 1: Installed in Database Patch Set Update :
47678 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB.

The row is still returned but the STATUS is now RESOLVED.

Moving PDB between PSU versions

In this example we will move PDB1 from CDB1 to CDB2 which is at a higher PSU.

Create new home

Create separate ORACLE_HOME and CDB2 and apply PSU at version higher than existing.

Run datapatch on new home to apply PSU to CDB$ROOT and PDB$SEED.

 

Stop application and existing PDB

Downtime will start now.

Stop the PDB1:

setsid [CDB1]
sysdba
ALTER PLUGGABLE DATABASE PDB1 CLOSE;
Pluggable database altered.

Unplug PDB1 from CDB1

Unplug the PDB into metadata xml file:

ALTER SESSION SET CONTAINER = CDB$ROOT;
Session altered.
SQL> ALTER PLUGGABLE DATABASE PDB1 UNPLUG INTO '/u01/oradata/CDB1/PDB1/PDB1.xml';
Pluggable database altered

Plug PDB1 into CDB2.

Plug metadata into CDB2:

setsid [CDB2]
sysdba
SQL> CREATE PLUGGABLE DATABASE PDB1
     USING '/u01/oradata/CDB1/PDB1/PDB1.xml'
     MOVE FILE_NAME_CONVERT = ('/u01/oradata/CDB1/PDB1','/u01/oradata/CDB2/PDB1');
Pluggable database created.

The use of the MOVE clause makes the new pluggable database creation very quick, since the database files are not copied but only moved on the file system. This operation is immediate if using the same file system.

Now open up PDB1 on CDB2:

SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN;
Pluggable database altered

Load modified SQL files into the database with Datapatch tool

Run datapatch to load SQL into PDB1

setsid [CDB2]
cd $ORACLE_HOME/OPatch
./datapatch -verbose

Start application

Downtime ends and the application can be restarted

Inventory Reporting

New at 12c

Simon Pane at Pythian has produced a very useful script which we converted into a shell script http://www.pythian.com/blog/oracle-database-12c-patching-dbms_qopatch-opatch_xml_inv-and-datapatch/

You can now query what is applied to both the home and the database by querying just the database at 12c.
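This is not Simon’s script, just a minimal sketch of the two sources it pulls together – the SQL patch registry inside the database and the opatch inventory read through DBMS_QOPATCH:

-- what datapatch has recorded in each container
select con_id, patch_id, patch_uid, status, description
from   cdb_registry_sqlpatch
order  by con_id;

-- what opatch has recorded in the $ORACLE_HOME inventory, readable from inside the database
select xmltransform(dbms_qopatch.get_opatch_lsinventory,
                    dbms_qopatch.get_opatch_xslt)
from   dual;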


 

List of PSUs applied to both the $OH and the DB

 


NAME             PATCH_ID PATCH_UID ROLLBACK STATUS         DESCRIPTION
--------------- ---------- ---------- -------- --------------- ------------------------------------------------------------
JH1PDB           19769480   18350083 true    SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)
JHPDB             19769480   18350083 true     SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)
CDB$ROOT         19769480   18350083 true     SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)

 


PSUs installed into the $OH but not applied to the DB

 


NAME             PATCH_ID PATCH_UID DESCRIPTION
--------------- ---------- ---------- ------------------------------------------------------------
JH2PDB            19769480   18350083 Database Patch Set Update : 12.1.0.2.2 (19769480)

 


PSUs applied to the DB but not installed into the $OH

 


no rows selected

Note – PDB$SEED is normally READ ONLY and hence no record is returned from the CDB_REGISTRY_SQLPATCH view, so this PDB is excluded from this SQL.



Purge_stats – a worked example of improving performance and reducing storage


I have written a number of blog entries around managing the SYSAUX tablespace and more specifically the stats tables that are in there.

https://jhdba.wordpress.com/2014/07/09/tidying-up-sysaux-removing-old-snapshots-which-you-didnt-know-existed

https://jhdba.wordpress.com/2009/05/19/purging-statistics-from-the-sysaux-tablespace

https://jhdba.wordpress.com/2011/08/23/939/

This is another one around the same theme. What I offer in this one is some new scripts to identify what is working and what is not, along with a worked example with real numbers and timings.

Firstly let’s define the problem. This is a large 11.2.0.4 database running on an 8 node Exadata cluster, with lots of daily loads and an ongoing problem in keeping statistics up to date. The daily automatic maintenance window runs for 5 hours Mon-Fri and 20 hours each weekend day. Around 50% of that midweek window is spent purging stats, which doesn’t leave a lot of time for much else.

How do I know that?

col CLIENT_NAME form a25
COL MEAN_JOB_DURATION form a35
col MAX_DURATION_LAST_30_DAYS form a35
col MEAN_JOB_CPU form a35
set lines 180
col window_name form a20
col job_info form a70

 

SQL> select client_name from dba_autotask_client;
 CLIENT_NAME
----------------------------------------------------------------
sql tuning advisor
auto optimizer stats collection
auto space advisor
 
SQL>select client_name, MEAN_JOB_DURATION, MEAN_JOB_CPU, MAX_DURATION_LAST_30_DAYS from dba_autotask_client;
CLIENT_NAME         MEAN_JOB_DURATION                   MEAN_JOB_CPU                        MAX_DURATION_LAST_30_DAYS
-------------------- ----------------------------------- ----------------------------------- -----------------------------------
auto optimizer stats +000000000 03:37:36.296875000       +000000000 01:35:20.270514323       +000 19:59:56
auto space advisor   +000000000 00:12:49.440677966       +000000000 00:04:45.584406780
sql tuning advisor   +000000000 00:11:53.266666667       +000000000 00:10:22.344666667
 
SQL>select client_name,window_name, job_info from DBA_AUTOTASK_JOB_HISTORY order by window_start_time
 CLIENT_NAME         WINDOW_NAME         JOB_INFO
-------------------- -------------------- ---------------------------------------------------------------------------
 auto optimizer stats WEDNESDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats THURSDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats FRIDAY_WINDOW       REASON="Stop job called because associated window was closed"
 collection
auto optimizer stats SATURDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats SUNDAY_WINDOW       REASON="Stop job called because associated window was closed"

 So we know that the window is timing out and that the stats collection is taking an average of 3 hours. Another way is seeing how long the individual jobs take

Col operation format a60
Col start_time format a20
Col end_time form a20
Col duration form a10
alter session set nls_date_format = 'dd-mon-yyyy hh24:mi';
     select operation||decode(target,null,null,' for '||target) operation
          ,cast(start_time as date) start_time
          ,cast(end_time as date) end_time
          ,to_char(floor(abs(cast(end_time as date)-cast(start_time as date))*86400/3600),'FM09')||':'||
          to_char(floor(mod(abs(cast(end_time as date)-cast(start_time as date))*86400,3600)/60),'FM09')||':'||
         to_char(mod(abs(cast(end_time as date)-cast(start_time as date))*86400,60),'FM09') as DURATION
    from dba_optstat_operations
    where start_time between to_date('20-sep-2015 16','dd-mon-yyyy hh24') and to_date('10-oct-2015 21','dd-mon-yyyy hh24')
    and operation ='purge_stats'
 
OPERATION                                                   START_TIME           END_TIME             DURATION
------------------------------------------------------------ -------------------- -------------------- ----------
purge_stats                                                 30-sep-2015 17:31   30-sep-2015 20:02   02:31:01
purge_stats                                                  02-oct-2015 17:22   02-oct-2015 20:00   02:37:42
purge_stats                                                 28-sep-2015 18:04   28-sep-2015 20:38   02:34:18
purge_stats                                                 29-sep-2015 17:35   29-sep-2015 20:00   02:25:04
purge_stats                                                 05-oct-2015 17:53   05-oct-2015 20:48   02:55:20
purge_stats                                                 01-oct-2015 17:17   01-oct-2015 19:28   02:11:17
 
6 rows selected.

Another (minor) factor was how much space the optimizer stats were taking in the SYSAUX tablespace – over 500Gb, which does seem a lot for 14 days of history.

COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40

 

     SELECT occupant_name "Item",
     space_usage_kbytes/1048576 "Space Used (GB)",
     schema_name "Schema",
     move_procedure "Move Procedure"
     FROM v$sysaux_occupants
     WHERE occupant_name in ('SM/OPTSTAT')
     ORDER BY 1;

 


Item                     Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         527.17 SYS

I ran through some of the techniques mentioned in the posts above – checking the retention period was 14 days, checking we had no data older than that, checking that the tables were being partitioned properly and split into new partitions. There was no obvious answer as to why the purge was taking 2.5 hours every day, so I set a trace up, ran the purge manually and viewed the tkprof output.

ALTER SESSION SET TRACEFILE_IDENTIFIER = "JOHN";
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
set timing on
exec dbms_stats.purge_stats(sysdate-14);

 

There seemed to be quite a few entries for sql_ids similar to the one below –each taking around 10 minutes to insert data into a partition

SQL ID: 66yqxrmjwfsnr Plan Hash: 3442357004
insert /*+ RELATIONAL("WRI$_OPTSTAT_HISTHEAD_HISTORY")
PARALLEL("WRI$_OPTSTAT_HISTHEAD_HISTORY",1) APPEND NESTED_TABLE_SET_SETID
NO_REF_CASCADE */ into "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" partition
("SYS_P894974") ("OBJ#","INTCOL#","SAVTIME","FLAGS","NULL_CNT","MINIMUM",
"MAXIMUM","DISTCNT","DENSITY","LOWVAL","HIVAL","AVGCLN","SAMPLE_DISTCNT",
"SAMPLE_SIZE","TIMESTAMP#","EXPRESSION","COLNAME","SPARE1","SPARE2",
"SPARE3","SPARE4","SPARE5","SPARE6") (select /*+
RELATIONAL("WRI$_OPTSTAT_HISTHEAD_HISTORY")
PARALLEL("WRI$_OPTSTAT_HISTHEAD_HISTORY",1) */ "OBJ#" ,"INTCOL#" ,"SAVTIME"
,"FLAGS" ,"NULL_CNT" ,"MINIMUM" ,"MAXIMUM" ,"DISTCNT" ,"DENSITY" ,"LOWVAL" ,
"HIVAL" ,"AVGCLN" ,"SAMPLE_DISTCNT" ,"SAMPLE_SIZE" ,"TIMESTAMP#" ,
"EXPRESSION" ,"COLNAME" ,"SPARE1" ,"SPARE2" ,"SPARE3" ,"SPARE4" ,"SPARE5" ,
"SPARE6" from "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" partition
("SYS_P894974") ) delete global indexes
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1     71.29     571.77     498402      26399    1322741           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     71.29     571.78     498402      26399    1322741           0

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD AS SELECT  (cr=26399 pr=498402 pw=60633 time=571778609 us)
   5398837    5398837    5398837   PARTITION RANGE SINGLE PARTITION: 577 577 (cr=22524 pr=22504 pw=0 time=9824150 us cost=3686 size=399126189 card=5183457)
   5398837    5398837    5398837    TABLE ACCESS FULL WRI$_OPTSTAT_HISTHEAD_HISTORY PARTITION: 577 577 (cr=22524 pr=22504 pw=0 time=9169817 us cost=3686 size=399126189 card=5183457)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    415262        0.16        458.40
  db file scattered read                        370        0.16         11.96
  direct path write temp                       2707        0.16         36.48
  db file parallel read                           2        0.02          0.04
  direct path read temp                        3742        0.04          8.15
********************************************************************************

I then spent some time reviewing that table and its partitions – they seemed well balanced

SQL> select partition_name from dba_tab_partitions where table_name = 'WRI$_OPTSTAT_HISTHEAD_HISTORY';

 

PARTITION_NAME
------------------------------
SYS_P909952
SYS_P908538
SYS_P906715
SYS_P905214
P_PERMANENT

 

select PARTITION_NAME,HIGH_VALUE,TABLESPACE_NAME,NUM_ROWS,LAST_ANALYZED
from DBA_TAB_PARTITIONS where table_owner ='SYS' and table_name='WRI$_OPTSTAT_HISTHEAD_HISTORY' order by 1;

 

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME               NUM_ROWS LAST_ANAL
------------------------------ -------------------------------------------------------------------------------- ------------------------------ ---------- ---------
P_PERMANENT                   TO_DATE(' 2014-02-23 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX0 15-SEP-15
SYS_P905214                   TO_DATE(' 2015-09-29 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX 5026601 29-SEP-15
SYS_P906715                   TO_DATE(' 2015-09-30 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX 5178283 30-SEP-15
SYS_P908538                   TO_DATE(' 2015-10-01 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX 4977859 01-OCT-15
SYS_P909952                   TO_DATE(' 2015-10-09 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX

 

select trunc(SAVTIME), count(*) from WRI$_OPTSTAT_HISTGRM_HISTORY group by trunc(SAVTIME) order by 1
TRUNC(SAV   COUNT(*)
--------- ----------
18-SEP-15   22741579
19-SEP-15   24299504
20-SEP-15   22816509
21-SEP-15   24200455
22-SEP-15   24156330
23-SEP-15   24407643
24-SEP-15   23469620
25-SEP-15   23221382
26-SEP-15   25372495
27-SEP-15   23144212
28-SEP-15   23522809
29-SEP-15   24362715
30-SEP-15   25418527
01-OCT-15   24383030

I decided to rebuild the index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST really only because it seemed to be very big for the number of rows it was containing – 192Gb


col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type order by 1 asc
/
 
        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 WRI$_OPTSTAT_SYNOPSIS_PARTGRP            TABLE
         0 WRI$_OPTSTAT_AUX_HISTORY                 TABLE
        72 WRI$_OPTSTAT_OPR                         TABLE
       385 WRI$_OPTSTAT_SYNOPSIS_HEAD$              TABLE
       845 WRI$_OPTSTAT_IND_HISTORY                 TABLE
     1,221 WRI$_OPTSTAT_TAB_HISTORY                 TABLE

 

select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type order by 1
/        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
         0 I_WRI$_OPTSTAT_SYNOPPARTGRP              INDEX
         0 WRH$_PLAN_OPTION_NAME_PK                 INDEX
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
        43 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
       293 I_WRI$_OPTSTAT_SYNOPHEAD                 INDEX
       649 I_WRI$_OPTSTAT_IND_ST                    INDEX
       801 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
       978 I_WRI$_OPTSTAT_TAB_ST                    INDEX
     1,474 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
     4,968 I_WRI$_OPTSTAT_HH_ST                     INDEX
     6,807 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX
   129,509 I_WRI$_OPTSTAT_H_ST                      INDEX
   192,304 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX
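The rebuild statement itself is not shown in the post; something along these lines would do it (a sketch, not the exact command used, and best run in a quiet period given how busy these history tables are):

alter index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST rebuild parallel 8;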

That index dropped to 13Gb – needless to say I continued with the rest

         MB SEGMENT_NAME                             SEGMENT
---------- ---------------------------------------- ------
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
         0 I_WRI$_OPTSTAT_SYNOPPARTGRP              INDEX
         0 WRH$_PLAN_OPTION_NAME_PK                 INDEX
         2 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
        39 I_WRI$_OPTSTAT_IND_ST                    INDEX
        47 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
        56 I_WRI$_OPTSTAT_TAB_ST                    INDEX
        72 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
       200 I_WRI$_OPTSTAT_SYNOPHEAD                 INDEX
       482 I_WRI$_OPTSTAT_HH_ST                     INDEX
       642 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX
     9,159 I_WRI$_OPTSTAT_H_ST                      INDEX
    12,560 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX

An overall saving of 300Gb

Item                     Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         214.12 SYS

 

I wasn’t too hopeful that this would benefit the purge routine because in the trace file above you can see that it is not using an index.

exec dbms_stats.purge_stats(sysdate-14);
Elapsed: 01:47:17.15

Better but not that much, so I rebuilt the tables using a move command with parallel 8 and then rebuilt the indexes again. I intentionally left the tables and indexes with the parallel setting they acquire from the move/rebuild, just for testing purposes.

select 'alter table '||segment_name||' move tablespace SYSAUX parallel 8;' from dba_segments where tablespace_name = 'SYSAUX' and segment_name like '%OPT%' and segment_type='TABLE'

 

alter table WRI$_OPTSTAT_TAB_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_IND_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_AUX_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_OPR move tablespace SYSAUX parallel 8;
alter table WRH$_OPTIMIZER_ENV move tablespace SYSAUX parallel 8;
alter table WRH$_PLAN_OPTION_NAME move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_SYNOPSIS_PARTGRP move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_SYNOPSIS_HEAD$ move tablespace SYSAUX parallel 8;

Then rebuilt the indexes as above
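Again the exact statements are not listed; generating them in the same way as the table moves would be something like this sketch:

select 'alter index '||segment_name||' rebuild parallel 8;'
from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type = 'INDEX';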

Now we see the space benefits. Space down to 182Gb from 527Gb – 345Gb saved


 

Item                      Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         182.61 SYS

The first purge with no data to purge was instantaneous

SQL> exec dbms_stats.purge_stats(sysdate-14);
Elapsed: 00:00:00.91

 

The second, which was purging a day's worth, only took 9 minutes

SQL> exec dbms_stats.purge_stats(sysdate-13);
Elapsed: 00:09:40.65

I then put the tables and indexes back to a degree of 1 and ran a purge of another day's worth of data, which showed a consistent timing

SQL> exec dbms_stats.purge_stats(sysdate-12);
Elapsed: 00:09:32.05
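Putting the degree back was simply a case of reversing what the move/rebuild left behind – something along these lines (a sketch, using the same object lists as above):

select 'alter table '||table_name||' noparallel;'
from dba_tables
where owner = 'SYS' and table_name like 'WRI$_OPTSTAT%';

select 'alter index '||index_name||' noparallel;'
from dba_indexes
where owner = 'SYS' and index_name like 'I_WRI$_OPTSTAT%';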

It is clear from the number of posts on managing optimizer stats and their history that maintenance and management is required and I hope my set of posts help readers to do just that.

The following website has some very good information on it http://maccon.wikidot.com/sysaux-stats and I must credit my colleague Andy Challen for generating some good ideas and providing a couple of scripts. Talking through a plan of action does seem to be a good way of working.

 


12.1.0.2 enhancement – large trace file produced when cursor becomes obsolete


A minor change came in with 12.1.0.2 which is causing large trace files to be produced under certain conditions.

The traces are produced as a result of an enhancement introduced in an unpublished bug.

The aim of the bug is to improve cursor sharing diagnostics by dumping information about an obsolete parent cursor and its child cursors after the parent cursor has been obsoleted N times.
A parent cursor will be marked as obsolete after a certain number of child cursors have been produced under that parent as defined by the parameter “_cursor_obsolete_threshold”. In 12.1, the default is 1024 child cursors.
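A statement only becomes a candidate for this dump once it has accumulated that many child cursors, so a quick check of version counts shows whether anything is heading that way:

select sql_id, version_count, executions, sql_text
from v$sqlarea
where version_count > 100
order by version_count desc;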

A portion of the trace file is shown below – we are only seeing it in a 12.1.0.2 OEM database at the moment, and the problem with obsoleted parent cursors is not affecting us in a noticeable manner, although the size and frequency of the trace files are.

----- Cursor Obsoletion Dump sql_id=a1kj6vkgvv3ns -----
Parent cursor obsoleted 1 time(s). maxchild=1024 basephd=0xc4720a8f0 phd=0xc4720a8f0

 

The SQL being:

SELECT TP.PROPERTY_NAME, TP.PROPERTY_VALUE FROM MGMT_TARGET_PROPERTIES TP, MGMT_TARGETS T WHERE TP.TARGET_GUID = T.TARGET_GUID AND T.TARGET_NAME = :B2 AND T.TARGET_TYPE = :B1

 

The ‘feature’ is used to help Oracle support track issues with cursor sharing.

There is a parameter which can be used to stop or reduce the frequency of traces from MoS note 1955319.1

The dump can be controlled by the parameter “_kks_obsolete_dump_threshold” that can have a value between 0 and 8.

When the value is equal to 0, the obsolete cursor dump will be disabled completely:

alter system set "_kks_obsolete_dump_threshold" = 0;

When set to a value N between 1 and 8, the cursor will be dumped after cursor has been obsoleted N times:
By default, a parent cursor that is obsoleted is dumped when the parent SQL is obsoleted first time, i.e., N is 1.

alter system set "_kks_obsolete_dump_threshold" = 8;

 

The workaround is to set this underscore parameter, which controls when the cursor will be dumped, with a range of 0 to 8 – 0 being disabled and 8 being after 8 iterations of being made obsolete.

We will be raising a support call re the cursor problem but probably setting the dump threshold to 8 at first and then 0 if we still keep on getting large traces. The default is set to 1.


Why do you need to resetlogs after a cold backup restore


I posted a routine on how to take a cold backup locally to disk and then restore it back in 2010. Last week I was asked in a comment ‘why did you have to open the database using resetlogs?’  A very good question I thought so I proceeded to backup and recover just as the blog showed and I now know why.

Because Oracle will not let you do otherwise

Let me run through the example again and I will add a bit of commentary.

The original blog entry was https://jhdba.wordpress.com/2010/03/22/recovering-from-a-cold-backup-using-rman/

The basis of the commands were

startup nomount
run
 {
 allocate channel c1 device type disk format ‘/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_%d_S_%s_P_%p_T_%t’;
 restore controlfile from ‘/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_MOMPRE1A_S_37_P_1_T_713884663’;
 }
alter database mount;
run
 {
 allocate channel c1 device type disk format ‘/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_%d_S_%s_P_%p_T_%t’;
 allocate channel c2 device type disk format ‘/staging/oretail/KEEP_UNTIl_15APRIL2010/backup_db_%d_S_%s_P_%p_T_%t’;
 restore database tag=’rman_backfulcold_MOMPREPERTEST’;
 }
–YOU DO NOT NEED TO RECOVER DATABASE otherwise it rolls forward to the current time
alter database open resetlogs;

As you can see I open the database with a resetlogs and yet it is a completely clean recovery and I should be able to do a noresetlogs.

Let’s go through a similar example very quickly.

 

startup mount
run
 {
 allocate channel c1 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 allocate channel c2 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 backup full
 tag rman_backfulcold_TST11204
 (database include current controlfile);
 }

 

List the backups to see what we have got

RMAN> List backup summary;

903     B  F  A DISK        2015-10-15:13:18:22 1       1       NO         RMAN_BACKFULCOLD_TST11204
904     B  F  A DISK        2015-10-15:13:18:29 1       1       NO         TAG20151015T131828


RMAN> list backupset 903;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
903     Full    12.40G     DISK        00:12:49     2015-10-15:13:18:22
        BP Key: 903   Status: AVAILABLE  Compressed: NO  Tag: RMAN_BACKFULCOLD_TST11204
        Piece Name: /testmaster_data/KEEP_UNTIL_JOHN/backup_db_TST11204_S_914_P_1_T_893163933
  List of Datafiles in backup set 903
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  2       Full 10931985470038 2015-10-15:13:04:20 +DATA/tst11204/datafile/sysaux.257.851337533
  3       Full 10931985470038 2015-10-15:13:04:20 +DATA/tst11204/datafile/undotbs1.258.851337533
  4       Full 10931985470038 2015-10-15:13:04:20 +DATA/tst11204/datafile/users.259.851337533

RMAN> list backupset 904;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
904     Full    11.23M     DISK        00:00:01     2015-10-15:13:18:29
        BP Key: 904   Status: AVAILABLE  Compressed: NO  Tag: TAG20151015T131828
        Piece Name: +FRA/tst11204/autobackup/2015_10_15/s_893163860.3783.893164709
  Control File Included: Ckp SCN: 10931985470038   Ckp time: 2015-10-15:13:04:20
  SPFILE Included: Modification time: 2015-10-15:13:05:29
  SPFILE db_unique_name: TST11204
 

I now restore the controlfile from autobackup, startup mount and restore from the cold backup

RMAN> run
 {
 allocate channel c1 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 restore controlfile from autobackup;
 }2> 3> 4> 5>
allocated channel: c1
 channel c1: SID=10 device type=DISK
Starting restore at 2015-10-15:15:19:00
recovery area destination: +FRA
 database name (or database unique name) used for search: TST11204
 channel c1: AUTOBACKUP +FRA/TST11204/AUTOBACKUP/2015_10_15/s_893170083.1419.893171427 found in the recovery area
 AUTOBACKUP search with format "%F" not attempted because DBID was not set
 channel c1: restoring control file from AUTOBACKUP +FRA/TST11204/AUTOBACKUP/2015_10_15/s_893170083.1419.893171427
 channel c1: control file restore from AUTOBACKUP complete
 output file name=+DATA/tst11204/controlfile/current.260.851337899
 Finished restore at 2015-10-15:15:19:03
 released channel: c1
RMAN> startup mount;
database is already started
 database mounted
RMAN>
 run
 {
 allocate channel c1 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 allocate channel c2 device type disk format '/testmaster_data/KEEP_UNTIL_JOHN/backup_db_%d_S_%s_P_%p_T_%t';
 restore database;
 }
 RMAN> 2> 3> 4> 5> 6>

I now try to open the database using NORESETLOGS

RMAN> alter database open noresetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "resetlogs, ;"
RMAN-01008: the bad identifier was: noresetlogs
RMAN-01007: at line 1 column 21 file: standard input

RMAN>  alter database open resetlogs;

database opened

The defaults for the ALTER DATABASE OPEN command are READ WRITE and NORESETLOGS, and yet the command is not even allowed when you have recovered using a backup controlfile. You may think there was a mistype in the command above that caused it to fail, but I and other DBAs typed it in several times and tried every permutation that exists. It is only when you type in RESETLOGS that the database takes the command as being valid.

I did wonder whether the autobackup was the problem so I tried a direct restore of the controlfile from the backup piece

RMAN> restore controlfile from '+FRA/tst11204/autobackup/2015_10_15/s_893163860.3783.893164709';
RMAN> restore database ......
RMAN>
RMAN> alter database open norestlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "resetlogs, ;"
RMAN-01008: the bad identifier was: norestlogs
RMAN-01007: at line 1 column 21 file: standard input

RMAN> alter database open resetlogs;

database opened

RMAN>

Pretty much the same result  – Oracle will not allow a NORESETLOGS to happen. So why is this? From the documentation on the ALTER DATABASE OPEN command

RESETLOGS | NORESETLOGS
========================
This clause determines whether Oracle Database resets the current log sequence number to 1, archives any unarchived logs (including the current log), and discards any redo information that was not applied during recovery, ensuring that it will never be applied. Oracle Database uses NORESETLOGS automatically except in the following specific situations, which require a setting for this clause:

You must specify RESETLOGS:

After performing incomplete media recovery or media recovery using a backup controlfile

There is a note on Metalink 1077022.1 which shows how you can get around it which I show below. To be honest I am not sure when you would need it but here it is anyway.

      RMAN> restore controlfile......
      RMAN> mount database ;
      RMAN> restore database ;
  • Copy the online redo logs to the desired location for new database.
  • Login to SQL*Plus and generate controlfile trace script ( please note that the database is mounted from rman after restoring controlfile ) :
      SQL> alter database backup controlfile to trace
            NORESETLOGS as '/tmp/ctl.sql' ;
      SQL> SHUTDOWN IMMEDIATE
  • Edit the controlfile if required. For example, to change the location of online redo logs copied.
  • Shutdown and STARTUP NOMOUNT the database and run the create controlfile script :
         SQL> STARTUP NOMOUNT
         SQL> @/tmp/ctl.sql
  •  Recover the database and open normal :
SQL> RECOVER DATABASE ;
 SQL> ALTER DATABASE OPEN ;

It is years since I ran a create controlfile script – the duplicate database option from RMAN has removed the need for that most of the time.

Another note referring to ORA-01610  – Doc ID 19007.1 also reiterates that when recovery has used a backup controlfile then a resetlogs is mandatory.

 


Startup PDB databases automatically when a container is started – good idea?


I posted a note on the Oracle-L mailing list about pluggable databases and why they are not opened automatically by default when the container database is opened. The post is below.

I am trying to get my head around the thing about how pluggable databases react after the container database is restarted.

Pre 12.1.0.2 it was necessary to put a startup trigger in to run a ‘alter pluggable database all open;’ command to move them from mounted to open.

Now 12.1.0.2 allows you to save a state in advance using ‘alter pluggable database xxx save state’ which does seem a step forward

However, why would the default not be to start all the pluggable databases (or services as they are seen) rather than leave them in a mounted state? Obviously Oracle have thought about this and changed the trigger method, maybe due to customer feedback, but I wonder why they have not gone the whole hog and started the services automatically.

I would much prefer to have the default to be up and running rather than relying on the fact that I have saved the state previously

I did get some interesting and very helpful responses. Jared Still made a couple of good points, the first being that the opening time for all the pluggable databases might be very long if you had 300 of them. That blew my mind a little and I must admit that I had considered scenarios where you might have half a dozen maximum, not into the hundreds.

I did a little test on a virtual 2 CPU, 16GB server, already loaded with 6 running non-container databases. I created 11 pluggables (I have created a new word there) from an existing one; each one took less than 2 minutes.

create pluggable database JH3PDB from JH2PDB;
create pluggable database JH4PDB from JH2PDB;
create pluggable database JH5PDB from JH2PDB;
create pluggable database JH6PDB from JH2PDB;
create pluggable database JH7PDB from JH2PDB;
create pluggable database JH8PDB from JH2PDB;
create pluggable database JH9PDB from JH2PDB;
create pluggable database JH10PDB from JH2PDB;
create pluggable database JH11PDB from JH2PDB;
create pluggable database JH12PDB from JH2PDB;
create pluggable database JH13PDB from JH2PDB;

 

They all default to the MOUNTED state so I then opened them all

SQL> alter pluggable database all open

Elapsed: 00:06:23.54

SQL> select con_id,dbid,NAME,OPEN_MODE from v$pdbs;
   CON_ID       DBID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2   99081426 PDB$SEED                      READ ONLY
         3 3520566857 JHPDB                         READ WRITE
         4 1404400467 JH1PDB                         READ WRITE
         5 1704082268 JH2PDB                         READ WRITE
         6 3352486718 JH3PDB                         READ WRITE
         7 2191215773 JH4PDB                         READ WRITE
         8 3937728224 JH5PDB                         READ WRITE
         9 731805302 JH6PDB                         READ WRITE
       10 1651785020 JH7PDB                         READ WRITE
       11 769231648 JH8PDB                         READ WRITE
       12 3682346625 JH9PDB                         READ WRITE
       13 2206923020 JH10PDB                       READ WRITE
       14 281114237 JH11PDB                       READ WRITE
       15 2251469696 JH12PDB                       READ WRITE
       16 260312931 JH13PDB                       READ WRITE

 

So 6:30 to open 11 PDBs might lead to a very long time for a very large number. That really answered the question I had asked. However there were more valuable nuggets to come.

Stefan Koehler pointed to an OTN community post where he advocated that the new (12.1.0.2) 'save state' for PDBs should also be extended to PDB services, so that the service is started when the PDB is opened rather than having to use a custom script or a trigger. That seems a very reasonable proposal to me and will get my vote.

Jared had an enhancement idea: instead of having a saved state, which I must admit is a bit messy, why not a PDB control table with a START_MODE column?

Possible values

– NEVER:  Never open the pdb at startup

– ALWAYS: Always start this pdb.

– ON_DEMAND:  Start this pdb when someone tries to connect to it.

And then some mechanism to override this.

‘startup open cdb nomount nopdb’ for instance.

It does sound an interesting idea, especially the ON_DEMAND option. I would have thought that if you were thinking along those lines, a logical extension might be to auto unmount PDBs when they have not been used for a while, again controllable by a table.
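
For completeness, here is a minimal sketch of the two approaches discussed above: the pre-12.1.0.2 startup trigger and the 12.1.0.2 'save state' command. The trigger name is illustrative and the PDB name is one of the test PDBs created above.

-- Pre-12.1.0.2: open all PDBs from a startup trigger (trigger name is illustrative)
CREATE OR REPLACE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'alter pluggable database all open';
END;
/

-- 12.1.0.2 onwards: open the PDB once and preserve that state across CDB restarts
ALTER PLUGGABLE DATABASE JH2PDB OPEN;
ALTER PLUGGABLE DATABASE JH2PDB SAVE STATE;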

 


Creating standby database inc DG Broker and 12c changes


I thought I would refresh my knowledge of creating a standby database and at the same time include some DataGuard Broker configuration, which also throws in some of the changes that came along with 12c.

Overview

Database name: QUICKIE, host: server1, storage: ASM disk

Database name: STAN, host: server2, storage: ASM disk

Create a standby database STAN using ACTIVE DUPLICATE from the source database QUICKIE

 

QUICKIE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = QUICKIE)
)
)

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

server2 – listener.ora – note I have selected 1524 as that port is not currently in use and I do not want to interfere with any existing databases

 

LISTENERCLONE =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
)

SID_LIST_LISTENERCLONE =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = STAN)
(ORACLE_HOME = /app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = STAN)
)
)

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF

server2 – tnsnames.ora

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

LISTENERCLONE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
(CONNECT_DATA =
(SERVICE_NAME = STAN)
)
)

 

  1. Start clone listener on server2

lsnrctl start LISTENERCLONE

The alias can then be checked with tnsping:

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 18-MAY-2015 09:19:27

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:

/app/oracle/product/12.1.0.2/grid/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias

Attempting to contact (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1524)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = STAN)))

OK (10 msec)

 

  2. Create a pfile on server2 – $ORACLE_HOME/dbs/initSTAN.ora

 

db_unique_name=STAN

compatible='12.1.0.2'

db_name='QUICKIE'

local_listener='server2:1524'

 

 

  3. Create password file for STAN (use SOURCE DB SYS password)

 

orapwd file=orapwSTAN password=pI7KU4ai

 

or copy the source passwd file

Create standby logs on the primary database if they do not exist already:

alter database add standby logfile thread 1 group 4 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 5 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 6 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 7 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

 

 

  4. Startup the database in nomount on the standby server
[oracle@server2][STAN] $sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 22 15:09:28 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount

ORACLE instance started.

 

  5. Login to the RMAN console on the source server via
rman target sys/pI7KU4ai auxiliary sys/pI7KU4ai@STAN

connected to target database: QUICKIE (DBID=4212874924)

connected to auxiliary database: QUICKIE (not mounted)

 

  6. Run the restore script

 

run {

allocate channel ch1 type disk;

allocate channel ch2 type disk;

allocate auxiliary channel aux1 type disk;

allocate auxiliary channel aux2 type disk;

duplicate target database for standby from active database

SPFILE

set db_unique_name='STAN'

set audit_file_dest='/app/oracle/admin/STAN/adump'

set cluster_database='FALSE'

set control_files='+DATA/STAN/control01.ctl','+FRA/STAN/control02.ctl'

set db_file_name_convert='QUICKIE','STAN'

set local_listener='server2:1522'

set log_file_name_convert='QUICKIE','STAN'

set undo_tablespace='UNDOTBS1'

set audit_trail='DB'

nofilenamecheck;

}

 

If you get the error RMAN-05537: DUPLICATE without TARGET connection when auxiliary instance is started with spfile cannot use SPFILE clause then either remove the SPFILE parameter from the RMAN duplicate line above or start the STAN database with a parameter file not a spfile.

 

In 12c it seems to create a spfile after starting with an init.ora file unless you use the syntax

startup nomount pfile='/app/oracle/product/12.1.0.2/dbhome_1/dbs/spfileSTAN.ora'

 

I also got an error around DB_UNIQUE_NAME, which is new in 12c. This is because the standby existed previously (as I re-tested my instructions for this document) and a HAS/CRS resource already existed for the database name.

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of Duplicate Db command at 07/22/2015 15:45:57

RMAN-05501: aborting duplication of target database

RMAN-03015: error occurred in stored script Memory Script

RMAN-03009: failure of sql command on clone_default channel at 07/22/2015 15:45:57

RMAN-11003: failure during parse/execution of SQL statement: alter system set db_unique_name = 'STAN' comment= '' scope=spfile

ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

 

The fix is to remove that resource

srvctl remove database -d STAN

 

 

  7. Set parameters back on primary and standby and restart the databases to ensure that they are picked up
alter system reset log_archive_dest_1;

alter system reset log_archive_dest_2;

Set parameter on standby

alter system set local_listener='server2:1522' scope=both;

 

 

  8. Create the DataGuard Broker configuration at this point. Run from the primary, although it can be done on either.

 

Sometimes it is best to stop/start the DG Broker if it has already been created – after deleting the dr files as well

alter system set dg_broker_start=FALSE;

alter system set dg_broker_start=TRUE;

dgmgrl /

create configuration 'DGConfig1' as primary database is 'QUICKIE' connect identifier is QUICKIE;

Configuration "DGConfig1" created with primary database "QUICKIE"

add database 'STAN' as connect identifier is 'STAN' maintained as physical;

Database "STAN" added

edit database 'QUICKIE' set property 'DGConnectIdentifier'='QUICKIE';

edit database 'STAN' set property 'DGConnectIdentifier'='STAN';

The next 2 commands are required if you are not using Port 1521.

Assuming you are not using Oracle Restart (which is now deprecated anyway), you also require static entries to be defined in your listener.ora file, STAN_DGMGRL and QUICKIE_DGMGRL in this case.

edit database 'STAN' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=STAN_DGMGRL)(INSTANCE_NAME=STAN)(SERVER=DEDICATED)))';

edit database 'QUICKIE' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server1)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=QUICKIE_DGMGRL)(INSTANCE_NAME=QUICKIE)(SERVER=DEDICATED)))';
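
For reference, this is a minimal sketch of the kind of static entry needed on server2, added to the SID_LIST of whichever listener the StaticConnectIdentifier above points at (paths as used earlier in this post; the QUICKIE_DGMGRL entry on server1 follows the same pattern):

(SID_DESC =
(GLOBAL_DBNAME = STAN_DGMGRL)
(ORACLE_HOME = /app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = STAN)
)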

The below entries seemed to be picked up by default but it is worth checking and correcting with the commands below if necessary.


edit database 'QUICKIE' set property 'StandbyFileManagement'='AUTO';

edit database 'QUICKIE' set property 'DbFileNameConvert'='STAN,QUICKIE';

edit database 'QUICKIE' set property 'LogFileNameConvert'='STAN,QUICKIE';

edit database 'STAN' set property 'StandbyFileManagement'='AUTO';

edit database 'STAN' set property 'DbFileNameConvert'='QUICKIE,STAN';

edit database 'STAN' set property 'LogFileNameConvert'='QUICKIE,STAN';

 

Now to start the broker

 

enable configuration

show configuration

show database verbose 'QUICKIE'

show database verbose 'STAN'

validate database verbose 'QUICKIE'

validate database verbose 'STAN'

 

 

Let’s try a switchover. You need to have SUCCESS as the final line of show configuration before any switchover will work. You need to be connected as SYS/password in DGMGRL, not using DGMGRL /. The latter uses OS authentication and the former is database authentication.

DGMGRL> switchover to 'STAN';

Performing switchover NOW, please wait...

Operation requires a connection to instance "STAN" on database "STAN"

Connecting to instance "STAN"...

Connected as SYSDBA.

New primary database "STAN" is opening...

Oracle Clusterware is restarting database "QUICKIE" ...

Switchover succeeded, new primary is "STAN"


However when switching back the new primary QUICKIE opened but STAN hung

New primary database "QUICKIE" is opening...

Oracle Clusterware is restarting database "STAN" ...

shut down instance "STAN" of database "STAN"

start up instance "STAN" of database "STAN"

 

The database startup hangs and eventually times out. This is an issue around Oracle Restart, which is now deprecated anyway.

On the primary we can see a configuration for QUICKIE but there is not one on the standby for STAN

$srvctl config database -d STAN

PRCD-1120 : The resource for database STAN could not be found.

PRCR-1001 : Resource ora.stan.db does not exist

 

srvctl add database -d STAN -oraclehome '/app/oracle/product/12.1.0.2/dbhome_1' -role 'PHYSICAL_STANDBY'

 

Re-run the switchover and all should be well.

Migrating tablespaces across Endian platforms


This is a set of posts about migrating a database from one endian platform to another.

The long-term intention is to move a large (10Tb) 11.1.0.7 database on HP-UX to an OEL Linux server with minimum outage so that will include a database upgrade as well.

This first post is about migrating a self-contained set of schemas using transportable tablespace.

The HP-UX 11.1.0.7 database is called HPUXDB and it was created through dbca with the sample schemas created. The target database is 11.2.0.4 on OEL 5.8.

I already have an 11.2.0.4 installation on the OEL 5.8 server and the target database is called L18

Let’s check the endian versions

 

SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME        ENDIAN_FORMAT
----------- -------------------- --------------
          4 HP-UX IA (64-bit)    Big
         13 Linux x86 64-bit     Little

 

Set the tablespace EXAMPLE read only

 

 alter tablespace example read only;

Tablespace altered.

select file_name from dba_data_files;

+DATA/hpuxdb/datafile/example.352.886506805

EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'EXAMPLE', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SELECT * FROM transport_set_violations;

no rows selected

Export the data using the keywords transport_tablespaces

 

expdp directory=data_pump_dir transport_tablespaces=example dumpfile=hpux.dmp logfile=hpux.log

Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01": /******** AS SYSDBA

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table SYS.SYS_EXPORT_TRANSPORTABLE_01 successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is: /app/oracle/admin/HPUXDB/dpdump/hpux.dmp

******************************************************************************

Datafiles required for transportable tablespace EXAMPLE: +DATA/hpuxdb/datafile/example.352.886506805

Job SYS_EXPORT_TRANSPORTABLE_01 successfully completed at 13:47:55

sftp the export dump file to the target server – I am using the data_pump_dir directory again.

You also need to copy the datafile(s) of the tablespaces you are migrating.

Mine is on ASM – switch to the ASM database and run asmcmd. I am copying out to /tmp and then copying it across to the target server on /tmp for now

 

ASMCMD> cp ‘+DATA/hpuxdb/datafile/example.352.886506805’ ‘/tmp/example.dbf’

copying +DATA/hpuxdb/datafile/example.352.886506805 -> /tmp/example.dbf

ASMCMD>

 

sftp> cd /tmp

sftp> put /tmp/example.dbf /tmp/example.dbf

Uploading /tmp/example.dbf to /tmp/example.dbf

/tmp/example.dbf                                                         100% 100MB 11.1MB/s 11.8MB/s   00:09

Max throughput: 13.9MB/s

 

Create users who own objects in the tablespace to be migrated. Note that their default tablespace (EXAMPLE) does not exist as yet.

create user sh identified by sh;

grant connect,resource to sh;
grant unlimited tablespace to sh;
grant create materialized view to sh;

create user oe identified by oe;
grant connect,resource to oe;
grant unlimited tablespace to oe;

create user hr identified by hr;
grant connect,resource to hr;
grant unlimited tablespace to hr;

create user ix identified by ix;
grant connect,resource to ix;
grant unlimited tablespace to ix;

create user pm identified by pm;
grant connect,resource to pm;
grant unlimited tablespace to pm;

create user bi identified by bi;
grant connect,resource to bi;
grant unlimited tablespace to bi;

 

CONVERT TABLESPACE EXAMPLE

TO PLATFORM 'Linux x86 64-bit'

FORMAT='/tmp/%U';

 

$rmant

Recovery Manager: Release 11.1.0.7.0 - Production on Mon Aug 3 13:54:42 2015

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: HPUXDB (DBID=825022061)
using target database control file instead of recovery catalog

 

RMAN> CONVERT TABLESPACE EXAMPLE
TO PLATFORM 'Linux x86 64-bit'
FORMAT='/tmp/%U';

Starting conversion at source at 2015-08-03:13:56:18
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00005 name=+DATA/hpuxdb/datafile/example.352.886506805
converted datafile=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
Finished conversion at source at 2015-08-03:13:56:21

sftp> put /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2
Uploading /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2 to /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2

impdp directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles='/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2'

Import: Release 11.2.0.4.0 - Production on Mon Aug 3 13:24:06 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management and OLAP options
Master table SYS.SYS_IMPORT_TRANSPORTABLE_01 successfully loaded/unloaded

Starting SYS.SYS_IMPORT_TRANSPORTABLE_01 /******** AS SYSDBA directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_02qdm380

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT

ORA-39083: Object type OBJECT_GRANT failed to create with error:

ORA-01917: user or role 'BI' does not exist

Failing sql is:

GRANT SELECT ON OE.CUSTOMERS TO BI;

ORA-39083: Object type OBJECT_GRANT failed to create with error:

ORA-01917: user or role 'BI' does not exist

Failing sql is:

GRANT SELECT ON OE.WAREHOUSES TO BI;

ORA-31685: Object type MATERIALIZED_VIEW:SH.CAL_MONTH_SALES_MV; failed due to insufficient privileges. Failing sql is:

CREATE MATERIALIZED VIEW SH.CAL_MONTH_SALES_MV (CALENDAR_MONTH_DESC, DOLLARS) USING (CAL_MONTH_SALES_MV(9, 'HPUXDB', 2, 0, 0, SH.TIMES, '2015-07-31 11:55:08', 8, 70899, '2015-07-31 11:55:09', '', 1, '0208', 822277, 0, NULL, 1, "SH", "SALES", '2015-07-31 11:55:08', 33032, 70841, '2015-07-31 11:55:09', '', 1, '88', 822277, 0, NULL), 1183809, 9, ('1950-01-01 12:00:00', 21,

Dropped the users in the example schema and dropped the example tablespace, then re-created the users including a BI user, and granted CREATE MATERIALIZED VIEW to SH. Then re-ran the import.

 

impdp directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles='/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2'

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, Automatic Storage Management and OLAP options

Master table SYS SYS_IMPORT_TRANSPORTABLE_01 successfully loaded/unloaded

Starting SYS.SYS_IMPORT_TRANSPORTABLE_01/******** AS SYSDBA directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type TRANSPORTABLE_EXPORT/INDEX

Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT

Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/COMMENT

Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT

Processing object type TRANSPORTABLE_EXPORT/TRIGGER

Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX

Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX

Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

Job SYS.SYS_IMPORT_TRANSPORTABLE_01 completed at Mon Aug 3 14:04:42 2015 elapsed 0 00:00:52

 

Post Migration

alter tablespace example read write;

Alter the users created earlier to have their default tablespace as EXAMPLE; the imported objects are in EXAMPLE but new ones will go to USERS or whichever the default tablespace is set to.
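
A minimal example, repeated for each of the schema owners created earlier:

alter user sh default tablespace example;
alter user oe default tablespace example;
alter user hr default tablespace example;
alter user ix default tablespace example;
alter user pm default tablespace example;
alter user bi default tablespace example;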

Remove all dmp files and interim files that were created (in /tmp on both servers in my demo).

Migrating directly from ASM to ASM

For ease in the example above I migrated out from ASM to a filesystem. Here I will demonstrate a copy going from ASM to ASM.

 

create user test identified by xxxxxxxxx default tablespace example1;

User created.

grant connect, resource to test;

Grant succeeded.

grant unlimited tablespace to test;

Grant succeeded.

connect test/xxxxxxxx

Connected.

create table example1_data tablespace example1 as select * from all_objects;

Table created.

select segment_name, tablespace_name from user_segments;

SEGMENT_NAME                   TABLESPACE_NAME
------------------------------ ------------------------------
EXAMPLE1_DATA                  EXAMPLE1

 

EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'EXAMPLE1', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SELECT * FROM transport_set_violations;

no rows selected

alter tablespace example1 read only;

Now export the metadata

expdp …..

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:

/app/oracle/admin/HPUXDB/dpdump/hpux.dmp

******************************************************************************

Datafiles required for transportable tablespace EXAMPLE1:

+DATA/hpuxdb/datafile/example1.1542.887621739

Job SYS.SYS_EXPORT_TRANSPORTABLE_01 successfully completed at 09:48:33

 

Now move the datafile to the target ASM environment. You can copy the dmp file in the same manner if you wish

ASMCMD [+] > cp --port 1522 sys@server.+ASM:+DATA/hpuxdb/datafile/example1.1542.887621739 +DATA/linuxdb/example1

Enter password: **********

copying server:+DATA/hpuxdb/datafile/example1.1542.887621739 -> +DATA/linuxdb/example1

 

The only problem with copying from ASM to ASM is that the physical file is not located in the directory you copied it to. It is actually stored in the +DATA/ASM/DATAFILE directory (rather than the LINUXDB directory):

 

ASMCMD [+data/linuxdb] > ls -l
Type           Redund  Striped  Time             Sys  Block_Size  Blocks     Bytes     Space  Name
                                                 Y                                            CONTROLFILE/
                                                 Y                                            DATAFILE/
                                                 Y                                            ONLINELOG/
                                                 Y                                            PARAMETERFILE/
                                                 Y                                            TEMPFILE/
DATAFILE       UNPROT  COARSE   AUG 13 13:00:00  N          8192    6401  52436992  53477376  example1 => +DATA/ASM/DATAFILE/example1.362.887637415
PARAMETERFILE  UNPROT  COARSE   AUG 12 22:00:00  N           512       5      2560   1048576  spfileLINUXDB.ora => +DATA/LINUXDB/PARAMETERFILE/spfile.331.886765295

Move it to the correct folder using the cp command within asmcmd, then rm the copy from the original folder.
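
One way this might look, as a hedged sketch using the file from the listing above (the target file name example1.dbf is illustrative, and this assumes the datafile has not yet been plugged in to the target database; check the result with ls -l afterwards):

ASMCMD> cp +DATA/ASM/DATAFILE/example1.362.887637415 +DATA/LINUXDB/DATAFILE/example1.dbf
ASMCMD> rm +DATA/ASM/DATAFILE/example1.362.887637415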


Using DBMS_PARALLEL_EXECUTE


DBMS_PARALLEL_EXECUTE

We have a number of updates to partitioned tables that are run from within pl/sql blocks which have either an execute immediate ‘alter session enable parallel dml’ or execute immediate ‘alter session force parallel dml’ in the same pl/sql block. It appears that the alter session is not having any effect as we are ending up with non-parallel plans. When the same queries are run outside pl/sql either in sqlplus or sqldeveloper sessions the updates are given a parallel plan. We have a simple test pack that we have used to prove that this anomaly takes place at 11.1.0.7 (which is the version of the affected DB) and at 12.1.0.2 (to show that it is not an issue with just that version).
It appears that the optimizer is not aware of the fact that the alter session has been performed. We have also tried performing the alter session statement outside of the PL/SQL block, i.e. in the native SQL*Plus environment with the insert still inside PL/SQL; that also does not result in a parallel plan.

Let me show a test case

Firstly we tried anonymous pl/sql block with an execute immediate for setting force dml for the session:

create table target_table

(

c1     number(6),

c2 varchar2(1024)

)

partition by range (c1)

(

partition p1 values less than (2),

partition p2 values less than (3),

partition p3 values less than (100)

)

;

create unique index target_table_pk on target_table (c1, c2) local;

alter table target_table add constraint target_table_pk primary key (c1, c2) using index;

create table source_table

(      c1     number(6),

c2       varchar2(1024)

);

insert /*+append */ into source_table (select distinct 2, owner||object_type||object_name from dba_objects);

commit;

select count(*) from source_table;

begin

execute immediate 'alter session force parallel dml';

insert /*+append parallel */ into target_table

select * from source_table;

end;

/

 

This load generates a serial plan

-----------------------------------------------------------------------------------
| Id | Operation         | Name         | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |             |       |       |   143 (100)|         |
|   1 | LOAD AS SELECT   |             |       |       |           |         |
|   2 |   TABLE ACCESS FULL| SOURCE_TABLE | 80198 |   40M|   143   (1)| 00:00:02 |
-----------------------------------------------------------------------------------

To see what the plan should look like if parallel dml was being used in a sqlplus session:

 

truncate table target_table;

alter session force parallel dml;

INSERT /*+append sqlplus*/ INTO TARGET_TABLE SELECT * FROM SOURCE_TABLE;

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name         | Rows | Bytes | Cost (%CPU)| Time     |   TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT           |             |       |       |     5 (100)|         |       |     |           |
|   1 | PX COORDINATOR             |             |       |       |           |         |       |     |           |
|   2 |   PX SEND QC (RANDOM)       | :TQ10002     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | P->S | QC (RAND) |
|   3 |   INDEX MAINTENANCE       | TARGET_TABLE |       |       |           |         | Q1,02 | PCWP |           |
|   4 |     PX RECEIVE            |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | PCWP |           |
|   5 |     PX SEND RANGE         | :TQ10001     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | P->P | RANGE     |
|   6 |       LOAD AS SELECT       |              |       |       |           |         | Q1,01 | PCWP |           |
|   7 |       PX RECEIVE           |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | PCWP |           |
|   8 |         PX SEND RANDOM LOCAL| :TQ10000     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | P->P | RANDOM LOCA|
|   9 |         PX BLOCK ITERATOR |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWC |           |
|* 10 |           TABLE ACCESS FULL | SOURCE_TABLE | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWP |           |

 

truncate table target_table;

 

 

Oracle offered us two pieces of advice

  • Use a PARALLEL_ENABLE clause through a function
  • Use the DBMS_PARALLEL_EXECUTE package to achieve parallelism. (this is only available 11.2 onwards)

They also referred us to BUG 12734028 – PDML ONLY FROM PL/SQL DOES NOT WORK CORRECTLY
How To Enable Parallel Query For A Function? ( Doc ID 1093773.1 )

We did try the first option, the function, but that failed and we did not move forward with it, concentrating instead on the DBMS_PARALLEL_EXECUTE package.
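
For reference only, this is a minimal, hypothetical sketch of what the PARALLEL_ENABLE clause from Doc ID 1093773.1 looks like; the function name and body are purely illustrative and this is not the code we tested:

CREATE OR REPLACE FUNCTION format_row (p_c1 NUMBER, p_c2 VARCHAR2)
RETURN VARCHAR2
PARALLEL_ENABLE  -- marks the function as safe to execute in parallel query slave processes
IS
BEGIN
  RETURN p_c1 || ':' || p_c2;
END;
/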

So the rest of this blog is around how our testing went and what results we achieved.

Starting with the same source_table contents, and an empty target table, a task needs to be created:

 

BEGIN

DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);

END;

/

The task name could also be set manually, but this method does not lend itself to being proceduralised, as the name needs to be unique.

To determine the identifier for the task:

 

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE status = 'CREATED';

 

TASK_NAME                                CHUNK_TYPE   STATUS

---------------------------------------- ------------ -------------------

TASK$_141                               UNDELARED   CREATED

 

Now the source_table data set must be split into chunks in order to set up discrete subsets of data that will be handled by the subordinate tasks. This demo will split the table by rowid, but it can also be split using block counts or using the values contained in a specific column in the table.

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_141',-

table_owner => 'SYS',-

table_name => 'SOURCE_TABLE',-

by_row => TRUE,-

chunk_size => 20000)

 

 

Note that there are three procedures that can be used to create chunks (an illustrative call to the NUMBER_COL variant is sketched after the list):

PROCEDURE CREATE_CHUNKS_BY_NUMBER_COL

PROCEDURE CREATE_CHUNKS_BY_ROWID

PROCEDURE CREATE_CHUNKS_BY_SQL
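
This post only uses CREATE_CHUNKS_BY_ROWID, but for comparison a hedged sketch of chunking on a numeric column might look like the following (the column name and chunk size are illustrative; you would normally chunk on a well-distributed numeric key):

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(task_name => 'TASK$_141',-
table_owner => 'SYS',-
table_name => 'SOURCE_TABLE',-
table_column => 'C1',-
chunk_size => 20000)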

 

As the chunk size is set to 20000, and the table contains just over 80000 rows, one would expect 5 chunks to be created:

col CHUNK_ID form 999

col TASK_NAME form a10

col START_ROWID form a20

col end_rowid form a20

set pages 60

select CHUNK_ID, TASK_NAME, STATUS, START_ROWID, END_ROWID from user_parallel_execute_chunks where TASK_NAME = 'TASK$_141' order by 1;

(Later on we will count the rows in each chunk to see how evenly the data was split.)

 

CHUNK_ID TASK_NAME               STATUS               START_ROWID       END_ROWID

---------- ------------------------ -------------------- ------------------ ------------------

142 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtoAAA AAA2sTAABAAAbtvCcP

143 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtwAAA AAA2sTAABAAAbt3CcP

144 TASK$_141               UNASSIGNED           AAA2sTAABAAAbt4AAA AAA2sTAABAAAbt/CcP

145 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSAAAA AAA2sTAABAAAdSHCcP

146 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSIAAA AAA2sTAABAAAdSPCcP

147 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSQAAA AAA2sTAABAAAdSXCcP

148 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSYAAA AAA2sTAABAAAdSfCcP

149 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSgAAA AAA2sTAABAAAdSnCcP

150 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSoAAA AAA2sTAABAAAdSvCcP

151 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSwAAA AAA2sTAABAAAdS3CcP

152 TASK$_141               UNASSIGNED           AAA2sTAABAAAdS4AAA AAA2sTAABAAAdS/CcP

153 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTAAAA AAA2sTAABAAAdTHCcP

154 TASK$_141               UNASSIGNED          AAA2sTAABAAAdTIAAA AAA2sTAABAAAdTPCcP

155 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTQAAA AAA2sTAABAAAdTXCcP

156 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTYAAA AAA2sTAABAAAdTfCcP

157 TASK$_141                UNASSIGNED           AAA2sTAABAAAdTgAAA AAA2sTAABAAAdTnCcP

158 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUAAAA AAA2sTAABAAAdUxCcP

159 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUyAAA AAA2sTAABAAAdVjCcP

160 TASK$_141               UNASSIGNED           AAA2sTAABAAAdVkAAA AAA2sTAABAAAdV/CcP

161 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWAAAA AAA2sTAABAAAdWxCcP

162 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWyAAA AAA2sTAABAAAdXjCcP

163 TASK$_141               UNASSIGNED           AAA2sTAABAAAdXkAAA AAA2sTAABAAAdX/CcP

164 TASK$_141               UNASSIGNED           AAA2sTAABAAAdYAAAA AAA2sTAABAAAdYxCcP

165 TASK$_141                UNASSIGNED           AAA2sTAABAAAdYyAAA AAA2sTAABAAAdZjCcP

166 TASK$_141               UNASSIGNED           AAA2sTAABAAAdZkAAA AAA2sTAABAAAdZ/CcP

167 TASK$_141               UNASSIGNED           AAA2sTAABAAAdaAAAA AAA2sTAABAAAdaxCcP

168 TASK$_141               UNASSIGNED           AAA2sTAABAAAdayAAA AAA2sTAABAAAdbjCcP

169 TASK$_141               UNASSIGNED           AAA2sTAABAAAdbkAAA AAA2sTAABAAAdb/CcP

 

28 rows selected.

Tests were run changing the chunk_size to 40000 and still 28 chunks were created.

Looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE task_name = 'TASK$_141';

TASK_NAME                               CHUNK_TYPE   STATUS

---------------------------------------- ------------ -------------------

TASK$_141                               ROWID_RANGE CHUNKED

It details that the task has been split into discrete ranges of data, and shows how these ranges were determined.

To execute the insert into the target table we create an anonymous PL/SQL block, declaring the SQL that is to be run with an additional predicate for a range of rowids:

 

The ranges are passed in from those recorded in the user_parallel_execute_chunks view.

 

DECLARE

v_sql VARCHAR2(1024);

BEGIN

v_sql := 'insert /*+PARALLEL APPEND */ into target_table

select * from source_table

where rowid between :start_id and :end_id';

dbms_parallel_execute.run_task(task_name => 'TASK$_141',

sql_stmt => v_sql,

language_flag => DBMS_SQL.NATIVE,

parallel_level => 5);

END;

/

PL/SQL procedure successfully completed.

Checking the contents of the target table:

select count(*) from target_table;

COUNT(*)

----------

86353

Now looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE task_name = 'TASK$_141';

TASK_NAME                               STATUS

---------------------------------------- -------------------

TASK$_141                               FINISHED

 

Looking at the user_parallel_execute_chunks entries after completion:

select CHUNK_ID, JOB_NAME, START_TS, END_TS

from user_parallel_execute_chunks

where TASK_NAME = 'TASK$_141' order by 2,3;

CHUNK_ID JOB_NAME             START_TS                       END_TS

---------- -------------------- ------------------------------ -------------------------------

142 TASK$_7410_1         30-JUL-15 02.34.31.672484 PM   30-JUL-15 02.34.31.872794 PM

144 TASK$_7410_1         30-JUL-15 02.34.31.877091 PM   30-JUL-15 02.34.32.204513 PM

146 TASK$_7410_1         30-JUL-15 02.34.32.209950 PM   30-JUL-15 02.34.32.331349 PM

148 TASK$_7410_1         30-JUL-15 02.34.32.335192 PM   30-JUL-15 02.34.32.528391 PM

150 TASK$_7410_1         30-JUL-15 02.34.32.533488 PM   30-JUL-15 02.34.32.570243 PM

152 TASK$_7410_1         30-JUL-15 02.34.32.575450 PM   30-JUL-15 02.34.32.702353 PM

154 TASK$_7410_1         30-JUL-15 02.34.32.710860 PM   30-JUL-15 02.34.32.817684 PM

156 TASK$_7410_1         30-JUL-15 02.34.32.828963 PM   30-JUL-15 02.34.32.888834 PM

158 TASK$_7410_1         30-JUL-15 02.34.32.898458 PM   30-JUL-15 02.34.33.493985 PM

160 TASK$_7410_1         30-JUL-15 02.34.33.499254 PM   30-JUL-15 02.34.33.944356 PM

162 TASK$_7410_1         30-JUL-15 02.34.33.953509 PM   30-JUL-15 02.34.34.366352 PM

164 TASK$_7410_1         30-JUL-15 02.34.34.368668 PM   30-JUL-15 02.34.34.911471 PM

166 TASK$_7410_1         30-JUL-15 02.34.34.915205 PM   30-JUL-15 02.34.35.524515 PM

168 TASK$_7410_1         30-JUL-15 02.34.35.527515 PM   30-JUL-15 02.34.35.889198 PM

169 TASK$_7410_1         30-JUL-15 02.34.35.889872 PM   30-JUL-15 02.34.35.890412 PM

143 TASK$_7410_2         30-JUL-15 02.34.31.677235 PM   30-JUL-15 02.34.32.129181 PM

145 TASK$_7410_2         30-JUL-15 02.34.32.135013 PM   30-JUL-15 02.34.32.304761 PM

147 TASK$_7410_2         30-JUL-15 02.34.32.310140 PM   30-JUL-15 02.34.32.485545 PM

149 TASK$_7410_2         30-JUL-15 02.34.32.495971 PM   30-JUL-15 02.34.32.550955 PM

151 TASK$_7410_2         30-JUL-15 02.34.32.558335 PM   30-JUL-15 02.34.32.629274 PM

153 TASK$_7410_2         30-JUL-15 02.34.32.644917 PM   30-JUL-15 02.34.32.764337 PM

155 TASK$_7410_2         30-JUL-15 02.34.32.773029 PM   30-JUL-15 02.34.32.857794 PM

157 TASK$_7410_2         30-JUL-15 02.34.32.864875 PM   30-JUL-15 02.34.32.908799 PM

159 TASK$_7410_2         30-JUL-15 02.34.32.913982 PM   30-JUL-15 02.34.33.669704 PM

161 TASK$_7410_2         30-JUL-15 02.34.33.672077 PM   30-JUL-15 02.34.34.128170 PM

163 TASK$_7410_2         30-JUL-15 02.34.34.140102 PM   30-JUL-15 02.34.34.624627 PM

165 TASK$_7410_2         30-JUL-15 02.34.34.628145 PM   30-JUL-15 02.34.35.431037 PM

167 TASK$_7410_2         30-JUL-15 02.34.35.433282 PM   30-JUL-15 02.34.35.885741 PM
28 rows selected.

 

From these details there appear to be only two jobs processing the sub-tasks even though a parallel_level of 5 was specified. Why is that? Not enough data to break it up further? Back-end resourcing?

Looking at the details of the query in a tkprof listing of one of the jobs' trace files, you can see that each of the jobs is executing the same plan that was generated with the original PL/SQL; however, they are running two tasks in parallel.

 

insert /*+PARALLEL APPEND */ into target_table
           select * from
source_table
             where rowid between :start_id and :end_id


call     count       cpu   elapsed       disk     query   current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       1     0.00       0.00         0         0         0           0
Execute     6     0.26       3.03         4       599       8128       17849
Fetch        0     0.00       0.00         0         0         0           0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total       7     0.26       3.03         4       599       8128       17849

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
         0         0         0 LOAD AS SELECT (cr=59 pr=0 pw=8 time=92689 us)
     1240       1240       1240   FILTER (cr=13 pr=0 pw=0 time=501 us)
     1240       1240       1240   TABLE ACCESS BY ROWID RANGE SOURCE_TABLE (cr=13 pr=0 pw=0 time=368 us cost=42 size=122892 card=228)

 

I think that I could do more testing in this area and try and determine answers to the following questions

 

Why is the parallel parameter not appearing to work?

Does it depend on data volumes or back-end resources?

 

What I was interested in seeing was what was the breakdown of chunks by row count – was it an even split?

select a.chunk_id, count(*)

from user_parallel_execute_chunks a,

source_table b

where a.TASK_NAME = 'TASK$_107'

and b.rowid between a.START_ROWID and a.END_ROWID

group by a.chunk_id

order by 1;

  CHUNK_ID   COUNT(*)
---------- ----------
       108       1407
       109       1233
       110       1217
       111       1193
       112       1192
       113       1274
       114       1213
       115       1589
       116       1191
       117       1226
       118       1190
       119       1201
       120       1259
       121       1273
       122       1528
       123       1176
       124      15874
       125       4316
       126      15933
       127       4283
       128       9055

Reasonably even, with most chunks selecting around 1,200 rows each, but two chunks had around 15K each, so not that well chunked.

Repeating the process but chunking by blocks this time produced the following results

 

 

SQL> select task_name, chunk_id, dbms_rowid.rowid_block_number(start_rowid) Start_Id,

2 dbms_rowid.rowid_block_number(end_rowid) End_Id,

3 dbms_rowid.rowid_block_number(end_rowid) - dbms_rowid.rowid_block_number(start_rowid) num_blocks

4 from user_parallel_execute_chunks order by task_name, chunk_id;

 

TASK_NAME             CHUNK_ID   START_ID     END_ID NUM_BLOCKS

-------------------- ---------- ---------- ---------- ----------

 

TASK$_133                   134     70096     70103         7

TASK$_133                   135     70104     70111         7

TASK$_133                   136     70112     70119         7

TASK$_133                   137     70120     70127         7

TASK$_133                   138     70128     70135         7

TASK$_133                   139     70136     70143         7

TASK$_133                   140     73344     73351        7

TASK$_133                   141     73352     73359         7

TASK$_133                   142     73360     73367         7

TASK$_133                   143     73368     73375         7

TASK$_133                   144     73376     73383        7

TASK$_133                   145     73384     73391         7

TASK$_133                   146     73392     73399         7

TASK$_133                   147     73400     73407         7

TASK$_133                   148     73408     73415         7

TASK$_133                   149     73416     73423         7

TASK$_133                   150     73472     73521         49

TASK$_133                   151     73522     73571         49

TASK$_133                   152     73572     73599         27

TASK$_133                   153     73600     73649         49

TASK$_133                   154     73650     73699         49

TASK$_133                   155     73700     73727         27

TASK$_133                   156     73728      73777         49

TASK$_133                   157     73778     73827         49

TASK$_133                   158     73828     73855         27

 

So again not very well split. Perhaps there needs to be more data to make it worthwhile.

 

Error handling:  

 

Finally we can have a look at how it handles errors. To save the reader time the simple answer is ‘not very well’.

Let’s force a duplicate key error to show how the package handles errors. The original 80000+ rows are left in the target_table, and the same set of data will be reinserted – which will cause a unique constraint violation for the PK:

 

BEGIN

DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);

END;

/

 

PL/SQL procedure successfully completed.

 

 

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

where status = 'CREATED';

 

TASK_NAME                               CHUNK_TYPE   STATUS

---------------------------------------- ------------ -------------------

TASK$_170                               UNDELARED   CREATED

 

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_170',-

table_owner => 'SYS',-

table_name => 'SOURCE_TABLE',-

by_row => TRUE,-

chunk_size => 40000)

DECLARE

v_sql VARCHAR2(1024);

BEGIN

v_sql := 'insert /*+PARALLEL APPEND */ into target_table

select * from source_table

where rowid between :start_id and :end_id';

 

dbms_parallel_execute.run_task(task_name => 'TASK$_170',

sql_stmt       => v_sql,

language_flag => DBMS_SQL.NATIVE,

parallel_level => 10);

END;

/

 

PL/SQL procedure successfully completed.

 

select count(*) from target_table;

 

COUNT(*)

----------

86353

No error is thrown by the run_task call, but the number of rows has not increased. Checking the status of the task shows that there was indeed an error:

 

select task_name, status from user_parallel_execute_tasks where task_name = 'TASK$_170';

 

TASK_NAME                               STATUS

---------------------------------------- -------------------

TASK$_170                               FINISHED_WITH_ERROR

The error details are given in the user_parallel_execute_chunks view:

 

select TASK_NAME, ERROR_MESSAGE, STATUS

from user_parallel_execute_chunks

where TASK_NAME = 'TASK$_170' order by 2;

 

TASK_NAME       ERROR_MESSAGE                                               STATUS

--------------- ------------------------------------------------------------ ----------------------------------------

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170      ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170                                                                   PROCESSED

TASK$_170                                                                   PROCESSED

 

28 rows selected.
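
One thing worth noting, although not tested here: the package also provides DBMS_PARALLEL_EXECUTE.RESUME_TASK, which re-runs only the chunks that did not complete cleanly, so after fixing the underlying problem something like the following could be attempted:

exec DBMS_PARALLEL_EXECUTE.resume_task('TASK$_170')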

For reference, the call used earlier to chunk TASK$_133 by blocks rather than by rows (the by_row => FALSE results shown further up) was:

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name => 'TASK$_133',-
table_owner => 'SYS',-
table_name => 'SOURCE_TABLE',-
by_row => FALSE,-
chunk_size => 20);



Using SYSBACKUP in 12c with a media manager layer


I think most large sites who have multiple support teams are aware of how the phrase “Segregation of Duties” is impacting the DBA world. The basic principle, that one user should not be able to, for instance, add a user, grant it privileges, let the user run scripts and then drop the user and remove all log files is a sound one and cannot be argued with.

With the release of 12c Oracle added three new users to perform administrative tasks. Each user has a corresponding privilege with the same name as the user, which is a bit confusing.

SYSBACKUP – for RMAN backup and recovery work

SYSDG –  to manage DataGuard operations

SYSKM – to manage activities involving ‘key management’ including wallets and Database Vault

I have no real experience of key management so cannot comment on that. I do fail to see which type of user would be allowed to manage a DG setup and yet not be allowed to perform other DBA work on the databases. However, it probably does mean that any requirement to log in as 'sysdba' is now reduced, which can only be a good thing.

 

The SYSBACKUP user is a really good idea and has been a long time coming.

The privileges it has, along with select on many SYS views, are:

STARTUP
SHUTDOWN
ALTER DATABASE
ALTER SYSTEM
ALTER SESSION
ALTER TABLESPACE
CREATE CONTROLFILE
CREATE ANY DIRECTORY
CREATE ANY TABLE
CREATE ANY CLUSTER
CREATE PFILE
CREATE RESTORE POINT (including GUARANTEED restore points)
CREATE SESSION
CREATE SPFILE
DROP DATABASE
DROP TABLESPACE
DROP RESTORE POINT (including GUARANTEED restore points)
FLASHBACK DATABASE
RESUMABLE
UNLIMITED TABLESPACE
SELECT ANY DICTIONARY
SELECT ANY TRANSACTION
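
As a hedged illustration of how this can remove the need for SYSDBA connections for backup work (the user name, password and TNS alias below are made up), you can create a named account, grant it SYSBACKUP and connect with the AS SYSBACKUP clause:

SQL> create user bkpadmin identified by Summer2015;
SQL> grant sysbackup to bkpadmin;
SQL> connect bkpadmin/Summer2015@QUICKIE as sysbackup

RMAN> connect target "bkpadmin@QUICKIE as sysbackup"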

One aspect I was keen to look at was whether we could amend the connect string we use in our Media Manager Layer (Simpana from Commvault) away from having to connect using a 'user/password as sysdba' string.

Unfortunately, at the moment there is no way of changing the connect string to use the user SYSBACKUP. Commvault will be releasing Simpana Version 11 sometime later this year, which will be able to interact with the SYSBACKUP user; however, I am unclear as to whether the requirement to connect as SYSDBA will be removed or not.

I am not aware of how other MMLs such as Networker, Netbackup or Data Protector have been updated to include the 12c changes and I am keen to find out.


Identifying database link usage


As part of ongoing security reviews I wanted to determine if all database links on production systems were in use. That is not very easy to do and this article is a listing of some of the options I have considered to get that information and how it is now possible from 11GR2 onwards.

The first option was to look and see if auditing can be used. The manual states “You can audit statements that refer to tables, views, sequences, standalone stored procedures or functions, and packages, but not individual procedures within packages. (See “Auditing Functions, Procedures, Packages, and Triggers” for more information about auditing these types of objects.)

You cannot directly audit statements that reference clusters, database links, indexes, or synonyms. However, you can indirectly audit access to these schema objects, by auditing the operations that affect the base table.”

So you could audit activities on a base table that a database link might utilise, probably via a synonym. However that would show all table usage but it would be very difficult to break it down to see if a database link had been involved.
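
For illustration only (the schema and table names here are made up), object auditing on the base table would be along these lines, after which the audit trail would show every access, link-driven or not:

AUDIT SELECT, INSERT, UPDATE, DELETE ON app_owner.orders BY ACCESS;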

On the assumption that the code has a call to “@db_link_name” you could probably trawl ASH data or v$sql to see if a reference is available. It would be more likely that a synonym would be in use and as we have said above, we cannot audit synonym usage but you could maybe find it in v$sql. Again very work intensive with no guaranteed return.

There has been an enhancement request in MoS since 2006 – search for Bug 5098260

Jared Still posted a routine, although he does not claim to be the original author, which shows a link actually being used. However, in reality that is not a good way of capturing information across many systems unless you enable an excessive amount of tracing or monitoring across all of them. I have demoed usage of it below and it does work.

I’ve created a DB link from SNAPCL1A to SNAPTM1. First I opened the DB link:

 

select sysdate from dual@snaptm1;
SYSDATE
---------
22-SEP-15

 

I can see my DB link being opened in v$dblink (in my own session):

select a.db_link, u.username, logged_on, open_cursors, in_transaction, update_sent

from v$dblink a, all_users u

where a.owner_id = u.user_id;
DB_LINK                        USERNAME                       LOG OPEN_CURSORS IN_ UPD
------------------------------ ------------------------------ --- ------------ --- ---
SNAPTM1                        SYS                            YES            0 YES NO

The following script can be used to see open DB link sessions (on both databases). It can be executed from any session and it will only show open DB links (that have not been committed, rolled back or manually closed/terminated on the origin database):

col origin for a30

col "GTXID" for a30

col lsession for a10

col username for a20

col waiting for a50

Select /*+ ORDERED */

substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10)      "ORIGIN",

substr(g.K2GTITID_ORA,1,35) "GTXID",

substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,

s2.username,

decode(bitand(ksuseidl,11),

1,'ACTIVE',

0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),

2,'SNIPED',

3,'SNIPED',

'KILLED'

) "State",

substr(w.event,1,30) "WAITING"

from  x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2

where  g.K2GTDXCB =t.ktcxbxba

and   g.K2GTDSES=t.ktcxbses

and  s.addr=g.K2GTDSES

and  w.sid=s.indx

and s2.sid = w.sid;

 

On the origin database:

 

ORIGIN                         GTXID                          LSESSION   USERNAME             State    WAITING
------------------------------ ------------------------------ ---------- -------------------- -------- --------------------------------------------------
teora01x-3762                  SNAPCL1A.cc76ea8a.7.32.983     125.5      SYS                  INACTIVE SQL*Net message from client

 

On the destination database:

 

ORIGIN                         GTXID                          LSESSION   USERNAME             State    WAITING
------------------------------ ------------------------------ ---------- -------------------- -------- --------------------------------------------------
teora01x-4065                  SNAPCL1A.cc76ea8a.7.32.983     133.599    SYSTEM               INACTIVE SQL*Net message from client

 

Now, I rollback my session on the origin database:

 

SQL> rollback;
Rollback complete.

If I query the v$dblink view, I still see my link there, but the transaction is closed now:

 

select a.db_link, u.username, logged_on, open_cursors, in_transaction, update_sent
from   v$dblink a, all_users u
where  a.owner_id = u.user_id;

DB_LINK                        USERNAME             LOG OPEN_CURSORS IN_ UPD
------------------------------ -------------------- --- ------------ --- ---
SNAPTM1                        SYS                  YES            0 NO  NO

The script will not return anything at this point:

SQL> Select /*+ ORDERED */
       substr(s.ksusemnm,1,10)||'-'||substr(s.ksusepid,1,10) "ORIGIN",
       substr(g.K2GTITID_ORA,1,35) "GTXID",
       substr(s.indx,1,4)||'.'||substr(s.ksuseser,1,5) "LSESSION",
       s2.username,
       decode(bitand(ksuseidl,11),
              1,'ACTIVE',
              0,decode(bitand(ksuseflg,4096),0,'INACTIVE','CACHED'),
              2,'SNIPED',
              3,'SNIPED',
              'KILLED') "State",
       substr(w.event,1,30) "WAITING"
from   x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where  g.K2GTDXCB = t.ktcxbxba
and    g.K2GTDSES = t.ktcxbses
and    s.addr = g.K2GTDSES
and    w.sid = s.indx
and    s2.sid = w.sid;

no rows selected

However, since at least 11.2.0.3 Oracle has provided a better means of identifying database link usage after the event, not just while a link is open.

The following example uses databases TST11204 and QUICKIE on different servers – each has a database link to the other.

TST11204

create user dblinktest identified by Easter2012 ;

grant create session, create database link to dblinktest;

SQL> connect dblinktest/xxxxxxxxxx

Connected.

SQL> select * from test@quickie;

        C1 C2
---------- -----
         5 five

QUICKIE

create database link TST11204 connect to dblinkrecd identified by xxxxxx using 'TST11204';

select * from test@TST11204;

At this point we have made a connection – let’s see what we can find out about it. I would advise filtering on the timestamp# column of aud$ to reduce the volume of data that has to be searched.

SQL> select userid, terminal, comment$text from sys.aud$ where comment$text like 'DBLINK%';

 

USERID  TERMINAL        COMMENT$TEXT
--------------------------------------------------------------------------------
                        DBLINKRECD DBLINK_INFO: (SOURCE_GLOBAL_NAME=QUICKIE.25386385)
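As a sketch of that narrowing, something like the following could be used. Note that, depending on version, the populated timestamp column in aud$ may be ntimestamp# rather than timestamp#, so treat the column choice as something to verify on your own system.

-- minimal sketch: restrict the aud$ scan to recent records before
-- searching the comment text for DBLINK_INFO entries
select userid, terminal, ntimestamp#, comment$text
from   sys.aud$
where  ntimestamp# > sysdate - 7
and    comment$text like '%DBLINK_INFO%'
order  by ntimestamp#;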

This information is recorded in both the source and the target databases.

The DBLINK_INFO entry identifies the source of a database link session. Specifically, it returns a string of the form:

SOURCE_GLOBAL_NAME=dblink_src_global_name, DBLINK_NAME=dblink_name, SOURCE_AUDIT_SESSIONID=dblink_src_audit_sessionid

where:

dblink_src_global_name is the unique global name of the source database

dblink_name is the name of the database link on the source database

dblink_src_audit_sessionid is the audit session ID of the session on the source database that initiated the connection to the remote database using dblink_name.

So hopefully that helps in identifying whether a database link is still in use, or when it was last used, and it can form another part of your security toolkit.

 


Manage a PSU into a CDB having several PDBS


I thought it would be a good idea to show how to apply a PSU to the CDB and PDBs that came with 12c. I start off with a quick reminder of how 11g worked and then move into 12c examples.

11G reminder

get the latest version of opatch

check for conflicts

opatch prereq  CheckConflictAgainstOHWithDetail -ph ./

start the downtime

Stop all the databases in the home (one node at a time for RAC)

apply the patch

opatch apply

start all databases in the home

load SQL into the databases

@catbundle.sql psu apply

end of downtime (but remember to do the standby)

Example of 12c PSU process

Update opatch to latest version

Download and apply patch 6880880 to the oracle home

Check for conflicts with one-offs

Run the prereq check for conflicts:

[CDB2] oracle@localhost:~/12cPSU/19303936

$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Oracle Interim Patch Installer version 12.1.0.1.8
Copyright (c) 2015, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version   : 12.1.0.1.8
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2015-09-22_12-12-54PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

No conflicts should be reported.

Stop all dbs in the home (one node at a time for RAC)

If you are using a Data Guard Physical Standby database, you must install this patch on both the primary database and the physical standby database, as described by My Oracle Support Document 278641.1.

If this is a RAC environment, install the PSU patch using the OPatch rolling (no downtime) installation method as the PSU patch is rolling RAC installable. Refer to My Oracle Support Document 244241.1 Rolling Patch – OPatch Support for RAC.

If this is not a RAC environment, shut down all instances and listeners associated with the Oracle home that you are updating.

Apply the patch

[CDB2] oracle@localhost:~/12cPSU/19303936

$ opatch apply

Normal opatch output appears here – nothing has changed from the 11g procedure at this step.

Start all dbs in the home

Start the CDB:

[setsid] to CDB
startup

If this is multitenant then start any PDBs that are not open:

select name, open_mode from v$pdbs order by 1;
NAME                                    OPEN_MODE
---------------------------------------- ------------------------------
PDB$SEED                                 READ ONLY
PDB1                                     MOUNTED
PDB2                                     MOUNTED
alter pluggable database [all|<<name>>] open;

Start the listener(s) if it was stopped too.

Load SQL into all dbs

Prior to 12c you needed to connect to each database individually and run catbundle.sql psu apply. Now in 12c, you only need to run datapatch -verbose. This will connect to the CDB$ROOT, PDB$SEED and all **open** PDBs and run the SQL updates:

cd $ORACLE_HOME/OPatch
[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ ./datapatch -verbose

 

SQL Patching tool version 12.2.0.0.0 on Tue Sep 22 12:49:21 2015
Copyright (c) 2014, Oracle. All rights reserved.

 

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_12889.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
Determining current state...done

 

Current state of SQL patches:
Bundle series PSU:
ID 1 in the binary registry and not installed in any PDB

 

Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
   Nothing to roll back
   The following patches will be applied:
     19303936 (Database Patch Set Update : 12.1.0.2.1 (19303936))

 

Installing patches...
Patch installation complete. Total patches installed: 3

 

Validating logfiles...
Patch 19303936 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_CDBROOT_2015Sep22_12_50_10.log (no errors)
Patch 19303936 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDBSEED_2015Sep22_12_50_14.log (no errors)
Patch 19303936 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDB1_2015Sep22_12_50_15.log (no errors)
SQL Patching tool complete on Tue Sep 22 12:50:19 2015

Note that PDB2 was not picked up; that is because I left it MOUNTED and not open.

So what happens now if I try to open it?

[CDB2] oracle@localhost:/oradata/diag/rdbms/cdb2/CDB2/trace
$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 22 13:13:30 2015

 

Copyright (c) 1982, 2014, Oracle. All rights reserved.

 

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                      READ ONLY NO
         3 PDB2                           MOUNTED
         4 PDB1                           READ WRITE NO

 

  SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

  no rows selected

SQL> alter pluggable database pdb2 open;

 

Warning: PDB altered with errors.

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
         3 PDB2                           READ WRITE YES
         4 PDB1                           READ WRITE NO
SQL> col time for a20
SQL> col name for a20
SQL> col cause for a20 wrap
SQL> col status for a20
SQL> col message for a60 wrap
SQL> set lines 200
SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS where cause='SQL Patch' order by name;

 

TIME                 NAME                 CAUSE               STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.14.51.4 PDB2                 SQL Patch           PENDING             PSU bundle patch 1: Installed in Database Patch Set Update :
18558 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB. 

So it tells us that PDB2 does not have the PSU installed.

The fix here is to rerun datapatch now that PDB2 is open:

cd /u01/app/oracle/product/12.1.0.2/OPatch
[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ ./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Tue Sep 22 13:19:15 2015
Copyright (c) 2014, Oracle. All rights reserved.

 

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_14499.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
Determining current state...done

 

Current state of SQL patches:
Bundle series PSU:
ID 1 in the binary registry and ID 1 in PDB CDB$ROOT, ID 1 in PDB PDB$SEED, ID 1 in PDB PDB1

 

Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
   Nothing to roll back
   Nothing to apply
For the following PDBs: PDB2
   Nothing to roll back
   The following patches will be applied:
     19303936 (Database Patch Set Update : 12.1.0.2.1 (19303936))

 

Installing patches...
Patch installation complete. Total patches installed: 1

 

Validating logfiles...
Patch 19303936 apply (pdb PDB2): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDB2_2015Sep22_13_19_47.log (no errors)
SQL Patching tool complete on Tue Sep 22 13:19:49 2015

This time it only patched PDB2 and skipped over the others.

Now what does the database think?

[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 22 13:21:11 2015

 

Copyright (c) 1982, 2014, Oracle. All rights reserved.

 


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
         3 PDB2                           READ WRITE YES
         4 PDB1                           READ WRITE NO
SQL> col time for a20
SQL> col name for a20
SQL> col cause for a20 wrap
SQL> col status for a20
SQL> col message for a60 wrap
SQL> set lines 200
SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

 

TIME                 NAME                 CAUSE               STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.14.51.4 PDB2                 SQL Patch           PENDING             PSU bundle patch 1: Installed in Database Patch Set Update :
18558 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB.

No change – the violation is still PENDING, so I need to bounce PDB2:

SQL> alter pluggable database pdb2 close;

 

Pluggable database altered.

 

SQL> alter pluggable database pdb2 open;

 

Pluggable database altered.

SQL> show pdbs

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
         3 PDB2                           READ WRITE NO
         4 PDB1                           READ WRITE NO

 

SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

 

TIME                 NAME                 CAUSE                STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.23.24.0 PDB2                 SQL Patch           RESOLVED            PSU bundle patch 1: Installed in Database Patch Set Update :
47678 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB.

The row is still returned but the STATUS is now RESOLVED.

Moving PDB between PSU versions

In this example we will move PDB1 from CDB1 to CDB2 which is at a higher PSU.

Create new home

Create separate ORACLE_HOME and CDB2 and apply PSU at version higher than existing.

Run datapatch on new home to apply PSU to CDB$ROOT and PDB$SEED.

 

Stop application and existing PDB

Downtime will start now.

Stop the PDB1:

setsid [CDB1]
sysdba
ALTER PLUGGABLE DATABASE PDB1 CLOSE;
Pluggable database altered.

Unplug PDB1 from CDB1

Unplug the PDB into metadata xml file:

ALTER SESSION SET CONTAINER = CDB$ROOT;
Session altered.
SQL> ALTER PLUGGABLE DATABASE PDB1 UNPLUG INTO '/u01/oradata/CDB1/PDB1/PDB1.xml';
Pluggable database altered

Plug PDB1 into CDB2.

Plug metadata into CDB2:

setsid [CDB2]
sysdba
SQL> CREATE PLUGGABLE DATABASE PDB1
     USING '/u01/oradata/CDB1/PDB1/PDB1.xml'
     MOVE FILE_NAME_CONVERT = ('/u01/oradata/CDB1/PDB1','/u01/oradata/CDB2/PDB1');
Pluggable database created.

The use of the MOVE clause makes the new pluggable database creation very quick, since the database files are not copied but only moved on the file system. This operation is immediate if using the same file system.

Now open up PDB1 on CDB2:

SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN;
Pluggable database altered

Load modified SQL files into the database with Datapatch tool

Run datapatch to load SQL into PDB1

setsid [CDB2]
cd $ORACLE_HOME/OPatch
./datapatch -verbose

Start application

Downtime ends and the application can be restarted

Inventory Reporting

New at 12c

Simon Pane at Pythian has produced a very useful script, which we converted into a shell script: http://www.pythian.com/blog/oracle-database-12c-patching-dbms_qopatch-opatch_xml_inv-and-datapatch/

At 12c you can now see what has been applied to both the Oracle Home and the database by querying the database alone.


 

List of PSUs applied to both the $OH and the DB

 


NAME             PATCH_ID PATCH_UID ROLLBACK STATUS         DESCRIPTION
--------------- ---------- ---------- -------- --------------- ------------------------------------------------------------
JH1PDB           19769480   18350083 true    SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)
JHPDB             19769480   18350083 true     SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)
CDB$ROOT         19769480   18350083 true     SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)

 


PSUs installed into the $OH but not applied to the DB

 


NAME             PATCH_ID PATCH_UID DESCRIPTION
--------------- ---------- ---------- ------------------------------------------------------------
JH2PDB            19769480   18350083 Database Patch Set Update : 12.1.0.2.2 (19769480)

 


PSUs applied to the DB but not installed into the $OH

 


no rows selected

Note – PDB$SEED is normally READ ONLY and hence no record is returned from the CDB_REGISTRY_SQLPATCH view, so this PDB is excluded from this SQL.
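If you just want the database-side view of what datapatch has recorded per container, a simple starting point (this is not the Pythian script itself) is to query CDB_REGISTRY_SQLPATCH directly. A minimal sketch is below; the column list is based on 12.1 and may differ slightly between versions.

-- minimal sketch: list the SQL patches recorded per container by datapatch.
-- Run from CDB$ROOT; PDB$SEED will not normally appear as it is READ ONLY.
col description for a60
select con_id, patch_id, patch_uid, action, status, description
from   cdb_registry_sqlpatch
order  by con_id, action_time;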


Purge_stats – a worked example of improving performance and reducing storage


I have written a number of blog entries around managing the SYSAUX tablespace and more specifically the stats tables that are in there.

https://jhdba.wordpress.com/2014/07/09/tidying-up-sysaux-removing-old-snapshots-which-you-didnt-know-existed

https://jhdba.wordpress.com/2009/05/19/purging-statistics-from-the-sysaux-tablespace

https://jhdba.wordpress.com/2011/08/23/939/

This is another one around the same theme. What I offer in this one is some new scripts to identify what is working and what is not, along with a worked example with real numbers and timings.

Firstly let’s define the problem. This is a large 11.2.0.4 database running on an 8 node Exadata cluster, with lots of daily loads and an ongoing problem keeping statistics up to date. The daily automatic maintenance window runs for 5 hours Mon-Fri and 20 hours each weekend day. Around 50% of that midweek window is spent purging stats, which doesn’t leave a lot of time for much else.

How do I know that?

col CLIENT_NAME form a25
COL MEAN_JOB_DURATION form a35
col MAX_DURATION_LAST_30_DAYS form a35
col MEAN_JOB_CPU form a35
set lines 180
col window_name form a20
col job_info form a70

 

SQL> select client_name from dba_autotask_client;
 CLIENT_NAME
----------------------------------------------------------------
sql tuning advisor
auto optimizer stats collection
auto space advisor
 
SQL>select client_name, MEAN_JOB_DURATION, MEAN_JOB_CPU, MAX_DURATION_LAST_30_DAYS from dba_autotask_client;
CLIENT_NAME         MEAN_JOB_DURATION                   MEAN_JOB_CPU                        MAX_DURATION_LAST_30_DAYS
-------------------- ----------------------------------- ----------------------------------- -----------------------------------
auto optimizer stats +000000000 03:37:36.296875000       +000000000 01:35:20.270514323       +000 19:59:56
auto space advisor   +000000000 00:12:49.440677966       +000000000 00:04:45.584406780
sql tuning advisor   +000000000 00:11:53.266666667       +000000000 00:10:22.344666667
 
SQL> select client_name, window_name, job_info from DBA_AUTOTASK_JOB_HISTORY order by window_start_time;
 CLIENT_NAME         WINDOW_NAME         JOB_INFO
-------------------- -------------------- ---------------------------------------------------------------------------
 auto optimizer stats WEDNESDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats THURSDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats FRIDAY_WINDOW       REASON="Stop job called because associated window was closed"
 collection
auto optimizer stats SATURDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats SUNDAY_WINDOW       REASON="Stop job called because associated window was closed"

So we know that the window is timing out and that the stats collection job is running for an average of around 3.5 hours. Another way of looking at it is to see how long the individual purge_stats operations take:

Col operation format a60
Col start_time format a20
Col end_time form a20
Col duration form a10
alter session set nls_date_format = 'dd-mon-yyyy hh24:mi';
     select operation||decode(target,null,null,' for '||target) operation
          ,cast(start_time as date) start_time
          ,cast(end_time as date) end_time
          ,to_char(floor(abs(cast(end_time as date)-cast(start_time as date))*86400/3600),'FM09')||':'||
          to_char(floor(mod(abs(cast(end_time as date)-cast(start_time as date))*86400,3600)/60),'FM09')||':'||
         to_char(mod(abs(cast(end_time as date)-cast(start_time as date))*86400,60),'FM09') as DURATION
    from dba_optstat_operations
    where start_time between to_date('20-sep-2015 16','dd-mon-yyyy hh24') and to_date('10-oct-2015 21','dd-mon-yyyy hh24')
    and operation = 'purge_stats';
 
OPERATION                                                   START_TIME           END_TIME             DURATION
------------------------------------------------------------ -------------------- -------------------- ----------
purge_stats                                                 30-sep-2015 17:31   30-sep-2015 20:02   02:31:01
purge_stats                                                  02-oct-2015 17:22   02-oct-2015 20:00   02:37:42
purge_stats                                                 28-sep-2015 18:04   28-sep-2015 20:38   02:34:18
purge_stats                                                 29-sep-2015 17:35   29-sep-2015 20:00   02:25:04
purge_stats                                                 05-oct-2015 17:53   05-oct-2015 20:48   02:55:20
purge_stats                                                 01-oct-2015 17:17   01-oct-2015 19:28   02:11:17
 
6 rows selected.

Another (minor) factor was how much space the optimizer stats were taking in the SYSAUX tablespace – over 500Gb, which does seem a lot for 14 days of history.

COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40

 

     SELECT occupant_name "Item",
     space_usage_kbytes/1048576 "Space Used (GB)",
     schema_name "Schema",
     move_procedure "Move Procedure"
     FROM v$sysaux_occupants
     WHERE occupant_name in ('SM/OPTSTAT')
     ORDER BY 1;

 


Item                     Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         527.17 SYS

I ran through some of the techniques mentioned in the posts above – checking the retention period was 14 days, checking we had no data older than that, checking that the tables were being partitioned properly and split into new partitions. There was no obvious answer as to why the purge was taking 2.5 hours every day. I therefore set a trace up, ran the purge manually and viewed the tkprof file.
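As a quick reference, the retention and oldest-available-history checks mentioned above can be done with a couple of standard DBMS_STATS calls – a minimal sketch:

-- configured stats history retention in days (14 in this case)
select dbms_stats.get_stats_history_retention from dual;

-- oldest point in time for which stats history is still available
select dbms_stats.get_stats_history_availability from dual;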

ALTER SESSION SET TRACEFILE_IDENTIFIER = "JOHN";
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
set timing on
exec dbms_stats.purge_stats(sysdate-14);

 

There seemed to be quite a few entries for sql_ids similar to the one below, each taking around 10 minutes to insert data into a partition.

SQL ID: 66yqxrmjwfsnr Plan Hash: 3442357004
insert /*+ RELATIONAL("WRI$_OPTSTAT_HISTHEAD_HISTORY")
PARALLEL("WRI$_OPTSTAT_HISTHEAD_HISTORY",1) APPEND NESTED_TABLE_SET_SETID
NO_REF_CASCADE */ into "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" partition
("SYS_P894974") ("OBJ#","INTCOL#","SAVTIME","FLAGS","NULL_CNT","MINIMUM",
"MAXIMUM","DISTCNT","DENSITY","LOWVAL","HIVAL","AVGCLN","SAMPLE_DISTCNT",
"SAMPLE_SIZE","TIMESTAMP#","EXPRESSION","COLNAME","SPARE1","SPARE2",
"SPARE3","SPARE4","SPARE5","SPARE6") (select /*+
RELATIONAL("WRI$_OPTSTAT_HISTHEAD_HISTORY")
PARALLEL("WRI$_OPTSTAT_HISTHEAD_HISTORY",1) */ "OBJ#" ,"INTCOL#" ,"SAVTIME"
,"FLAGS" ,"NULL_CNT" ,"MINIMUM" ,"MAXIMUM" ,"DISTCNT" ,"DENSITY" ,"LOWVAL" ,
"HIVAL" ,"AVGCLN" ,"SAMPLE_DISTCNT" ,"SAMPLE_SIZE" ,"TIMESTAMP#" ,
"EXPRESSION" ,"COLNAME" ,"SPARE1" ,"SPARE2" ,"SPARE3" ,"SPARE4" ,"SPARE5" ,
"SPARE6" from "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" partition
("SYS_P894974") ) delete global indexes
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1     71.29     571.77     498402      26399    1322741           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     71.29     571.78     498402      26399    1322741           0

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD AS SELECT  (cr=26399 pr=498402 pw=60633 time=571778609 us)
   5398837    5398837    5398837   PARTITION RANGE SINGLE PARTITION: 577 577 (cr=22524 pr=22504 pw=0 time=9824150 us cost=3686 size=399126189 card=5183457)
   5398837    5398837    5398837    TABLE ACCESS FULL WRI$_OPTSTAT_HISTHEAD_HISTORY PARTITION: 577 577 (cr=22524 pr=22504 pw=0 time=9169817 us cost=3686 size=399126189 card=5183457)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    415262        0.16        458.40
  db file scattered read                        370        0.16         11.96
  direct path write temp                       2707        0.16         36.48
  db file parallel read                           2        0.02          0.04
  direct path read temp                        3742        0.04          8.15
********************************************************************************

I then spent some time reviewing that table and its partitions – they seemed well balanced.

SQL> select partition_name from dba_tab_partitions where table_name = 'WRI$_OPTSTAT_HISTHEAD_HISTORY';

 

PARTITION_NAME
------------------------------
SYS_P909952
SYS_P908538
SYS_P906715
SYS_P905214
P_PERMANENT

 

select PARTITION_NAME, HIGH_VALUE, TABLESPACE_NAME, NUM_ROWS, LAST_ANALYZED
from DBA_TAB_PARTITIONS where table_owner = 'SYS' and table_name = 'WRI$_OPTSTAT_HISTHEAD_HISTORY' order by 1;

 

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME               NUM_ROWS LAST_ANAL
------------------------------ -------------------------------------------------------------------------------- ------------------------------ ---------- ---------
P_PERMANENT                    TO_DATE(' 2014-02-23 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX          0 15-SEP-15
SYS_P905214                    TO_DATE(' 2015-09-29 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX    5026601 29-SEP-15
SYS_P906715                    TO_DATE(' 2015-09-30 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX    5178283 30-SEP-15
SYS_P908538                    TO_DATE(' 2015-10-01 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX    4977859 01-OCT-15
SYS_P909952                    TO_DATE(' 2015-10-09 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX

 

select trunc(SAVTIME), count(*) from WRI$_OPTSTAT_HISTGRM_HISTORY group by trunc(SAVTIME) order by 1;
TRUNC(SAV   COUNT(*)
--------- ----------
18-SEP-15   22741579
19-SEP-15   24299504
20-SEP-15   22816509
21-SEP-15   24200455
22-SEP-15   24156330
23-SEP-15   24407643
24-SEP-15   23469620
25-SEP-15   23221382
26-SEP-15   25372495
27-SEP-15   23144212
28-SEP-15   23522809
29-SEP-15   24362715
30-SEP-15   25418527
01-OCT-15   24383030

I decided to rebuild the index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST, really only because at 192Gb it seemed very big for the number of rows it was holding.


col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type order by 1 asc
/
 
        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 WRI$_OPTSTAT_SYNOPSIS_PARTGRP            TABLE
         0 WRI$_OPTSTAT_AUX_HISTORY                 TABLE
        72 WRI$_OPTSTAT_OPR                         TABLE
       385 WRI$_OPTSTAT_SYNOPSIS_HEAD$              TABLE
       845 WRI$_OPTSTAT_IND_HISTORY                 TABLE
     1,221 WRI$_OPTSTAT_TAB_HISTORY                 TABLE

 

select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type order by 1
/        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
         0 I_WRI$_OPTSTAT_SYNOPPARTGRP              INDEX
         0 WRH$_PLAN_OPTION_NAME_PK                 INDEX
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
        43 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
       293 I_WRI$_OPTSTAT_SYNOPHEAD                 INDEX
       649 I_WRI$_OPTSTAT_IND_ST                    INDEX
       801 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
       978 I_WRI$_OPTSTAT_TAB_ST                    INDEX
     1,474 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
     4,968 I_WRI$_OPTSTAT_HH_ST                     INDEX
     6,807 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX
   129,509 I_WRI$_OPTSTAT_H_ST                      INDEX
   192,304 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX
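The rebuild statements themselves did not make it into my notes; a minimal sketch of the approach is below – treat the ONLINE and PARALLEL 8 options as assumptions rather than a record of the exact commands run.

-- minimal sketch: rebuild the largest stats history index
-- (ONLINE and PARALLEL 8 are assumptions; the degree was reset later)
alter index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST rebuild online parallel 8;

-- generate the same rebuild for the remaining stats history indexes
select 'alter index '||segment_name||' rebuild online parallel 8;'
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like 'I_WRI$_OPTSTAT%'
and    segment_type = 'INDEX';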

That index dropped to 13Gb – needless to say I continued with the rest.

         MB SEGMENT_NAME                             SEGMENT
---------- ---------------------------------------- ------
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
         0 I_WRI$_OPTSTAT_SYNOPPARTGRP              INDEX
         0 WRH$_PLAN_OPTION_NAME_PK                 INDEX
         2 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
        39 I_WRI$_OPTSTAT_IND_ST                    INDEX
        47 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
        56 I_WRI$_OPTSTAT_TAB_ST                    INDEX
        72 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
       200 I_WRI$_OPTSTAT_SYNOPHEAD                 INDEX
       482 I_WRI$_OPTSTAT_HH_ST                     INDEX
       642 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX
     9,159 I_WRI$_OPTSTAT_H_ST                      INDEX
    12,560 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX

An overall saving of 300Gb

Item                     Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         214.12 SYS

 

I wasn’t too hopeful that this would benefit the purge routine because in the trace file above you can see that it is not using an index.

exec dbms_stats.purge_stats(sysdate-14);
Elapsed: 01:47:17.15

Better, but not by much, so I rebuilt the tables using a move command with parallel 8 and then rebuilt the indexes again (a table move marks its indexes UNUSABLE, so the index rebuild is mandatory). I intentionally left the tables and indexes with the parallel setting they acquire from the move/rebuild, just for testing purposes.

select 'alter table '||segment_name||' move tablespace SYSAUX parallel 8;'
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like '%OPT%'
and    segment_type = 'TABLE';

 

alter table WRI$_OPTSTAT_TAB_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_IND_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_AUX_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_OPR move tablespace SYSAUX parallel 8;
alter table WRH$_OPTIMIZER_ENV move tablespace SYSAUX parallel 8;
alter table WRH$_PLAN_OPTION_NAME move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_SYNOPSIS_PARTGRP move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_SYNOPSIS_HEAD$ move tablespace SYSAUX parallel 8;

Then I rebuilt the indexes as above.

Now we see the space benefits: usage is down to 182Gb from 527Gb – 345Gb saved.


 

Item                      Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         182.61 SYS

The first purge, with no data left to purge, was instantaneous:

SQL> exec dbms_stats.purge_stats(sysdate-14);
Elapsed: 00:00:00.91

 

The second, which purged a day’s worth of history, took only 9 minutes:

SQL> exec dbms_stats.purge_stats(sysdate-13);
Elapsed: 00:09:40.65

I then put the tables and indexes back to a parallel degree of 1 and ran a purge of another day’s worth of data, which showed a consistent timing.
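The statements used to set the degree back are not shown here; a minimal sketch of the idea, following the same generator style as the move commands above, would be:

-- minimal sketch: return the stats history tables and indexes to NOPARALLEL
-- after the parallel move/rebuild
select 'alter table '||segment_name||' noparallel;'
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like '%OPT%'
and    segment_type = 'TABLE';

select 'alter index '||segment_name||' noparallel;'
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like 'I_WRI$_OPTSTAT%'
and    segment_type = 'INDEX';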

SQL> exec dbms_stats.purge_stats(sysdate-12);
Elapsed: 00:09:32.05

It is clear from the number of posts out there on managing optimizer stats and their history that ongoing maintenance and management is required, and I hope my set of posts helps readers to do just that.

The following website has some very good information on this topic: http://maccon.wikidot.com/sysaux-stats. I must also credit my colleague Andy Challen for generating some good ideas and providing a couple of scripts – talking through a plan of action does seem to be a good way of working.

 


12.1.0.2 enhancement – large trace file produced when cursor becomes obsolete


A minor change came in with 12.1.0.2 which is causing large trace files to be produced under certain conditions.

The traces are produced as a result of an enhancement introduced in an unpublished bug.

The aim of the enhancement is to improve cursor sharing diagnostics by dumping information about an obsolete parent cursor and its child cursors after the parent cursor has been obsoleted N times.
A parent cursor is marked as obsolete once a certain number of child cursors have been created under it, as defined by the parameter “_cursor_obsolete_threshold”. In 12.1 the default is 1024 child cursors.
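To see which statements are accumulating large numbers of child cursors (and are therefore candidates for eventually being obsoleted), a simple check against v$sql can be used – a rough sketch, with the 100-child cut-off chosen purely for illustration:

-- minimal sketch: parent cursors with an unusually high child cursor count
select sql_id, count(*) child_cursors
from   v$sql
group  by sql_id
having count(*) > 100
order  by 2 desc;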

A portion of the trace file is shown below. We are only seeing it in a 12.1.0.2 OEM database at the moment, and the obsoleted parent cursors themselves are not affecting us in a noticeable manner, although the size of the trace files and their frequency is.

----- Cursor Obsoletion Dump sql_id=a1kj6vkgvv3ns -----
Parent cursor obsoleted 1 time(s). maxchild=1024 basephd=0xc4720a8f0 phd=0xc4720a8f0

 

The SQL being:

SELECT TP.PROPERTY_NAME, TP.PROPERTY_VALUE FROM MGMT_TARGET_PROPERTIES TP, MGMT_TARGETS T WHERE TP.TARGET_GUID = T.TARGET_GUID AND T.TARGET_NAME = :B2 AND T.TARGET_TYPE = :B1

 

The ‘feature’ is used to help Oracle support track issues with cursor sharing.

There is a parameter, described in MoS note 1955319.1, which can be used to stop or reduce the frequency of these traces.

The dump is controlled by the parameter “_kks_obsolete_dump_threshold”, which can have a value between 0 and 8.

When the value is equal to 0, the obsolete cursor dump will be disabled completely:

alter system set "_kks_obsolete_dump_threshold" = 0;

When set to a value N between 1 and 8, the cursor will be dumped after it has been obsoleted N times. By default a parent cursor is dumped the first time it is obsoleted, i.e. N is 1.

alter system set "_kks_obsolete_dump_threshold" = 8;

 

So the workaround is to set this underscore parameter to control when, or whether, the cursor is dumped: 0 disables the dump completely, while 8 means the dump only happens after 8 iterations of being made obsolete.

We will be raising a support call about the underlying cursor problem, but we will probably set the dump threshold to 8 at first, and then to 0 if we still keep getting large trace files. The default is 1.
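If you want to confirm the current value of the hidden parameter before and after changing it, the usual x$ lookup can be used (connect as SYSDBA) – this is a generic technique rather than anything specific to this note:

-- minimal sketch: show the current system value of the hidden parameter
select a.ksppinm  parameter_name,
       b.ksppstvl current_value,
       b.ksppstdf is_default
from   x$ksppi a, x$ksppsv b
where  a.indx = b.indx
and    a.ksppinm = '_kks_obsolete_dump_threshold';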

