Channel: Oracle DBA – A lifelong learning experience

Changing a database link password


I recently found out  that it is possible to change a database link password without dropping and recreating a database link in its entirety.

To be honest I thought this might have existed forever and I had just never come across it, but it actually came out in 11gR2.

The ALTER DATABASE LINK statement can be used, and you do not need to specify the target service either – all you need to do is run the following command as the user that owns the pre-existing database link:

ALTER DATABASE LINK JOHN connect to USER identified by  PASSWORD;
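As a quick illustration with made-up names (a link called SALES_LINK owned by the current schema, remote user APP_USER), the change can be followed by a simple query over the link to prove it still works:

-- hypothetical link and user names; run as the owner of the link
ALTER DATABASE LINK sales_link CONNECT TO app_user IDENTIFIED BY "N3w_Passw0rd";
SELECT sysdate FROM dual@sales_link;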

I know it is not a major change, but a quick canvass amongst fellow DBAs showed that nobody had noticed its arrival either, so a heads-up might be helpful to someone.



opatch lsinventory gives “line 384: [: =: unary operator expected”


I noticed the error message when running lsinventory against a 12.1.0.2 Oracle Home. As the command worked I didn’t think any more of it until, on the same server, I got the same error message against an 11.2.0.1 home.

opatch lsinventory
 tr: extra operand `y'
 Try `tr --help' for more information.
 /app/oracle/product/11.2.0.1/dbhome_1/OPatch/opatch: line 384: [: =: unary operator expected

There is a MOS note which provides a solution – 1551584.1

Modify following line (line number 384) in file $ORACLE_HOME/OPatch/opatch
if [ `echo $arg | tr [A-Z] [a-z]` = "-invptrloc" ]; then
to
if [ `echo $arg | tr A-Z a-z` = "-invptrloc" ]; then

However the real problem is caused by the presence of a file with a single-character name in the current directory. Indeed there was such a file, ‘x’, and once that was removed the opatch lsinventory command worked as normal.
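The mechanics are easy to reproduce in a scratch directory; this is just an illustration of the shell globbing involved, assuming a bash-like shell:

# Unquoted, [A-Z] and [a-z] are glob patterns, so any matching single-character
# file names in the current directory are substituted before tr even runs.
mkdir /tmp/globdemo && cd /tmp/globdemo
touch x y
echo "-INVPTRLOC" | tr [A-Z] [a-z]   # expands to: tr [A-Z] x y  ->  "tr: extra operand 'y'"
echo "-INVPTRLOC" | tr A-Z a-z       # the corrected form from the MOS note is immune to globbing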

This bug appears when a new version of OPatch is installed; in my case I had just added OPatch version 12.1.0.6.


Issue with Datapatch – AKA SQL Patching Tool after cloning a database


There have been a few changes in the way patches are managed and monitored in 12c and whilst looking at this I found a potential problem that might occur when you clone or copy databases around, or even build them from a template file.

Firstly when you apply a PSU and run an opatch lsinventory command you now see a description of the patch rather than just a patch number – here showing that PSU 1 has been applied. This came in at 11.2.0.3 and in my opinion is really helpful.

 

Oracle Database 12c                                                  12.1.0.2.0
There are 1 products installed in this Oracle Home.

Interim patches (1) :
Patch  19303936     : applied on Wed Feb 18 15:59:10 GMT 2015
Unique Patch ID:  18116864
Patch description:  "Database Patch Set Update : 12.1.0.2.1 (19303936)"
   Created on 6 Oct 2014, 12:01:37 hrs PST8PDT

 

The major change when applying a PSU in 12c is the datapatch utility otherwise known as the SQL Patching Tool. This is run as a post PSU installation step and populates a new view dba_registry_sqlpatch when the sql part of the PSU has been run against each database belonging to the Oracle Home that has just been patched.

I give an example below, after applying 12.1.0.2 PSU2 to a home and then running datapatch against the L19 database using that home.

 

[L19]/app/oracle/product/12.1.0.2/dbhome_1/OPatch $./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Thu Feb 19 15:42:14 2015
Copyright (c) 2014, Oracle.  All rights reserved.
 Connecting to database...OK
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_1739.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
Determining current state...done
 Current state of SQL patches:
Bundle series PSU:
  ID 2 in the binary registry and not installed in the SQL registry

 

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    19769480 (Database Patch Set Update : 12.1.0.2.2 (19769480))

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 19769480 apply: SUCCESS
  logfile: /app/oracle/cfgtoollogs/sqlpatch/19769480/18350083/19769480_apply_L19_2015Feb19_15_42_46.log (no errors)
SQL Patching tool complete on Thu Feb 19 15:43:14 2015

 

However I did come across a couple of situations where datapatch did not work, giving the errors below.

 $./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Thu Feb 19 07:41:13 2015
Copyright (c) 2014, Oracle.  All rights reserved.

Connecting to database...OK
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_18850.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 for information on how to resolve the above errors.
SQL Patching tool complete on Thu Feb 19 07:41:42 2015

The MOS note referred to provides a list of documents associated with datapatch errors, but in my case it was not very helpful.

The recommended SQL gave me a bit of a clue but the log files did not provide an answer.

SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;
VERIFY_QUERYABLE_INVENTORY
--------------------------------------------------------------------------------
ORA-22285: non-existent directory or file for FILEEXISTS operation

The answer was related to the correct directories not being in place. Three new directory objects are created to support datapatch:

OPATCH_SCRIPT_DIR
/app/oracle/product/12.1.0.2/dbhome_2/QOpatch

OPATCH_LOG_DIR
/app/oracle/product/12.1.0.2/dbhome_2/QOpatch

OPATCH_INST_DIR
/app/oracle/product/12.1.0.2/dbhome_2/OPatch

 

In my case the database had been built from a pre-provisioned clone of another database, and the Oracle Home on this server had a different path – /app/oracle/product/12.1.0.2/dbhome_1 – from the one the directories were pointing at.

So the error message was correct: the directories were pointing to a location that did not exist, and the fix was very easy.

drop directory OPATCH_SCRIPT_DIR;
drop directory OPATCH_LOG_DIR;
drop directory OPATCH_INST_DIR;
create directory  OPATCH_SCRIPT_DIR as '/app/oracle/product/12.1.0.2/dbhome_1/QOpatch';
create directory OPATCH_LOG_DIR as '/app/oracle/product/12.1.0.2/dbhome_1/QOpatch';
create directory OPATCH_INST_DIR as '/app/oracle/product/12.1.0.2/dbhome_1/OPatch';

 

So if you are likely to be using different Oracle Home paths across servers and are copying, cloning, duplicating or pre-provisioning databases around, it is worth running a post-clone script to check that the OPATCH directories are pointing to the correct Oracle Home.

It might also be worth writing an OEM script to check each environment to ensure that the ORACLE_HOME path is repeated in the OPATCH entries in DBA_DIRECTORIES.
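A minimal sketch of such a check is below, assuming the standard directory names shown above; substitute the Oracle Home path the database actually runs from:

select directory_name, directory_path
from   dba_directories
where  directory_name in ('OPATCH_SCRIPT_DIR', 'OPATCH_LOG_DIR', 'OPATCH_INST_DIR')
and    directory_path not like '/app/oracle/product/12.1.0.2/dbhome_1/%';
-- any rows returned point at a home other than the expected one and need recreating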

Simon Pane at Pythian has written a good blog entry around datapatch. In particular I recommend the script he provides, which shows where an Oracle Home has had a PSU applied but a database has not, or where a database has been added to an Oracle Home but has not had datapatch run against it.


Running two oracle installations from the same terminal


Two posts from me on the same day. The other one about Datapatch is about a brand new utility in 12c and is probably new to most people. This post caused mixed reactions when I mentioned it at work last week. Some people laughed at my naivety in not knowing about it, others took the same view as me and were interested to hear about it as it may prove useful one day.

A colleague had to install new Oracle binaries on both primary and standby servers and I suggested he get someone else to run one of the installations in parallel, as he could not run two installers at the same time. He came back later to show how it could be done and prove me wrong.

When starting up Xming using the Launch script the first window that comes up has a Display number of 0 at the bottom

[Screenshot: Xming launch dialog showing a Display number of 0]

 

That maps to the :0 in the DISPLAY command you export

export DISPLAY=10.3.127.4:0.0

 So if you want a second installer to run you change the display number to 1 and use

export DISPLAY=10.3.127.4:1.0

And voila – you can have two Xming installer sessions running in parallel.

The last 0 (.0) is used to have multiple screens within the same display – something that is not used very frequently these days.
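Putting it together, a rough sketch of the two parallel sessions (using the workstation address from the example above, and assuming Xming has been launched twice with display numbers 0 and 1):

# terminal 1 - first installation
export DISPLAY=10.3.127.4:0.0
./runInstaller

# terminal 2 - second installation, pointed at the second Xming instance
export DISPLAY=10.3.127.4:1.0
./runInstaller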

So if you think I have wasted your time feel free to ignore this post, and if you have found it useful I would be happy to get comments saying so.

 


Getting sql statements out of a trace file


The focus of this post started off in one direction and ended up in another. Originally I had been running a drop user script which had hung, and even when I killed the process I could not drop the users as it gave an "ORA-01940: cannot drop a user that is currently connected" – despite the users having left the company months ago and there being no chance of them actually having connected sessions. My suspicion was that the drop user command actually took a lock on the users or connected as them whilst dropping them. I was also intrigued by the length of time it took to drop users who had no objects. Therefore I created a user, dropped it and traced the session to see what was happening. I was amazed by the size of the output file and that is where the direction changed. I wanted to find an easy way to get all the lines of SQL out of a trace file so that I could review them quickly.

SQL*Plus: Release 11.2.0.2.0 Production on Sat Mar 14 06:55:08 2015
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

create user test identified by test;
User created.
set timing on
ALTER SESSION SET sql_trace=TRUE;
Session altered.
Elapsed: 00:00:00.00
drop user test;
User dropped.
Elapsed: 00:00:00.26
ALTER SESSION SET sql_trace=FALSE;
Session altered.
Elapsed: 00:00:00.00

I now had a trace file, and I used the insert=filename parameter of tkprof to produce a script containing all the SQL statements in the trace file, in the same order.

tkprof xe_ora_5796.trc xe_ora_5796.tkp insert=xe_ora_5796.sql

Edit the SQL file that is produced (xe_ora_5796.sql in my example), changing the SQL_STATEMENT column from LONG to VARCHAR2(4000), then run it in SQL*Plus to create and populate the table.

REM Edit and/or remove the following CREATE TABLE

REM statement as your needs dictate.

CREATE TABLE tkprof_table
(
date_of_insert                       DATE
,cursor_num                           NUMBER
,depth                               NUMBER
,user_id                            NUMBER
,parse_cnt                           NUMBER
,parse_cpu                           NUMBER
,parse_elap                           NUMBER
,parse_disk                           NUMBER
,parse_query                         NUMBER
,parse_current                       NUMBER
,parse_miss                           NUMBER
,exe_count                           NUMBER
,exe_cpu                             NUMBER
,exe_elap                             NUMBER
,exe_disk                             NUMBER
,exe_query                           NUMBER
,exe_current                         NUMBER
,exe_miss                             NUMBER
,exe_rows                             NUMBER
,fetch_count                         NUMBER
,fetch_cpu                          NUMBER
,fetch_elap                           NUMBER
,fetch_disk                           NUMBER
,fetch_query                         NUMBER
,fetch_current                       NUMBER
,fetch_rows                           NUMBER
,ticks                               NUMBER
,sql_statement                       varchar2(4000)
);
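Once edited, the script just needs running in SQL*Plus to create and populate TKPROF_TABLE (file name from the tkprof command above):

SQL> @xe_ora_5796.sql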

Now I ran a simple select from the created table tkprof_table and voila! – all the sql statements nicely formatted for viewing.
At this point I had decided that one of the packages early on must take a lock on the user – probably the dbms_cdc_utility.drop_user procedure, although that seems to be more focused on dropping change tables during a drop user command.

  select substr(sql_statement,1,130) from tkprof_table;

SUBSTR(SQL_STATEMENT,1,130)
----------------------------------------------------------------------------------------------------------------------------------------------------------------
alter session set sql_trace=true
BEGIN DBMS_SESSION.set_sql_trace(sql_trace => TRUE); END;
drop user test
BEGIN
BEGIN
IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) TH

select text from view$ where rowid=:1
select obj# from RecycleBin$ where owner#=:1 and    to_number(bitand(flags, 4)) = 4
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_size, minimum, maximum, distcnt, lowval, hival, de
select obj#, type#, flags, related, bo, purgeobj, con#    from RecycleBin$ where owner#=:1
select obj#,type#,ctime,mtime,stime, status, dataobj#, flags, oid$, spare1, spare2 from obj$ where owner#=:1 and name=:2 and names
select t.ts#,t.file#,t.block#,nvl(t.bobj#,0),nvl(t.tab#,0),t.intcols,nvl(t.clucols,0),t.audit$,t.flags,t.pctfree$,t.pctused$,t.ini
select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags,i.property,i.pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.di
select pos#,intcol#,col#,spare1,bo#,spare2,spare3 from icol$ where obj#=:1
select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,s
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr, NVL
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname from obj$ o where o.obj#=:1
select col#, grantee#, privilege#,max(mod(nvl(option$,0),2)) from objauth$ where obj#=:1 and col# is not null group by privilege#,
select grantee#,privilege#,nvl(col#,0),max(mod(nvl(option$,0),2))from objauth$ where obj#=:1 group by grantee#,privilege#,nvl(col#
select con#,obj#,rcon#,enabled,nvl(defer,0),spare2,spare3 from cdef$ where robj#=:1
select con#,type#,condlength,intcols,robj#,rcon#,match#,refact,nvl(enabled,0),rowid,cols,nvl(defer,0),mtime,nvl(spare1,0),spare2,s
select intcol#,nvl(pos#,0),col#,nvl(spare1,0) from ccol$ where con#=:1
select null from obj$ where owner#=:1 and type#!=10 union all select null from link$ where owner#=:1 union all select null from st
select u.name, o2.name, o2.obj# from ind$ i, obj$ o1, obj$ o2, user$ u where o1.owner# = :1  and o1.type# = 2 and i.type# = 9 and
select u.name, o.name, o.obj# from obj$ o, user$ u, ind$ i where o.owner#=:1 and o.owner#=u.user# and o.obj#=i.obj# and i.type#=9
begin sys.dbms_cdc_utility.drop_user(:1); end;
begin sys.dbms_parallel_execute_internal.drop_all_tasks(:1); end;
select audit$,options from procedure$ where obj#=:1
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(property,0),subname,type#,d_attrs from dependency$ d, ob
select order#,columns,types from access$ where d_obj#=:1
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ where obj#=:1 and part=:2 and version=:3 order by piec
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by piec
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece from idl_char$ where obj#=:1 and part=:2 and version=:3 order by p
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ where obj#=:1 and part=:2 and version=:3 order by piec
SELECT USER_ID FROM DBA_USERS WHERE USERNAME = :B1
select cols,audit$,textlength,intcols,property,flags,rowid from view$ where obj#=:1
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by bucket
select col#,intcol#,toid,version#,packed,intcols,intcol#s,flags, synobj#, nvl(typidcol#, 0) from coltype$ where obj#=:1 order by i
select intcol#, toid, version#, intcols, intcol#s, flags, synobj# from subcoltype$ where obj#=:1 order by intcol# asc
select col#,intcol#,ntab# from ntab$ where obj#=:1 order by intcol# asc
select l.col#, l.intcol#, l.lobj#, l.ind#, l.ts#, l.file#, l.block#, l.chunk, l.pctversion$, l.flags, l.property, l.retention, l.f
select col#,intcol#,charsetid,charsetform from col$ where obj#=:1 order by intcol# asc
select intcol#,type,flags,lobcol,objcol,extracol,schemaoid,  elemnum from opqtype$ where obj# = :1 order by intcol# asc
select name from obj$ where owner# = :1 and type# = 82
select vname from sys.snap$ where sowner = :1 and instsite = 0  and parent_vname IS NULL
select name from sys.transformations$ where owner = :1
select queue_name from "_DBA_STREAMS_QUEUES" where queue_owner=:1
delete from system.aq$_internet_agent_privs WHERE db_username = NLS_UPPER(:1)
select decode(u.type#, 2, u.ext_username, u.name), o.name,        t.update$, t.insert$, t.delete$, t.enabled,        decode(bitand
select name from system.aq$_queue_tables where schema = :1
select o.name from rule_set$ rs, obj$ o, user$ u where u.name = :1 and  o.owner# = u.user# and o.obj# = rs.obj#
select o.name from rule$ r, obj$ o, user$ u where u.name = :1 and  o.owner# = u.user# and o.obj# = r.obj#
SELECT X.OBJNUM FROM  (select a.obj# OBJNUM, b.owner# OWNNUM   from sys.scheduler$_job a, sys.obj$ b   where a.obj# = b.obj#   uni
select bo#, intcol# from icoldep$ where obj#=:1
select a.obj# from sys.scheduler$_program a, sys.obj$ b  where a.obj# = b.obj# and b.owner# = :1
select a.obj# from sys.scheduler$_schedule a, sys.obj$ b  where a.obj# = b.obj# and b.owner# = :1
select a.obj# from sys.scheduler$_chain a, sys.obj$ b  where a.obj# = b.obj# and b.owner# = :1
select a.obj# from sys.scheduler$_destinations a, sys.obj$ b  where a.obj# = b.obj# and b.owner# = :1
select a.obj# from sys.scheduler$_file_watcher a, sys.obj$ b  where a.obj# = b.obj# and b.owner# = :1
select a.obj# from sys.scheduler$_credential a, sys.obj$ b  where a.obj# = b.obj# and b.owner# = :1
select a.obj# from sys.scheduler$_window_group a, sys.obj$ b  where a.obj# = b.obj# and b.owner# = :1
select a.obj# from sys.scheduler$_job a, sys.user$ u  where bitand(a.job_status, 1) = 1 and  u.user# = :1 and u.name = a.queue_own
DECLARE   sqlcur        NUMBER;   dummy         NUMBER;   id            NUMBER;   user_name     VARCHAR2(30) := :1;   sqltxt
SELECT USER# FROM SYS.USER$ U WHERE U.NAME = :B1
SELECT COMPARISON_ID FROM COMPARISON$ WHERE USER# = :B1
select max(obj#) from edition$
select o.name,o.type#,o.obj#,o.remoteowner,o.linkname,o.namespace, o.subname from obj$ o, tab$ t  where o.owner#=:1 and    (bitand
select name, type#, obj#, remoteowner, linkname, namespace, subname from obj$ o where (bitand(:2,2)=2) and o.owner#=:1 and type# =
select name, type#, obj#, remoteowner, linkname, namespace, subname from obj$ o where (bitand(:2,2)=2) and o.owner#=:1 and type# =
select name, type#, obj#, remoteowner, linkname, namespace, subname from obj$ o where (bitand(:2,2)=2) and o.owner#=:1 and (type#
select name, type#, obj#, remoteowner, linkname, namespace, subname from obj$ o where o.owner#=:1 and       ((bitand(:2,1)=1) and
select name, type#, obj#, remoteowner, linkname, namespace, subname from obj$ o where (bitand(:2,1)=1) and o.owner#=:1 and type#=1
select name,type#,obj#,remoteowner,linkname,namespace, subname from obj$ o where (bitand(:2,1)=1) and o.owner#=:1 and type# = 13
select name,type#,obj#,remoteowner,linkname,namespace, subname from obj$ o where (bitand(:2,1)=1) and o.owner#=:1 and type# = 14
select o.name, o.type#, o.obj#, o.remoteowner, o.linkname, o.namespace,         o.subname from obj$ o where o.owner# = :1   and (b
select name, type#, obj#, remoteowner, linkname, namespace, subname  from obj$ o  where (bitand(:2,1)=1) and o.owner#=:1 and type#
select p_obj# from edition$ where obj# = :1
select name from link$ where owner#=:1
select seq, owner, pack, proc from sys.duc$ where operation#=1  and seq > :1 or (seq = :1 and        (owner > :2 or (owner = :2 an
begin "CTXSYS"."CTX_ADM"."DROP_USER_OBJECTS"(:myuser); end;
select user#,password,datats#,tempts#,type#,defrole,resource$, ptime,decode(defschclass,NULL,'DEFAULT_CONSUMER_GROUP',defschclass)
select node,owner,name from syn$ where obj#=:1
select timestamp, flags from fixed_obj$ where obj#=:1
select inst_id,addr,ksqrsidt,ksqrsid1,ksqrsid2 from x$ksqrs where bitand(ksqrsflg,2)!=0
select  ADDR , TYPE , ID1 , ID2 from GV$RESOURCE where inst_id = USERENV('Instance')
SELECT ID1 FROM V$RESOURCE WHERE TYPE = 'RT' AND ID2 = 0 AND ROWNUM = 1
SELECT THS_ID FROM DR$THS WHERE THS_OWNER# IN (SELECT USER# FROM SYS.USER$ WHERE NAME = :B1 )
select metadata from kopm$  where name='DB_FDO'
SELECT SPL_ID FROM DR$STOPLIST WHERE SPL_OWNER# IN (SELECT USER# FROM SYS.USER$ WHERE NAME = :B1 )
SELECT SGP_ID FROM DR$SECTION_GROUP WHERE SGP_OWNER# IN (SELECT USER# FROM SYS.USER$ WHERE NAME = :B1 )
DELETE FROM DR$INDEX_SET_INDEX WHERE IXX_IXS_ID IN (SELECT IXS_ID FROM DR$INDEX_SET WHERE IXS_OWNER# IN (SELECT USER# FROM SYS.USE
DELETE FROM DR$INDEX_SET WHERE IXS_OWNER# IN (SELECT USER# FROM SYS.USER$ WHERE NAME = :B1 )
SELECT IDX_NAME, IDX_ID FROM DR$INDEX WHERE IDX_OWNER# IN (SELECT USER# FROM SYS.USER$ WHERE NAME = :B1 )
SELECT USERENV('SCHEMAID') FROM DUAL
begin "SYS"."DBMS_DEFER_IMPORT_INTERNAL"."DROP_PROPAGATOR_CASCADE"(:myuser); end;
begin "SYS"."DBMS_IJOB"."DROP_USER_JOBS"(:myuser); end;
select schema, package, flags from context$ where obj#=:1
SELECT USER# FROM SYS.USER$ WHERE NAME = :B1
DELETE FROM REGISTRY$SCHEMAS WHERE SCHEMA# = :B1
begin "SYS"."DBMS_REPCAT_RGT_UTL"."DROP_USER_TEMPLATES"(:myuser); end;
DELETE FROM SYSTEM.REPCAT$_TEMPLATE_SITES WHERE USER_NAME = :B1
begin "SYS"."DBMS_REPCAT_UTL"."DROP_USER_REPSCHEMA"(:myuser); end;
SELECT C.CHARSETID FROM SYS.COL$ C, OBJ$ O, USER$ U WHERE C.CHARSETFORM = :B1 AND U.NAME='SYSTEM' AND U.USER#=O.OWNER# AND O.NAME=
select value$ from sys.props$ where name = :1
SELECT TYPE, ONAME FROM REPCAT_GENERATED WHERE SNAME = :B1 FOR UPDATE
SELECT STATUS FROM SYSTEM.REPCAT$_REPOBJECT WHERE SNAME = :B3 AND ONAME = :B2 AND TYPE = :B1 FOR UPDATE
DELETE FROM SYSTEM.REPCAT$_REPCATLOG WHERE SNAME = :B1 AND (GNAME IS NOT NULL OR ONAME IS NOT NULL)
select baseobject,type#,update$,insert$,delete$,refnewname,refoldname,whenclause,definition,enabled,property,sys_evts,nttrigcol,nt
select owner#, status from obj$ o where obj# = :1
select tc.type#,tc.intcol#,tc.position#,c.type#, c.length,c.scale,c.precision#,c.charsetid,c.charsetform, decode(bitand(c.property
select case when (bitand(u.spare1, 16) = 0) then         0        when (u.type# = 2) then         (u.spare2)        else         1
UPDATE DBMS_ALERT_INFO SET CHANGED = 'Y', MESSAGE = :B2 WHERE NAME = UPPER(:B1 )
SELECT SID FROM DBMS_ALERT_INFO WHERE NAME = UPPER(:B1 )
SELECT DISTINCT ONAME, TYPE FROM SYSTEM.REPCAT$_REPOBJECT WHERE SNAME = :B5 AND TYPE IN (:B4 , :B3 , :B2 , :B1 ) AND ONAME NOT IN
DELETE FROM SYSTEM.REPCAT$_REPOBJECT WHERE SNAME = :B1
DELETE FROM SYSTEM.REPCAT$_GROUPED_COLUMN WHERE SNAME = :B1
DELETE FROM SYSTEM.REPCAT$_COLUMN_GROUP WHERE SNAME = :B1
DELETE FROM SYSTEM.REPCAT$_PARAMETER_COLUMN WHERE SNAME = :B1
DELETE FROM SYSTEM.REPCAT$_RESOLUTION WHERE SNAME = :B1
DELETE FROM SYSTEM.REPCAT$_CONFLICT WHERE SNAME = :B1
DELETE FROM SYSTEM.REPCAT$_REPCAT WHERE GOWNER = :B1
SELECT USER_ID FROM SYS.DBA_USERS WHERE USERNAME = :B1
SELECT COUNT(*) FROM DBA_REPGROUP_PRIVILEGES RP WHERE RP.USERNAME = :B4 AND ((:B2 = :B3 AND RP.RECEIVER = 'Y') OR (:B2 = :B1 AND R
select 1 from dual where exists (select 1 from system.repcat$_repprop prop  where prop.type in (-1,2,9,-4) and prop.how in (1,3))
begin "SYS"."DBMS_SQLTUNE_INTERNAL"."I_DROP_USER_SQLSETS"(:myuser); end;
SELECT USER# FROM USER$ WHERE NAME = :B1
SELECT ID, NAME FROM DBA_SQLSET_DEFINITIONS WHERE OWNER = :B1
begin "SYS"."DBMS_STREAMS_ADM_UTL"."PROCESS_DROP_USER_CASCADE"(:myuser); end;
SELECT U.USER# FROM USER$ U WHERE U.NAME = :B1
SELECT CAPTURE_NAME, STATUS, DECODE(FLAGS, 512, 'YES', 'NO') IS_SYNC_CAP FROM SYS.STREAMS$_CAPTURE_PROCESS WHERE CAPTURE_USERID =
SELECT APPLY_NAME, STATUS FROM SYS.STREAMS$_APPLY_PROCESS WHERE APPLY_USERID = :B1
SELECT PRIVILEGE_TYPE, PRIVILEGE_LEVEL FROM SYS.GOLDENGATE$_PRIVILEGES WHERE USERNAME = :B1 FOR UPDATE
begin "SYS"."PRVT_ADVISOR"."DELETE_USER_TASKS"(:myuser); end;
SELECT TASK_ID FROM SYS.DBA_ADVISOR_TASKS WHERE OWNER = :B1
begin "SYS"."DBMS_ISNAPSHOT"."DROP_USER_SNAPSHOTS"(:myuser); end;
DELETE FROM SYS.SLOG$ WHERE MOWNER=:B1
DELETE FROM SYS.MLOG$ WHERE MOWNER=:B1
SELECT DISTINCT SITE.SITE_NAME FROM SYS.SNAP$ S, SYS.SNAP_SITE$ SITE WHERE S.SOWNER = :B1 AND S.INSTSITE = SITE.SITE_ID
DELETE FROM SYS.SNAP$ WHERE SOWNER=:B1
DELETE FROM SYS.REG_SNAP$ WHERE SOWNER = :B2 AND SNAPSITE = :B1
DELETE FROM SYS.SNAP_COLMAP$ WHERE SOWNER = :B1
DELETE FROM SYS.SNAP_REFOP$ WHERE SOWNER = :B1
DELETE FROM SYS.SNAP_REFTIME$ WHERE SOWNER = :B1
DELETE FROM SYS.MLOG_REFCOL$ WHERE MOWNER = :B1
DELETE FROM SYS.SNAP_OBJCOL$ WHERE SOWNER = :B1
begin "SYS"."DBMS_IREFRESH"."DROP_USER_GROUPS"(:myuser); end;
DELETE FROM RGCHILD$ WHERE OWNER = :B1
select ts#,file#,block#,cols,nvl(size$,-1),pctfree$,pctused$,initrans,maxtrans,hashkeys,func,extind,avgchn,nvl(degree,1),nvl(insta
DELETE FROM RGCHILD$ WHERE REFGROUP IN (SELECT REFGROUP FROM RGROUP$ WHERE OWNER = :B1 )
DELETE FROM RGROUP$ WHERE OWNER = :B1
select OBJ#, PNAME from sys.fga$ where POWNER#=:1
select default$ from col$ where rowid=:1
select indmethod# from ind$ where obj#=:1
select max(intcol#) from col$ where obj#=:1
select parttype, partcnt, partkeycols, flags, defts#, defpctfree, defpctused, definitrans, defmaxtrans, deftiniexts, defextsize, d
SELECT count(*) FROM XDB.XDB$XDB_READY
SELECT count(rowid) FROM XDB.XDB$ROOT_INFO
select T.pathtabobj#,T.flags,T.rawsize,T.parameters.getClobVal(), T.pendtabobj#,T.snapshot from xdb.xdb$dxptab T where idxobj#=:1
select name,owner# from obj$ where obj#=:1
select count(distinct(groupname)) from xdb.xdb$xtab where idxobj#=:1
select col#,intcol#,reftyp,stabid,expctoid from refcon$ where obj#=:1 order by intcol# asc
select obj# from oid$ where user#=:1 and oid$=:2
select obj#,implobj#,property, interface_version# from indtypes$ where obj#=:1
select obj#,oper#,bind#,property,filt_nam,filt_sch, filt_typ from indop$ where obj#=:1
select audit$,properties from type_misc$ where obj#=:1
select source from source$ where obj#=:1 order by line
select audit$ from library$ where obj#=:1
delete from xdb.xdb$xidx_param_t where userid = :1
select name from sys.obj$ where owner# = :1
delete from sysauth$ where grantee#=:1 or privilege#=:1
delete from proxy_info$ where client# = :1 or proxy# = :1
select grantor#,ta.obj#,o.type# from objauth$ ta, obj$ o where grantee#=:1 and ta.obj#=o.obj# group by grantor#,ta.obj#,o.type#
select distinct subscription_name, namespace from reg$ where user# = :1
select distinct location_name from reg$ minus select distinct location_name  from reg$ where user# != :1
select blocks,maxblocks,grantor#,priv1,priv2,priv3 from tsq$ where ts#=:1 and user#=:2
select name,online$,contents$,undofile#,undoblock#,blocksize,dflmaxext,dflinit,dflincr,dflextpct,dflminext, dflminlen, owner#,scnw
select  decode(u.type#, 2, u.ext_username, u.name), o.name, trigger$.sys_evts, trigger$.type#  from obj$ o, user$ u, trigger$  whe
delete from user_history$ where user# = :1
delete sys.streams$_prepare_ddl p  where ((p.global_flag = 1 and :1 is null) or         (p.global_flag = 0 and p.usrid = :2))
BEGIN
aw_drop_proc(ora_dict_obj_type, ora_dict_obj_name, ora_dict_obj_owner);
END;

declare
stmt varchar2(200);
cnt number;
BEGIN
if sys.dbms_standard.dictionary_obj_type = 'USER' THEN
stmt := 'DE

DELETE FROM SDO_GEOM_METADATA_TABLE  WHERE SDO_OWNER = :owner
DELETE FROM SDO_MAPS_TABLE  WHERE SDO_OWNER = :owner
DELETE FROM SDO_CACHED_MAPS_TABLE  WHERE SDO_OWNER = :owner
DELETE FROM SDO_STYLES_TABLE  WHERE SDO_OWNER = :owner
DELETE FROM SDO_THEMES_TABLE  WHERE SDO_OWNER = :owner
DELETE FROM SDO_LRS_METADATA_TABLE  WHERE SDO_OWNER = :owner
DELETE FROM SDO_TOPO_METADATA_TABLE  WHERE SDO_OWNER = :owner
DELETE FROM SDO_ANNOTATION_TEXT_METADATA  WHERE F_TABLE_SCHEMA = :owner
delete from user$ where name=:1
select spare2  from user$ where type# = 2 and ext_username = :1
select user# from user$ where type# = 3 and spare2 = :1
BEGIN DBMS_SESSION.set_sql_trace(sql_trace => FALSE); END;
alter session set sql_trace=false
ALTER SESSION SET sql_trace=TRUE
SELECT /*+ ALL_ROWS */ COUNT(*) FROM DBA_POLICIES V WHERE V.OBJECT_OWNER = :B3 AND V.OBJECT_NAME = :B2 AND (V.POLICY_NAME LIKE '%x
DELETE FROM DBMS_PARALLEL_EXECUTE_TASK$ WHERE TASK_OWNER# = :B1
delete from sys.streams$_propagation_process where source_queue_schema = :1
begin dbms_rule_adm.drop_evaluation_context(:1, true); end;
SELECT PRE_ID FROM DR$PREFERENCE WHERE PRE_OWNER# IN (SELECT USER# FROM SYS.USER$ WHERE NAME = :B1 )
DELETE FROM DR$SQE WHERE SQE_OWNER# IN (SELECT USER# FROM SYS.USER$ WHERE NAME = :B1 )
SELECT COUNT(*) FROM DEFPROPAGATOR WHERE USERNAME = :B1
DELETE FROM SYS.JOB$ WHERE :B1 = POWNER OR :B1 = COWNER OR :B1 = LOWNER
begin "SYS"."DBMS_REGISTRY_SYS"."DROP_USER"(:myuser); end;
DELETE FROM REGISTRY$ WHERE INVOKER# = :B1 OR SCHEMA# = :B1
SELECT CHARSETID FROM SYS.COL$ C, OBJ$ O, USER$ U WHERE CHARSETFORM = :B1 AND U.NAME='SYSTEM' AND U.USER#=O.OWNER# AND O.NAME='REP
DELETE FROM SYSTEM.REPCAT$_AUDIT_COLUMN WHERE SNAME = :B1
COMMIT
update xdb.xdb$acl set object_value = deletexml(object_value, '/acl/ace[principal="' || :1 || '"]', 'xmlns="http://xmlns.oracle.co
select s.xmldata.schema_url     from xdb.xdb$schema s, xdb.xdb$resource r     where r.xmldata.ownerid = :1     and r.xmldata.xmlre
delete from proxy_role_info$ where client# = :1 or proxy# = :1
delete from defrole$ where user#=:1

210 rows selected.

So a simple drop user command where a user has no privileges or objects generates 210 distinct sql statements. No wonder it takes a while to run sometimes.

I was quite pleased with this but feel it is worth sharing a couple of other ideas I had looked at before using the insert option of tkprof. I couldn’t find a utility that extracts only the SQL from a trace file, other than TOAD’s trace file analyser, so Norman Dunbar, a colleague, created an awk script. He also told me about a useful utility called a2p, part of a Perl installation, which converts awk scripts to Perl.

a2p sql_extract.awk > sql_extract.pl

The awk script will extract the SQL and indent it according to the depth parameter of the PARSING IN statements in the trace file.

# Extract the SQL statements from an Oracle Trace file.
#
# 11 March 2015.
#
# Scan for the PARSING IN CURSOR line.
# Print out each line after that, until you hit the END OF STMT line.
# Easy?

BEGIN {}
END {}

/^PARSING IN CURSOR/ {  \
        depth = substr($6, index($6, "=") + 1)
        not_at_end_yet = 1
        while (not_at_end_yet) {
                getline tmp
                if (tmp == "END OF STMT") {
                        not_at_end_yet = 0
                        break
                }
                if (depth == 0)
                        printf "%s\n", tmp
                else
                        printf "%*s %s\n", (depth*4), " ", tmp
        }
        printf "\n"
}

Execute as follows:

awk -f sql_extract.awk tracefile_name.trc > output_file.txt

and the a2p command gives the following perl script

#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
    if $running_under_some_shell;
        # this emulates #! processing on NIH machines.
        # (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
        # process any FOO=bar switches

# Extract the SQL statements from an Oracle Trace file.
#
# Scan for the PARSING IN CURSOR line.
# Print out each line after that, until you hit the END OF STMT line.
# Easy?

$[ = 1;                 # set array base to 1

while (<>) {
    ($Fld1,$Fld2,$Fld3,$Fld4,$Fld5,$Fld6) = split(' ', $_, -1);

    if (/^PARSING IN CURSOR/) {
        $depth = substr($Fld6, index($Fld6, '=') + 1);
        $not_at_end_yet = 1;
        while ($not_at_end_yet) {
            $tmp = &Getline1();
            if ($tmp eq 'END OF STMT') {
                $not_at_end_yet = 0;
                last;
            }
            if ($depth == 0) {
                printf "%s\n", $tmp;
            }
            else {
                printf "%*s %s\n", ($depth * 4), ' ', $tmp;
            }
        }
        printf "\n";
    }
}

sub Getline1 {
    local($_);
    if ($getline_ok = (($_ = <>) ne '')) {
        ;
    }
    $_;
}

So an interesting little exercise and another couple of tools in my armoury – the tkprof insert file, an awk script and how to convert awk to perl.


Security parameters in 11G and 12C


There are 5 parameters that are all prefixed with ‘sec’ in an 11g and 12c database. Actually that is a lie, because one is now deprecated in 12c. They are all, as you might guess, related to security. This post is about changes in the default values and some thoughts about whether or not each default is appropriate.

SEC_CASE_SENSITIVE_LOGON – default TRUE in 11gR1 and 11gR2, deprecated in 12c
SEC_MAX_FAILED_LOGIN_ATTEMPTS – default 10 in 11gR1 and 11gR2, 3 in 12c
SEC_PROTOCOL_ERROR_FURTHER_ACTION – default CONTINUE in 11gR1 and 11gR2, (DROP,3) in 12c
SEC_PROTOCOL_ERROR_TRACE_ACTION – default TRACE in 11gR1, 11gR2 and 12c
SEC_RETURN_SERVER_RELEASE_BANNER – default FALSE in 11gR1 and 11gR2, TRUE in 12c

 

SEC_CASE_SENSITIVE_LOGON
Let’s cover the deprecated one first. This came along in 11g (as all 5 did) and is now deprecated as 12c forces case-sensitive passwords. In parallel, the orapwd utility in 12c no longer has the ignorecase option either.
The one application that I know does not support case-sensitive passwords is EBS R12.1.1, but there is a patch (12964564) if you wish to upgrade to 12c (or even continue to run at 11gR1).

SEC_MAX_FAILED_LOGIN_ATTEMPTS
Now defaults to 3 in 12c, down from the 11g default of 10, which is not unreasonable. This allows a client to attempt a connection 3 times (across multiple accounts) before the connection is dropped.

SEC_PROTOCOL_ERROR_TRACE_ACTION
This takes action if bad packets are identified as coming in from a client. The default is TRACE and I would challenge that position. The other viable options are LOG, which produces a minimal log file plus an alert to the alert log, ALERT on its own, or NONE to do nothing. TRACE has the possibility of filling the disk up, which could have the same effect as the DoS attack the parameter is trying to stop, therefore I prefer the LOG option rather than just the ALERT option. However if you are alerting, ensure that you have written a trap in whatever you use to parse your alert logs to spit out the message to your monitoring screens. A nice by-product of this alert is that you can formally pass it on to the network team, and you have sufficient evidence to do so.
SEC_PROTOCOL_ERROR_FURTHER_ACTION
This is where it gets a bit tricky and we have another change in 12c. In 11g the default was CONTINUE, therefore if you had tracing set in the previous parameter you could end up with a lot of logging going on. I think the CONTINUE was the correct option in 11G as you do not want to stop valid connections into a production system because the packet might look bad – not without some degree of authorisation at least.
From 12c the default has changed to (DROP,3). This means the connection is dropped after 3 bad packets have arrived from a client, which sounds good as potentially a trace file will not become too big. However there is nothing stopping a client attempting many such connections, all with bad packets, which could potentially cause a DoS – not by using all your processes, but by filling your log area.
With this change of default I think it is even more important to know when connections are being dropped by the SEC_PROTOCOL_ERROR_TRACE_ACTION parameter and that is why I would suggest setting SEC_PROTOCOL_ERROR_FURTHER_ACTION to CONTINUE

SEC_RETURN_SERVER_RELEASE_BANNER
This parameter specifies whether the server returns complete database software information to unauthenticated clients. The default is FALSE in all versions, despite the 12c documentation stating that the default is now TRUE. Oracle have chosen that default as the whole point of security is not to give away any more information than you have to, and that cannot be argued with.

So my joint recommendations for these parameters are :

  1. Ensure SEC_RETURN_SERVER_RELEASE_BANNER is set to the default of FALSE on all databases
  2. Set SEC_PROTOCOL_ERROR_TRACE_ACTION to LOG and ensure that traps are in place to capture the alerts
  3. Set SEC_PROTOCOL_ERROR_FURTHER_ACTION to CONTINUE, or at least ensure that if you are dropping connections then that has been communicated to interested parties who may not be able to get to the service they need to. A good half-way house would be to use (DELAY,3) to delay a packet by 3 seconds but it is important that all support parties are aware if this is enabled.
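As a sketch of how those recommendations translate into commands (both parameters are dynamic; scope=both assumes an spfile):

-- check the current settings first
select name, value, isdefault from v$parameter where name like 'sec%';

-- recommendations 2 and 3
alter system set sec_protocol_error_trace_action   = 'LOG'      scope=both;
alter system set sec_protocol_error_further_action = 'CONTINUE' scope=both;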

Finally, I will be posting a follow-up blog on Friday which covers some discrepancies in the 12c dictionary and seems to be a major issue but I need to do some more research first.


Discrepancies in v$parameter default values in 12c


In my last blog about security parameters I mentioned I had found some oddities in the default values for parameters in 12.1.0.2, this is a more in-depth analysis of my findings.

Taking the parameter SEC_RETURN_SERVER_RELEASE_BANNER as an example.

Prior to 12c the default value for this parameter was ‘FALSE’, whereas the documentation for 12c (https://docs.oracle.com/database/121/REFRN/refrn10275.htm) states that the default is ‘TRUE’.

To confirm this, I made a connection to a 12c (12.1.0.2) database and ran the following query:

select name, value, default_value, isdefault
from   v$parameter
where  name = 'sec_return_server_release_banner';

NAME                                     VALUE                DEFAULT_VALUE        ISDEFAULT
---------------------------------------- -------------------- -------------------- ---------
sec_return_server_release_banner         FALSE                TRUE                 TRUE

After confirming that the parameter had not been explicitly set in the parameter file, or via an alter system/session command, we could see that the actual value held in VALUE does not match the value provided by DEFAULT_VALUE, nor does it match the value it should have been assigned according to the documentation.

The next check was to see if this was a rogue parameter, or an indication of a larger problem:

select count(*) from v$parameter where isdefault = 'TRUE' and value <> default_value;

  COUNT(*)
----------
       151
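The same predicate can be used to list the affected parameters rather than just count them:

select name, value, default_value
from   v$parameter
where  isdefault = 'TRUE'
and    value <> default_value
order  by name;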

The next step was to compare the VALUE and DEFAULT_VALUE held against the values expected in the documentation and also the default values for previous versions.

 

Parameter: optimizer_use_pending_statistics
Documentation: https://docs.oracle.com/database/121/REFRN/refrn10288.htm
v$parameter.value: FALSE
12c documentation default_value: TRUE
v$parameter.default_value:  TRUE
11g default value FALSE

 

Parameter: read_only_open_delayed
Documentation: https://docs.oracle.com/database/121/REFRN/refrn10179.htm
v$parameter.value: FALSE
v$parameter.default_value: TRUE
12c documentation default_value: TRUE
11g default value FALSE

 

 

Parameter: optimizer_dynamic_sampling
Documentation: https://docs.oracle.com/database/121/REFRN/refrn10140.htm
v$parameter.value: 2
v$parameter.default_value: 32
12c documentation default_value: 2
11g default value 2

 

In all the cases that were checked, the VALUE of an unaltered parameter at 12c matched that for 11g. In a number of these cases the 12c documentation provides the wrong default value.

In some cases it appeared that the default_value had been displaced by one position in the view and that the default_value is completely inappropriate for the parameter that it has been associated with:

       NUM NAME                       VALUE                     DEFAULT_VALUE               ISDEFAULT
---------- --------------------------- ------------------------- --------------------------- ---------
     2002 undo_management             AUTO                     NULL                       TRUE
     2003 undo_tablespace             UNDOTBS1                 AUTO                       FALSE

     2564 remote_dependencies_mode   TIMESTAMP                 NULL                       TRUE
     2565 utl_file_dir                                         timestamp                   TRUE

     2741 optimizer_features_enable   12.1.0.2                 ?/rdbms/admin/sql.bsq       TRUE
     2742 fixed_date                                           12.1.0.2                   TRUE

 

As this previously was an 11.2.0.4 database that had been upgraded to 12.1.0.2, the next check was to see if this issue was a result of a fault in the upgrade process, or due to fundamental issues with the 12c version. So two different 12c databases were checked, both of which had been originally built as 12.1.0.2 databases, and they exhibited exactly the same problems.

Document (13782248.8) on MOS covers the default_value column in v$parameter. It appears that the column was added to the v$parameter and v$system_parameter views in release 12.1.0.2 as part of an enhancement request. So this is the first version of the database to hold this additional column

Next was a check to see if the definition of the v$parameter view was incorrect at this version:

select view_definition
from   v$fixed_view_definition
where  view_name = 'GV$PARAMETER'; -- v$parameter is based on gv$parameter

select x.inst_id,
       x.indx+1,
       ksppinm,
       ksppity,
       ksppstvl,
       ksppstdvl,
       ksppstdfl,
       ksppstdf,
       decode(bitand(ksppiflg/256,1),1,'TRUE','FALSE'),
       decode(bitand(ksppiflg/65536,3),1,'IMMEDIATE',2,'DEFERRED',3,'IMMEDIATE','FALSE'),
       decode(bitand(ksppiflg/524288,1),1,'TRUE','FALSE'),
       decode(bitand(ksppiflg,4),4,'FALSE', decode(bitand(ksppiflg/65536,3), 0, 'FALSE', 'TRUE')),
       decode(bitand(ksppstvf,7),1,'MODIFIED',4,'SYSTEM_MOD','FALSE'),
       decode(bitand(ksppstvf,2),2,'TRUE','FALSE'),
       decode(bitand(ksppilrmflg/64, 1), 1, 'TRUE', 'FALSE'),
       decode(bitand(ksppilrmflg/268435456,1), 1, 'TRUE', 'FALSE'),
       ksppdesc,
       ksppstcmnt,
       ksppihash,
       x.con_id
from   x$ksppi x,
       x$ksppcv y
where  (x.indx = y.indx)
and    bitand(ksppiflg,268435456) = 0
and    ((translate(ksppinm,'_','#') not like '##%')
and    ((translate(ksppinm,'_','#') not like '#%')
or     (ksppstdf = 'FALSE')
or     (bitand(ksppstvf,5) > 0))

So v$parameter is built up from two x$ tables: x$ksppi, which holds the name and description of the parameter, and x$ksppcv, which holds the actual value, default_value and isdefault information:

SQL> desc x$ksppcv
Name                                     Null?   Type
----------------------------------------- -------- ----------------------------
ADDR                                               RAW(8)
INDX                                              NUMBER   --- for joining to x$ksppi
INST_ID                                           NUMBER   --- inst_id
CON_ID                                             NUMBER   --- container id
KSPPSTVL                                          VARCHAR2(4000)   --- value
KSPPSTDVL                                         VARCHAR2(4000) --- display_value
KSPPSTDFL                                         VARCHAR2(255) --- default_value
KSPPSTDF                                          VARCHAR2(9)   --- isdefault
KSPPSTVF                                           NUMBER         --- basis for ismodified & isadjusted
KSPPSTCMNT                                         VARCHAR2(255) --- update_comment

So x$ksppcv looks like a good candidate for being the culprit, so ignoring the hidden underscore parameters:

select count(*)
from   x$ksppcv a,
       x$ksppi b
where  a.indx = b.indx
and    KSPPSTDF = 'TRUE'
and    KSPPSTVL <> KSPPSTDFL
and    KSPPINM not like '\_%' escape '\';

  COUNT(*)
----------
       151

 

This shows that the data loaded into this table is incorrect and matches the discrepancies found for v$parameter. Taking a single example – the default for UTL_FILE_DIR showing as ‘timestamp’ – surely demonstrates that there is an issue here.

I have raised an SR regarding this and I will update the post when I get more information; however, on the face of the findings above it does seem that this is a major mess-up by Oracle.


One million blog views reached


This morning I will pass the 1 million mark for hits on this blog. My first post was written in 2008 and I remember being quite pleased with myself when I reached 5000 hits, I never dreamt of getting 1 million.

The post with the most number of hits is https://jhdba.wordpress.com/2009/05/19/purging-statistics-from-the-sysaux-tablespace/

One that I still get comments on now, saying how well it explains what the SQL92_SECURITY parameter actually does, is

https://jhdba.wordpress.com/2009/11/23/what-does-the-sql92_security-parameter-actually-do/

I think my current favourite is a piece that I wrote with no preparation in an hour about what I think a good DBA Manager should be doing. It does not mean I am such a person but it does demonstrate what I think is important about that role

https://jhdba.wordpress.com/2013/10/10/the-10-commandments-of-being-a-good-dba-manager/

I still use many of the posts myself because the main reason for starting the blog was to capture issues and problems I had come across. One script I find very useful is

https://jhdba.wordpress.com/2011/02/28/scripts-to-resize-standby-redolog-files/

and https://jhdba.wordpress.com/2012/03/06/the-mother-of-all-asm-scripts/ is a standard script we use a lot at my site and is invaluable

Thanks to everyone for reading and being followers. I will continue to write new posts just as long as I keep on doing technical hands-on work although I am spending more time managing than doing these days. I still really enjoy being an Oracle DBA and I have never regretted moving back into that area after spending several years as a technical project manager – which I enjoyed a lot and I still do some PM’ing  now. I was the saddo who was spotted on a beach in Majorca on a family holiday reading Jonathan Lewis’s 8i book



Cloud agent set to DEBUG causing out of memory errors


The following technical detail was put together by a colleague, John Evans. I have taken it, with his permission, and wrapped some more detail around it as it seemed to be of real value to anybody who might have upgraded an agent to 12.1.0.4.

Following an upgrade of the EM agent from 12.1.0.2 (or 12.1.0.3) to 12.1.0.4 after about 90 days of usage we saw a number of agents failing with out of memory errors.

We traced this down to a line in the agent properties file where the trace level parameter Logger.sdklog.level was set to DEBUG rather than INFO.

Our view is that the property was set to DEBUG in 12.1.0.2. However, the upgrade to 12.1.0.4 brought in more features and thus more checks on what the agent was doing, and the out of memory errors only manifested themselves about 90 days after we had done the upgrade. A clean 12.1.0.3 or 12.1.0.4 install has INFO as the default; it is only an issue when upgrading from 12.1.0.2 to 12.1.0.4.

#

#### Tracing related properties
#

#
# emagent perl tracing levels
# supported levels: DEBUG, INFO, WARN, ERROR
# default level is WARN
#
#
EMAGENT_PERL_TRACE_LEVEL=INFO

Logger.log.filename=gcagent.log
Logger.log.level=DEBUG
Logger.log.totalSize=1000
Logger.log.segment.count=20

Logger.err.filename=gcagent_errors.log
Logger.err.level=ERROR
Logger.err.totalSize=100
Logger.err.segment.count=5

Logger.sdklog.filename=gcagent_sdk.trc
Logger.sdklog.level=DEBUG *********************************************
Logger.sdklog.totalSize=25
Logger.sdklog.segment.count=5
Logger.sdklog.logger=SDK

Logger.mdu.filename=gcagent_mdu.log
Logger.mdu.level=INFO
Logger.mdu.totalSize=100
Logger.mdu.segment.count=5
Logger.mdu.logger=Mdu

 

We changed this to INFO, which is how it was before we had done the agent upgrade, and we were comfortable that this resolved our issues, as the two graphs below demonstrate.

[Graphs: agent dispatched actions before and after resetting Logger.sdklog.level to INFO]

Note the slow increase in dispatched actions after the parameter is set and the agent is reloaded, compared to before, when it climbs fairly rapidly.

However we had 1100 agents to change and we wanted to do it centrally rather than agent by agent. We raised this as an SR with Oracle and, after a bit of to-ing and fro-ing where they suggested changing a number of kernel parameters (on 1100 servers!!!) and we point-blank insisted they answer the question we were asking, they came back with an internal bug:
Bug 16492328 : PSR:NGAGENT:CHANGE LOGGER.SDKLOG.LEVEL FROM DEBUG TO INFO BY DEFAULT

The SR response was as follows – which would work but was a lot of effort

In 12.1.0.2, if this property contains DEBUG, then it will be carried forward to 12.1.0.4.
If you have fresh 12.1.0.3 agent and if you upgrade this agent to 12.1.0.4, then property value should be INFO
From the uploaded emd.properties file, only following parameter shows debug enabled:
============
Logger.sdklog.level=DEBUG
===========

To turn off debug on the agent, the following commands can be used:
/bin> ./emctl setproperty agent -name "Logger.sdklog.level" -value "INFO"
/bin> ./emctl reload

However, to turn off debug for all the agents at one time, you need to run the above commands as part of an OS command job from the console.

 

Should you ever want or need to change an agent property en masse and don’t want to sign in to every box, then you can either use the agent parameters page or use emcli.

Using Cloud Control – Setup / manage Cloud Control / Agents

Highlight the agents you are interested in and select Properties. From that screen select the Parameters tab, which takes you to the screen below, and enter the parameter and value.

[Screenshots: Cloud Control agent properties page with the Logger.sdklog.level parameter and INFO value entered]

 

 

To run from emcli take the following steps:-

From the OMS repository select all the agent installs

select 'emcli set_agent_property -agent_name="'
       || target_name
       || '" -name="Logger.sdklog.level" -value=INFO'
from   SYSMAN.mgmt_targets
where  target_type = 'oracle_emd'
order  by 1;

Which produces 1100 lines similar to the following

emcli set_agent_property -agent_name="server.domain:1830" -name="Logger.sdklog.level" -value=INFO

If you want to add a new property then just append -new to the generated command.

Spool the output to a file, log in to emcli (normally on the OMS repository server), chmod +x the file… then ./file

The main problem with either approach is that the agent needs to be reloaded for the new parameters to come into effect, but cloud control requires credentials to be enabled for that, which is a nightmare for my current site at least. I do think there should be a tick box in cloud control to allow a reload if you want to select that option.

The alternative is to wait until the agent is restarted or use some form of batch routine to connect to all the servers and reload the agent using the emcli command line.
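A very rough sketch of such a batch routine, assuming key-based ssh access to each agent host and a hypothetical agent_hosts.txt file containing a hostname and agent home per line:

# reload each agent so the new property takes effect
while read host agent_home; do
  ssh "$host" "$agent_home/bin/emctl reload agent"
done < agent_hosts.txt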

 

 

 

 


Large audit trail table causes high db activity – especially when using OEM


On various, apparently unrelated databases we have noticed high activity that seems to be associated with the query below. The quieter the database, the more the query stands out.

 

SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS failed_count, TO_CHAR(MIN(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS first_occur_time, TO_CHAR(MAX(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS last_occur_time
FROM sys.dba_audit_session
WHERE returncode != 0 AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')

[Screenshot: OEM activity page showing the failed logins metric query]

The audit table in question contains 73M records and the query above takes 8 minutes 7 seconds to run against it.

The query is from an OEM metric and we found a bug in MOS – High Cpu Utilization from agent perl process running failedLogin.pl for audit_failed_logins metric (Doc ID 1265699.1).

Two reasons are shown for the cause

This is caused by unpublished BUG’s 9827824, 9556885, 7313103, and 7633167

There are 2 sides to this issue as follows:

1. The query run by the EM agent uses a TO_CHAR conversion in the where clause. Since this is converting the dates to characters, any index or optimization done at the table level will be ignored by the Oracle CBO (cost based optimizer).  This has been reported as unpublished bug 9556885. This bug is fixed in the 12.1.0.1 cloud control agent, and in the Grid Control 11.1.0.3 PSU3 agent.  Or there is a one off patch that can be applied to the 10.2.0.5 grid control agent (or 11.1.0 grid control agent)

2. If there is a large number of xml audit file in adump, the SQL will take longer due to unpublished BUG’s 7633167 and 7313103.   These are database bugs, which are manifested in EM.  This means that it would still be possible to see this behaviour in 12c.  (but only step 2 of the workaround below would apply to 12c).

A fix is provided as part of the 11.1.0.3 PSU3 agent patch. Alternatively, apply one-off Patch 9556885 to the Grid Control management agent that is monitoring the respective database, per the patch readme. This will fix the query that Grid Control uses for the metric.

However we are at agent 12.1.0.4 so the patch is not applicable. I can therefore think of 4 ways we could address this:

1) Apply the individual patch 9556885 to each agent, which could be a bit of a nightmare when we have 1100 agents

2) Index the audit table (aud$). Not particularly standard and could cause an issue in a future upgrade, so a good solution but not perfect

3) Remove the metric from the OEM template. Not ideal as it is one that is useful to us and required by security standards

4) Reduce the number of records in the table. We have a site standard to maintain 366 days worth of audit data for every database and we purge down daily to keep on top of that. However we have several databases with a high frequency of login/logout activity, which is application driven, where we see large audit tables, and that seems the best way to manage them. It should be noted that high audit activity does not always seem to correlate with large-volume databases

In this case reducing the data to 90 days (and keeping the previous 9 months’ worth in a parallel table) reduced the records down to 18M and the query took less than 2 minutes. If we put a process in place to move aud$ records from the aud$ table to another table every 3 months we should be able to manage the problem, as sketched below.
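This is only a sketch of that idea, using a hypothetical archive table name and the NTIMESTAMP# column of AUD$; DBMS_AUDIT_MGMT is the supported purge mechanism and direct DML on AUD$ should be treated with care:

-- archive rows older than 90 days, then purge them from AUD$
create table system.aud$_hist as
  select * from sys.aud$ where ntimestamp# < systimestamp - interval '90' day;

delete from sys.aud$ where ntimestamp# < systimestamp - interval '90' day;
commit;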

So whilst this issue was referenced in MOS as a bug, the real issue is that we were keeping too much data in the audit table, and we have to take steps to manage that data or suffer the consequences.

The lesson learnt from this is to monitor the growth of audit data and manage it so that it does not have an unforeseen impact on normal usage of the system.

 

 

 


Issues around recreating a standby database in 12c

$
0
0

When you create a database in 12c it now creates a resource in HAS/CRS, which isn’t a problem.

However, when you come to recreate a standby database, probably because it has such a big lag that it is quicker to recreate it than to recover the log files, you will see the following error message:

 

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/22/2015 15:45:57
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on clone_default channel at 07/22/2015 15:45:57
RMAN-11003: failure during parse/execution of SQL statement: alter system set  db_unique_name =  'STAN' comment= '' scope=spfile
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

It did stump me for a while and I thought it was around having files in the ASM DATA group from the previous incarnation but removing them did not solve it.

The word ‘resource’ gave me a clue and looking at the resources using srvctl I could see that the database STAN already existed

srvctl status database -d STAN

Database is not running.

So the fix was obvious – and indeed the error message was accurate.

srvctl remove database -d STAN

Remove the database STAN? (y/[n]) Y


Startup PDB databases automatically when a container is started – good idea?


I posted a note on the Oracle-L mailing list around pluggable databases and why they are not opened automatically by default when the container database is opened. The post is below.

I am trying to get my head around the thing about how pluggable databases react after the container database is restarted.

Pre 12.1.0.2 it was necessary to put a startup trigger in to run a ‘alter pluggable database all open;’ command to move them from mounted to open.

Now 12.1.0.2 allows you to save a state in advance using ‘alter pluggable database xxx save state’ which does seem a step forward

However, why would the default not be to start all the pluggable databases (or services as they are seen) rather than leave them in a mounted state? Obviously Oracle have thought about this and changed the trigger method, maybe due to customer feedback, but I wonder why they have not gone the whole hog and started the services automatically.

I would much prefer to have the default to be up and running rather than relying on the fact that I have saved the state previously
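For reference, a minimal sketch of the two approaches (the trigger name here is my own invention):

-- Pre-12.1.0.2: open all PDBs from a startup trigger
CREATE OR REPLACE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'alter pluggable database all open';
END;
/

-- 12.1.0.2 onwards: open the PDBs once and remember that state
alter pluggable database all open;
alter pluggable database all save state;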

I did get some interesting and very helpful responses. Jared Still made a couple of good points. The first being that the opening time for all the pluggable databases might be very long if you had 300 of them. That blew my mind a little and I must admit that I had considered scenarios where you might have half a dozen maximum, not into the hundreds.

I did a little test on a virtual 2 CPU, 16Gb server, already loaded with 6 running non container databases. I created 11 pluggables (I have created a new word there) from an existing one – each one took less than 2 minutes

create pluggable database JH3PDB from JH2PDB;
create pluggable database JH4PDB from JH2PDB;
create pluggable database JH5PDB from JH2PDB;
create pluggable database JH6PDB from JH2PDB;
create pluggable database JH7PDB from JH2PDB;
create pluggable database JH8PDB from JH2PDB;
create pluggable database JH9PDB from JH2PDB;
create pluggable database JH10PDB from JH2PDB;
create pluggable database JH11PDB from JH2PDB;
create pluggable database JH12PDB from JH2PDB;
create pluggable database JH13PDB from JH2PDB;

 

They all default to the MOUNTED state so I then opened them all

SQL> alter pluggable database all open

Elapsed: 00:06:23.54

SQL> select con_id,dbid,NAME,OPEN_MODE from v$pdbs;
   CON_ID       DBID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2   99081426 PDB$SEED                      READ ONLY
         3 3520566857 JHPDB                         READ WRITE
         4 1404400467 JH1PDB                         READ WRITE
         5 1704082268 JH2PDB                         READ WRITE
         6 3352486718 JH3PDB                         READ WRITE
         7 2191215773 JH4PDB                         READ WRITE
         8 3937728224 JH5PDB                         READ WRITE
         9 731805302 JH6PDB                         READ WRITE
       10 1651785020 JH7PDB                         READ WRITE
       11 769231648 JH8PDB                         READ WRITE
       12 3682346625 JH9PDB                         READ WRITE
       13 2206923020 JH10PDB                       READ WRITE
       14 281114237 JH11PDB                       READ WRITE
       15 2251469696 JH12PDB                       READ WRITE
       16 260312931 JH13PDB                       READ WRITE

 

So 6:30 to open 11 PDBs might lead to a very long time for a very large number. That really answered the question I had asked. However there were more valuable nuggets to come.

Stefan Koehler pointed to a OTN community post where he advocated that the new (12.1.0.2) ‘save state’ for PDBs should also be extended to PDB services so that the service is started when the PDB is opened rather than having to use a custom script or a trigger. That seems a very reasonable proposal to me and will get my vote

Jared had an enhancement idea: instead of having a saved state, which I must admit is a bit messy, why not a PDB control table with a START_MODE column?

Possible values

– NEVER:  Never open the pdb at startup

– ALWAYS: Always start this pdb.

– ON_DEMAND:  Start this pdb when someone tries to connect to it.

And then some mechanism to override this.

‘startup open cdb nomount nopdb’ for instance.

It does sound an interesting idea, especially the ON_DEMAND option. I would have thought that if you were thinking along those lines a logical extension might be to auto-unmount PDBs when they have not been used for a while, again controllable by a table.

 


Creating standby database inc DG Broker and 12c changes


I thought I would refresh my knowledge of creating a standby database and at the same time include some DataGuard Broker configuration which also throws in some changes that came along with 12c

Overview

Database Name QUICKIE host server 1 ASM disk

Database Name STAN host server 2 ASM disk

Create a standby database STAN using ACTIVE DUPLICATE from the source database QUICKIE

 

QUICKIE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = QUICKIE)
)
)

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

server2 – listener.ora – note I have selected 1524 as that port is not currently in use and I do not want to interfere with any existing databases

 

LISTENERCLONE =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
)

(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = STAN)
(ORACLE_HOME = /app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = STAN)
)
)

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF

server2 – tnsnames.ora

STAN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = STAN)
)
)

LISTENERCLONE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1524))
)
(CONNECT_DATA =
(SERVICE_NAME = STAN)
)
)

 

  1. Start clone listener on server2

lsnrctl start LISTENERCLONE

Then check that the STAN alias resolves (the output below is from tnsping):

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 18-MAY-2015 09:19:27

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:

/app/oracle/product/12.1.0.2/grid/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias

Attempting to contact (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1524)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = STAN)))

OK (10 msec)

 

  2. Create a pfile on server2 – $ORACLE_HOME/dbs/initSTAN.ora

 

db_unique_name=STAN

compatible='12.1.0.2'

db_name='QUICKIE'

local_listener='server2:1524'

 

 

  3. Create password file for STAN (use SOURCE DB SYS password)

 

orapwd file=orapwQUICKIE password=pI7KU4ai

 

or copy the source passwd file

Create standby logs on the primary database if they do not exist already:

alter database add standby logfile thread 1 group 4 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 5 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 6 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;

alter database add standby logfile thread 1 group 7 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 50M;
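A quick sanity check that the standby logs are in place (a sketch, not taken from the original run):

select group#, thread#, bytes/1024/1024 as size_mb, status from v$standby_log;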

 

 

  4. startup database in nomount on standby server
[oracle@server2][STAN] $sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 22 15:09:28 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount

ORACLE instance started.

 

  5. login to RMAN console on source server via
rman target sys/pI7KU4ai auxiliary sys/pI7KU4ai@STAN

connected to target database: QUICKIE (DBID=4212874924)

connected to auxiliary database: QUICKIE (not mounted)

 

  6. run restore script

 

run {

allocate channel ch1 type disk;

allocate channel ch2 type disk;

allocate auxiliary channel aux1 type disk;

allocate auxiliary channel aux2 type disk;

duplicate target database for standby from active database

SPFILE

set db_unique_name='STAN'

set audit_file_dest='/app/oracle/admin/STAN/adump'

set cluster_database='FALSE'

set control_files='+DATA/STAN/control01.ctl','+FRA/STAN/control02.ctl'

set db_file_name_convert='QUICKIE','STAN'

set local_listener='server2:1522'

set log_file_name_convert='QUICKIE','STAN'

set undo_tablespace='UNDOTBS1'

set audit_trail='DB'

nofilenamecheck;

}

 

If you get the error RMAN-05537: DUPLICATE without TARGET connection when auxiliary instance is started with spfile cannot use SPFILE clause then either remove the SPFILE parameter from the RMAN duplicate line above or start the STAN database with a parameter file not a spfile.

 

In 12c it seems to create a spfile after starting with an init.ora file unless you use the syntax

startup nomount pfile='/app/oracle/product/12.1.0.2/dbhome_1/dbs/spfileSTAN.ora'

 

I also got an error around DB_UNIQUE_NAME, which is new in 12c. This is because the standby existed previously (as I re-tested my instructions for this document) and it creates a HAS/CRS resource for the database name

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of Duplicate Db command at 07/22/2015 15:45:57

RMAN-05501: aborting duplication of target database

RMAN-03015: error occurred in stored script Memory Script

RMAN-03009: failure of sql command on clone_default channel at 07/22/2015 15:45:57

RMAN-11003: failure during parse/execution of SQL statement: alter system set db_unique_name = 'STAN' comment= '' scope=spfile

ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

 

The fix is to remove that resource

srvctl remove database -d STAN

 

 

  7. Set parameters back on primary and standby and restart the databases to ensure that they are picked up
alter system reset log_archive_dest_1;

alter system reset log_archive_dest_2;

Set parameter on standby

alter system set local_listener='server2:1522' scope=both;

 

 

  8. Create the Dataguard Broker configuration at this point. Run from the primary, although it can be done on either

 

Sometimes it is best to stop/start the DG Broker if it has already been created – after deleting the dr files as well

alter system set dg_broker_start=FALSE;

alter system set dg_broker_start=TRUE;

dgmgrl /

create configuration 'DGConfig1' as primary database is 'QUICKIE' connect identifier is QUICKIE;

Configuration "DGConfig1" created with primary database "QUICKIE"

add database 'STAN' as connect identifier is 'STAN' maintained as physical;

Database "STAN" added

edit database 'QUICKIE' set property 'DGConnectIdentifier'='QUICKIE';

edit database 'STAN' set property 'DGConnectIdentifier'='STAN';

The next 2 commands are required if you are not using Port 1521.

Assuming you are not using Oracle Restart (which is now deprecated anyway), you also require the static entries (STAN_DGMGRL and QUICKIE_DGMGRL in this case) to be defined in your listener.ora files.
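As an example only (names and paths would need adjusting to your environment), the static entry for STAN on server2 could be added to the existing listener.ora along these lines:

SID_LIST_LISTENERCLONE =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STAN_DGMGRL)
      (ORACLE_HOME = /app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = STAN)
    )
  )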

edit database 'STAN' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=STAN_DGMGRL)(INSTANCE_NAME=STAN)(SERVER=DEDICATED)))';

edit database 'QUICKIE' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server1)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=QUICKIE_DGMGRL)(INSTANCE_NAME=QUICKIE)(SERVER=DEDICATED)))';

The below entries seemed to be picked up by default but it is worth checking and correcting with the commands below if necessary.


edit database 'QUICKIE' set property 'StandbyFileManagement'='AUTO';

edit database 'QUICKIE' set property 'DbFileNameConvert'='STAN,QUICKIE';

edit database 'QUICKIE' set property 'LogFileNameConvert'='STAN,QUICKIE';

edit database 'STAN' set property 'StandbyFileManagement'='AUTO';

edit database 'STAN' set property 'DbFileNameConvert'='QUICKIE,STAN';

edit database 'STAN' set property 'LogFileNameConvert'='QUICKIE,STAN';

 

Now to start the broker

 

enable configuration

show configuration

show database verbose 'QUICKIE'

show database verbose 'STAN'

validate database verbose 'QUICKIE'

validate database verbose 'STAN'

 

 

Let’s try a switchover. You need to have SUCCESS as the final line of show configuration before any switchover will work. You need to be connected as SYS/password in DGMGRL, not using DGMGRL /. The latter uses OS authentication and the former is database authentication.

DGMGRL> switchover to 'STAN';

Performing switchover NOW, please wait...

Operation requires a connection to instance "STAN" on database "STAN"

Connecting to instance "STAN"...

Connected as SYSDBA.

New primary database "STAN" is opening...

Oracle Clusterware is restarting database "QUICKIE" ...

Switchover succeeded, new primary is "STAN"


However when switching back the new primary QUICKIE opened but STAN hung

New primary database "QUICKIE" is opening...

Oracle Clusterware is restarting database "STAN" ...

shut down instance "STAN" of database "STAN"

start up instance "STAN" of database "STAN"

 

The database startup has hung and eventually times out. This is an issue around Oracle Restart which is now deprecated anyway

On the primary we can see a configuration for QUICKIE but there is not one on the standby for STAN

$srvctl config database -d STAN

PRCD-1120 : The resource for database STAN could not be found.

PRCR-1001 : Resource ora.stan.db does not exist

 

srvctl add database -d STAN -oraclehome '/app/oracle/product/12.1.0.2/dbhome_1' -role 'PHYSICAL_STANDBY'

 

Re-run the switchover and all should be well.

 

 

 

 

 

 

 

 

 


Migrating tablespaces across endian platforms


PLATFORM_ID PLATFORM_NAME          ENDIAN_FORMAT
----------- ---------------------- -------------
          4 HP-UX IA (64-bit)      Big
         13 Linux x86 64-bit       Little
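The endian formats shown above can be confirmed with a query against v$transportable_platform, for example:

select platform_id, platform_name, endian_format
from   v$transportable_platform
where  platform_id in (4, 13);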

 

Set the tablespace EXAMPLE read only

 

 alter tablespace example read only;

Tablespace altered.

select file_name from dba_data_files;

+DATA/hpuxdb/datafile/example.352.886506805

EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'EXAMPLE', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SELECT * FROM transport_set_violations;

no rows selected

Export the data using the keywords transport_tablespaces

 

expdp directory=data_pump_dir transport_tablespaces=example dumpfile=hpux.dmp logfile=hpux.log

Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01": /******** AS SYSDBA

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table SYS.SYS_EXPORT_TRANSPORTABLE_01 successfully
loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is: /app/oracle/admin/HPUXDB/dpdump/hpux.dmp

******************************************************************************

Datafiles required for transportable tablespace EXAMPLE: +DATA/hpuxdb/datafile/example.352.886506805

Job SYS_EXPORT_TRANSPORTABLE_01 successfully completed at 13:47:55

sftp the export dump file to the target server – I am using the data_pump_dir directory again.

You also need to copy the datafile(s) of the tablespaces you are migrating.

Mine is on ASM – switch to the ASM database and run asmcmd. I am copying out to /tmp and then copying it across to the target server on /tmp for now

 

ASMCMD> cp '+DATA/hpuxdb/datafile/example.352.886506805' '/tmp/example.dbf'

copying +DATA/hpuxdb/datafile/example.352.886506805 -> /tmp/example.dbf

ASMCMD>

 

sftp> cd /tmp

sftp> put /tmp/example.dbf /tmp/example.dbf

Uploading /tmp/example.dbf to /tmp/example.dbf

/tmp/example.dbf                                                         100% 100MB 11.1MB/s 11.8MB/s   00:09

Max throughput: 13.9MB/s

 

Create users who own objects in the tablespace to be migrated. Note that their default tablespace (EXAMPLE) does not exist as yet.

create user sh identified by sh;

grant connect,resource to sh;
grant unlimited tablespace to sh;
grant create materialized view to sh;

create user oe identified by oe;
grant connect,resource to oe;
grant unlimited tablespace to oe;

create user hr identified by hr;
grant connect,resource to hr;
grant unlimited tablespace to hr;

create user ix identified by ix;
grant connect,resource to ix;
grant unlimited tablespace to ix;

create user pm identified by pm;
grant connect,resource to pm;
grant unlimited tablespace to pm;

create user bi identified by bi;
grant connect,resource to bi;
grant unlimited tablespace to bi;

 

CONVERT TABLESPACE EXAMPLE

TO PLATFORM 'Linux x86 64-bit'

FORMAT='/tmp/%U';

 

$rmant

Recovery Manager: Release 11.1.0.7.0 - Production on Mon Aug 3 13:54:42 2015

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: HPUXDB (DBID=825022061)
using target database control file instead of recovery catalog

 

RMAN> CONVERT TABLESPACE EXAMPLE
TO PLATFORM 'Linux x86 64-bit'
FORMAT='/tmp/%U';

Starting conversion at source at 2015-08-03:13:56:18
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00005 name=+DATA/hpuxdb/datafile/example.352.886506805
converted datafile=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
Finished conversion at source at 2015-08-03:13:56:21

sftp> put /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2
Uploading /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2 to /tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2

impdp directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles='/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2'

Import: Release 11.2.0.4.0 - Production on Mon Aug 3 13:24:06 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management and OLAP options
Master table SYS.SYS_IMPORT_TRANSPORTABLE_01 successfully loaded/unloaded

Starting SYS.SYS_IMPORT_TRANSPORTABLE_01 /******** AS SYSDBA directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_02qdm380

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT

ORA-39083: Object type OBJECT_GRANT failed to create with error:

ORA-01917: user or role 'BI' does not exist

Failing sql is:

GRANT SELECT ON OE.CUSTOMERS TO BI;

ORA-39083: Object type OBJECT_GRANT failed to create with error:

ORA-01917: user or role 'BI' does not exist

Failing sql is:

GRANT SELECT ON OE.WAREHOUSES TO BI;

ORA-31685: Object type MATERIALIZED_VIEW:SH.CAL_MONTH_SALES_MV; failed due to insufficient privileges. Failing sql is:

CREATE MATERIALIZED VIEW SH.CAL_MONTH_SALES_MV (CALENDAR_MONTH_DESC, DOLLARS) USING (CAL_MONTH_SALES_MV(9, 'HPUXDB', 2, 0, 0, SH.TIMES, '2015-07-31 11:55:08', 8, 70899, '2015-07-31 11:55:09', '', 1, '0208', 822277, 0, NULL, 1, "SH", "SALES", '2015-07-31 11:55:08', 33032, 70841, '2015-07-31 11:55:09', '', 1, '88', 822277, 0, NULL), 1183809, 9, ('1950-01-01 12:00:00', 21,

I dropped the users in the example schema and dropped the example tablespace, then recreated the users, this time including a BI user, and granted CREATE MATERIALIZED VIEW to SH. Then I re-ran the import.

 

impdp directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles='/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2'

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, Automatic Storage Management and OLAP options

Master table SYS.SYS_IMPORT_TRANSPORTABLE_01 successfully loaded/unloaded

Starting SYS.SYS_IMPORT_TRANSPORTABLE_01/******** AS SYSDBA directory=data_pump_dir dumpfile=hpux.dmp logfile=linux.log transport_datafiles=/tmp/data_D-HPUXDB_I-825022061_TS-EXAMPLE_FNO-5_04qdm5k2

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type TRANSPORTABLE_EXPORT/INDEX

Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT

Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/COMMENT

Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT

Processing object type TRANSPORTABLE_EXPORT/TRIGGER

Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX

Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX

Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX

Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

Job SYS.SYS_IMPORT_TRANSPORTABLE_01 completed at Mon Aug 3 14:04:42 2015 elapsed 0 00:00:52

 

Post Migration

alter tablespace example read write;

Alter the users created earlier to have their default tablespace as EXAMPLE. The imported objects are in EXAMPLE but new ones will go to USERS or whichever the default tablespace is set to.

Remove all dmp files and interim files that were created (in /tmp on both servers in my demo).

Migrating directly from ASM to ASM

For ease in the example above I migrated out from ASM to a filesystem. Here I will demonstrate a copy going from ASM to ASM.

 

create user test identified by xxxxxxxxx default tablespace example1;

User created.

grant connect, resource to test;

Grant succeeded.

grant unlimited tablespace to test;

Grant succeeded.

connect test/xxxxxxxx

Connected.

create table example1_data tablespace example1 as select * from all_objects;

Table created.

select segment_name, tablespace_name from user_segments;

SEGMENT_NAME                   TABLESPACE_NAME
------------------------------ ------------------------------
EXAMPLE1_DATA                  EXAMPLE1

 

EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'EXAMPLE1', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SELECT * FROM transport_set_violations;

no rows selected

alter tablespace example1 read only;

Now export the metadata

expdp …..

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK

Processing object type TRANSPORTABLE_EXPORT/TABLE

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:

/app/oracle/admin/HPUXDB/dpdump/hpux.dmp

******************************************************************************

Datafiles required for transportable tablespace EXAMPLE1:

+DATA/hpuxdb/datafile/example1.1542.887621739

Job SYS.SYS_EXPORT_TRANSPORTABLE_01 successfully completed at 09:48:33

 

Now move the datafile to the target ASM environment. You can copy the dmp file in the same manner if you wish

ASMCMD [+] > cp --port 1522 sys@server.+ASM:+DATA/hpuxdb/datafile/example1.1542.887621739 +DATA/linuxdb/example1

Enter password: **********

copying server:+DATA/hpuxdb/datafile/example1.1542.887621739 -> +DATA/linuxdb/example1

 

The only problem with copying from ASM to ASM is that the physical file is not located in the directory you copied it to. It is actually stored in the +DATA/ASM/DATAFILE directory (rather than the LINUXDB directory):

 

ASMCMD [+data/linuxdb] > ls -l
Type           Redund  Striped  Time             Sys  Block_Size  Blocks     Bytes     Space  Name
                                                 Y                                            CONTROLFILE/
                                                 Y                                            DATAFILE/
                                                 Y                                            ONLINELOG/
                                                 Y                                            PARAMETERFILE/
                                                 Y                                            TEMPFILE/
DATAFILE       UNPROT  COARSE   AUG 13 13:00:00  N          8192    6401  52436992  53477376  example1 => +DATA/ASM/DATAFILE/example1.362.887637415
PARAMETERFILE  UNPROT  COARSE   AUG 12 22:00:00  N           512       5      2560   1048576  spfileLINUXDB.ora => +DATA/LINUXDB/PARAMETERFILE/spfile.331.886765295

Move it to the correct folder using the cp command within asmcmd and then rm it from the original folder, for example:
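A rough sketch, using the file and alias names from the listing above (the target alias name is my own choice; verify with ls before removing anything):

ASMCMD> cp +DATA/ASM/DATAFILE/example1.362.887637415 +DATA/linuxdb/datafile/example1.dbf
ASMCMD> rm +DATA/linuxdb/example1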


Using DBMS_PARALLEL_EXECUTE


DBMS_PARALLEL_EXECUTE

We have a number of updates to partitioned tables that are run from within pl/sql blocks which have either an execute immediate ‘alter session enable parallel dml’ or execute immediate ‘alter session force parallel dml’ in the same pl/sql block. It appears that the alter session is not having any effect as we are ending up with non-parallel plans. When the same queries are run outside pl/sql either in sqlplus or sqldeveloper sessions the updates are given a parallel plan. We have a simple test pack that we have used to prove that this anomaly takes place at 11.1.0.7 (which is the version of the affected DB) and at 12.1.0.2 (to show that it is not an issue with just that version).
It appears that the optimizer is not aware of the fact that the alter session has been performed. We have also tried performing the alter session statement outside of the pl/sql block, i.e. in the native sqlplus environment; that also does not result in a parallel plan.

Let me show a test case

Firstly we tried an anonymous pl/sql block with an execute immediate to force parallel dml for the session:

create table target_table

(

c1     number(6),

c2 varchar2(1024)

)

partition by range (c1)

(

partition p1 values less than (2),

partition p2 values less than (3),

partition p3 values less than (100)

)

;

create unique index target_table_pk on target_table (c1, c2) local;

alter table target_table add constraint target_table_pk primary key (c1, c2) using index;

create table source_table

(      c1     number(6),

c2       varchar2(1024)

);

insert /*+append */ into source_table (select distinct 2, owner||object_type||object_name from dba_objects);

commit;

select count(*) from source_table;

begin

execute immediate 'alter session force parallel dml';

insert /*+append parallel */ into target_table

select * from source_table;

end;

/

 

This load generates a serial plan

-----------------------------------------------------------------------------------
| Id | Operation         | Name         | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |             |       |       |   143 (100)|         |
|   1 | LOAD AS SELECT   |             |       |       |           |         |
|   2 |   TABLE ACCESS FULL| SOURCE_TABLE | 80198 |   40M|   143   (1)| 00:00:02 |
-----------------------------------------------------------------------------------

To see what the plan should look like if parallel dml was being used in a sqlplus session:

 

truncate table target_table;

alter session force parallel dml;

INSERT /*+append sqlplus*/ INTO TARGET_TABLE SELECT * FROM SOURCE_TABLE;

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name         | Rows | Bytes | Cost (%CPU)| Time     |   TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT           |             |       |       |     5 (100)|         |       |     |           |
|   1 | PX COORDINATOR             |             |       |       |           |         |       |     |           |
|   2 |   PX SEND QC (RANDOM)       | :TQ10002     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | P->S | QC (RAND) |
|   3 |   INDEX MAINTENANCE       | TARGET_TABLE |       |       |           |         | Q1,02 | PCWP |           |
|   4 |     PX RECEIVE            |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,02 | PCWP |           |
|   5 |     PX SEND RANGE         | :TQ10001     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | P->P | RANGE     |
|   6 |       LOAD AS SELECT       |              |       |       |           |         | Q1,01 | PCWP |           |
|   7 |       PX RECEIVE           |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,01 | PCWP |           |
|   8 |         PX SEND RANDOM LOCAL| :TQ10000     | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | P->P | RANDOM LOCA|
|   9 |         PX BLOCK ITERATOR |             | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWC |           |
|* 10 |           TABLE ACCESS FULL | SOURCE_TABLE | 80198 |   40M|     5   (0)| 00:00:01 | Q1,00 | PCWP |           |

 

truncate table target_table;
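As a side note (not part of the original test case), you can check whether the session-level force has actually registered by querying v$session for the current session:

select pdml_status, pddl_status, pq_status
from   v$session
where  sid = sys_context('userenv', 'sid');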

 

 

Oracle offered us two pieces of advice

  • Use a PARALLEL_ENABLE clause through a function
  • Use the DBMS_PARALLEL_EXECUTE package to achieve parallelism. (this is only available 11.2 onwards)

They also referred us to BUG 12734028 – PDML ONLY FROM PL/SQL DOES NOT WORK CORRECTLY
How To Enable Parallel Query For A Function? ( Doc ID 1093773.1 )

We did try the first option, the function but that failed and we did not move forward on that, concentrating on the DBMS_PARALLEL_EXECUTE package.

So the rest of this blog is around how our testing went and what results we achieved.

Starting with the same source_table contents, and an empty target table, a task needs to be created:

 

BEGIN

DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);

END;

/

The task name could also be set manually, but that does not lend itself to being proceduralised, as the name needs to be unique.

To determine the identifier for the task:

 

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE status = 'CREATED';

 

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_141                                UNDECLARED   CREATED

 

Now the source_table data set must be split into chunks in order to set up discrete subsets of data that will be handled by the subordinate tasks. This demo will split the table by rowid, but it can also be split using block counts or using the values contained in a specific column in the table.

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name   => 'TASK$_141',-

table_owner => 'SYS',-

table_name  => 'SOURCE_TABLE',-

by_row      => TRUE,-

chunk_size  => 20000)

 

 

Note that there are three procedures that can be used to create chunks

PROCEDURE CREATE_CHUNKS_BY_NUMBER_COL

PROCEDURE CREATE_CHUNKS_BY_ROWID

PROCEDURE CREATE_CHUNKS_BY_SQL
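For example, chunking by a numeric column would look something like this (a sketch only, against a hypothetical freshly created task; it uses the C1 column of the test table):

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(task_name    => 'TASK$_142',-
table_owner  => 'SYS',-
table_name   => 'SOURCE_TABLE',-
table_column => 'C1',-
chunk_size   => 20000)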

 

As the chunk size is set to 20000 , and the table contains just over 80000 rows, one would expect 5 chunks to be created:

col CHUNK_ID form 999

col TASK_NAME form a10

col START_ROWID form a20

col end_rowid form a20

set pages 60

select CHUNK_ID, TASK_NAME, STATUS, START_ROWID, END_ROWID from user_parallel_execute_chunks where TASK_NAME = 'TASK$_141' order by 1;

(We will count the rows in each chunk a little later to see how even the split is.)

 

CHUNK_ID TASK_NAME               STATUS               START_ROWID       END_ROWID

---------- ------------------------ -------------------- ------------------ ------------------

142 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtoAAA AAA2sTAABAAAbtvCcP

143 TASK$_141               UNASSIGNED           AAA2sTAABAAAbtwAAA AAA2sTAABAAAbt3CcP

144 TASK$_141               UNASSIGNED           AAA2sTAABAAAbt4AAA AAA2sTAABAAAbt/CcP

145 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSAAAA AAA2sTAABAAAdSHCcP

146 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSIAAA AAA2sTAABAAAdSPCcP

147 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSQAAA AAA2sTAABAAAdSXCcP

148 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSYAAA AAA2sTAABAAAdSfCcP

149 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSgAAA AAA2sTAABAAAdSnCcP

150 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSoAAA AAA2sTAABAAAdSvCcP

151 TASK$_141               UNASSIGNED           AAA2sTAABAAAdSwAAA AAA2sTAABAAAdS3CcP

152 TASK$_141               UNASSIGNED           AAA2sTAABAAAdS4AAA AAA2sTAABAAAdS/CcP

153 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTAAAA AAA2sTAABAAAdTHCcP

154 TASK$_141               UNASSIGNED          AAA2sTAABAAAdTIAAA AAA2sTAABAAAdTPCcP

155 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTQAAA AAA2sTAABAAAdTXCcP

156 TASK$_141               UNASSIGNED           AAA2sTAABAAAdTYAAA AAA2sTAABAAAdTfCcP

157 TASK$_141                UNASSIGNED           AAA2sTAABAAAdTgAAA AAA2sTAABAAAdTnCcP

158 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUAAAA AAA2sTAABAAAdUxCcP

159 TASK$_141               UNASSIGNED           AAA2sTAABAAAdUyAAA AAA2sTAABAAAdVjCcP

160 TASK$_141               UNASSIGNED           AAA2sTAABAAAdVkAAA AAA2sTAABAAAdV/CcP

161 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWAAAA AAA2sTAABAAAdWxCcP

162 TASK$_141               UNASSIGNED           AAA2sTAABAAAdWyAAA AAA2sTAABAAAdXjCcP

163 TASK$_141               UNASSIGNED           AAA2sTAABAAAdXkAAA AAA2sTAABAAAdX/CcP

164 TASK$_141               UNASSIGNED           AAA2sTAABAAAdYAAAA AAA2sTAABAAAdYxCcP

165 TASK$_141                UNASSIGNED           AAA2sTAABAAAdYyAAA AAA2sTAABAAAdZjCcP

166 TASK$_141               UNASSIGNED           AAA2sTAABAAAdZkAAA AAA2sTAABAAAdZ/CcP

167 TASK$_141               UNASSIGNED           AAA2sTAABAAAdaAAAA AAA2sTAABAAAdaxCcP

168 TASK$_141               UNASSIGNED           AAA2sTAABAAAdayAAA AAA2sTAABAAAdbjCcP

169 TASK$_141               UNASSIGNED           AAA2sTAABAAAdbkAAA AAA2sTAABAAAdb/CcP

 

28 rows selected.

Tests were run changing the chunk_size to 40000 and still 28 chunks were created.

Looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE task_name = 'TASK$_141';

TASK_NAME                               CHUNK_TYPE   STATUS

---------------------------------------- ------------ -------------------

TASK$_141                               ROWID_RANGE CHUNKED

It details that the task has been split into discrete ranges of data, and shows how these ranges were determined.

To execute the insert into the target table we create an anonymous pl/sql block and declare the sql that is to be run, with the addition of an extra predicate for a range of rowids:

 

The ranges are passed in from the ranges specified in the user_parallel_execute_chunks table

 

DECLARE

v_sql VARCHAR2(1024);

BEGIN

v_sql := 'insert /*+PARALLEL APPEND */ into target_table

select * from source_table

where rowid between :start_id and :end_id';

dbms_parallel_execute.run_task(task_name     => 'TASK$_141',

sql_stmt       => v_sql,

language_flag  => DBMS_SQL.NATIVE,

parallel_level => 5);

END;

/

PL/SQL procedure successfully completed.

Checking the contents of the target table:

select count(*) from target_table;

COUNT(*)

----------

86353

Now looking at the details of the main task in USER_PARALLEL_EXECUTE_TASKS:

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

WHERE task_name = 'TASK$_141';

TASK_NAME                               STATUS

---------------------------------------- -------------------

TASK$_141                               FINISHED

 

Looking at the user_parallel_execute_chunks entries after completion:

select CHUNK_ID, JOB_NAME, START_TS, END_TS

from user_parallel_execute_chunks

where TASK_NAME = 'TASK$_141' order by 2,3;

CHUNK_ID JOB_NAME             START_TS                       END_TS

---------- -------------------- ------------------------------ -------------------------------

142 TASK$_7410_1         30-JUL-15 02.34.31.672484 PM   30-JUL-15 02.34.31.872794 PM

144 TASK$_7410_1         30-JUL-15 02.34.31.877091 PM   30-JUL-15 02.34.32.204513 PM

146 TASK$_7410_1         30-JUL-15 02.34.32.209950 PM   30-JUL-15 02.34.32.331349 PM

148 TASK$_7410_1         30-JUL-15 02.34.32.335192 PM   30-JUL-15 02.34.32.528391 PM

150 TASK$_7410_1         30-JUL-15 02.34.32.533488 PM   30-JUL-15 02.34.32.570243 PM

152 TASK$_7410_1         30-JUL-15 02.34.32.575450 PM   30-JUL-15 02.34.32.702353 PM

154 TASK$_7410_1         30-JUL-15 02.34.32.710860 PM   30-JUL-15 02.34.32.817684 PM

156 TASK$_7410_1         30-JUL-15 02.34.32.828963 PM   30-JUL-15 02.34.32.888834 PM

158 TASK$_7410_1         30-JUL-15 02.34.32.898458 PM   30-JUL-15 02.34.33.493985 PM

160 TASK$_7410_1         30-JUL-15 02.34.33.499254 PM   30-JUL-15 02.34.33.944356 PM

162 TASK$_7410_1         30-JUL-15 02.34.33.953509 PM   30-JUL-15 02.34.34.366352 PM

164 TASK$_7410_1         30-JUL-15 02.34.34.368668 PM   30-JUL-15 02.34.34.911471 PM

166 TASK$_7410_1         30-JUL-15 02.34.34.915205 PM   30-JUL-15 02.34.35.524515 PM

168 TASK$_7410_1         30-JUL-15 02.34.35.527515 PM   30-JUL-15 02.34.35.889198 PM

169 TASK$_7410_1         30-JUL-15 02.34.35.889872 PM   30-JUL-15 02.34.35.890412 PM

143 TASK$_7410_2         30-JUL-15 02.34.31.677235 PM   30-JUL-15 02.34.32.129181 PM

145 TASK$_7410_2         30-JUL-15 02.34.32.135013 PM   30-JUL-15 02.34.32.304761 PM

147 TASK$_7410_2         30-JUL-15 02.34.32.310140 PM   30-JUL-15 02.34.32.485545 PM

149 TASK$_7410_2         30-JUL-15 02.34.32.495971 PM   30-JUL-15 02.34.32.550955 PM

151 TASK$_7410_2         30-JUL-15 02.34.32.558335 PM   30-JUL-15 02.34.32.629274 PM

153 TASK$_7410_2         30-JUL-15 02.34.32.644917 PM   30-JUL-15 02.34.32.764337 PM

155 TASK$_7410_2         30-JUL-15 02.34.32.773029 PM   30-JUL-15 02.34.32.857794 PM

157 TASK$_7410_2         30-JUL-15 02.34.32.864875 PM   30-JUL-15 02.34.32.908799 PM

159 TASK$_7410_2         30-JUL-15 02.34.32.913982 PM   30-JUL-15 02.34.33.669704 PM

161 TASK$_7410_2         30-JUL-15 02.34.33.672077 PM   30-JUL-15 02.34.34.128170 PM

163 TASK$_7410_2         30-JUL-15 02.34.34.140102 PM   30-JUL-15 02.34.34.624627 PM

165 TASK$_7410_2         30-JUL-15 02.34.34.628145 PM   30-JUL-15 02.34.35.431037 PM

167 TASK$_7410_2         30-JUL-15 02.34.35.433282 PM   30-JUL-15 02.34.35.885741 PM
28 rows selected.

 

From these details there appear to be only two jobs processing the sub-tasks even though a parallel_level of 5 was specified. Why is that? Not enough data to break it up further, or back-end resourcing?

Looking at the details of the query in a tkprof listing of one of the jobs' trace files, you can see that each of the jobs is executing the same plan that was generated with the original pl/sql; however, they are running two tasks in parallel.

 

insert /*+PARALLEL APPEND */ into target_table
           select * from
source_table
             where rowid between :start_id and :end_id


call     count       cpu   elapsed       disk     query   current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       1     0.00       0.00         0         0         0           0
Execute     6     0.26       3.03         4       599       8128       17849
Fetch        0     0.00       0.00         0         0         0           0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total       7     0.26       3.03         4       599       8128       17849

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
         0         0         0 LOAD AS SELECT (cr=59 pr=0 pw=8 time=92689 us)
     1240       1240       1240   FILTER (cr=13 pr=0 pw=0 time=501 us)
     1240       1240       1240   TABLE ACCESS BY ROWID RANGE SOURCE_TABLE (cr=13 pr=0 pw=0 time=368 us cost=42 size=122892 card=228)

 

I think that I could do more testing in this area and try and determine answers to the following questions

 

Why is the parallel parameter not appearing to work?

Does it depend on data volumes or back-end resources?

 

What I was interested in seeing was what was the breakdown of chunks by row count – was it an even split?

select a.chunk_id, count(*)

from user_parallel_execute_chunks a,

source_table b

where a.TASK_NAME = 'TASK$_107'

and b.rowid between a.START_ROWID and a.END_ROWID

group by a.chunk_id

order by 1;

  CHUNK_ID   COUNT(*)
---------- ----------
       108       1407
       109       1233
       110       1217
       111       1193
       112       1192
       113       1274
       114       1213
       115       1589
       116       1191
       117       1226
       118       1190
       119       1201
       120       1259
       121       1273
       122       1528
       123       1176
       124      15874
       125       4316
       126      15933
       127       4283
       128       9055

Reasonably even with most chunks selecting around 1200 rows each but 2 chunks had 15K in each so not that well chunked.

Repeating the process but chunking by blocks this time produced the following results

 

 

SQL> select task_name, chunk_id, dbms_rowid.rowid_block_number(start_rowid) Start_Id,
  2  dbms_rowid.rowid_block_number(end_rowid) End_Id,
  3  dbms_rowid.rowid_block_number(end_rowid) - dbms_rowid.rowid_block_number(start_rowid) num_blocks
  4  from user_parallel_execute_chunks order by task_name, chunk_id;

 

TASK_NAME             CHUNK_ID   START_ID     END_ID NUM_BLOCKS

-------------------- ---------- ---------- ---------- ----------

 

TASK$_133                   134     70096     70103         7

TASK$_133                   135     70104     70111         7

TASK$_133                   136     70112     70119         7

TASK$_133                   137     70120     70127         7

TASK$_133                   138     70128     70135         7

TASK$_133                   139     70136     70143         7

TASK$_133                   140     73344     73351        7

TASK$_133                   141     73352     73359         7

TASK$_133                   142     73360     73367         7

TASK$_133                   143     73368     73375         7

TASK$_133                   144     73376     73383        7

TASK$_133                   145     73384     73391         7

TASK$_133                   146     73392     73399         7

TASK$_133                   147     73400     73407         7

TASK$_133                   148     73408     73415         7

TASK$_133                   149     73416     73423         7

TASK$_133                   150     73472     73521         49

TASK$_133                   151     73522     73571         49

TASK$_133                   152     73572     73599         27

TASK$_133                   153     73600     73649         49

TASK$_133                   154     73650     73699         49

TASK$_133                   155     73700     73727         27

TASK$_133                   156     73728      73777         49

TASK$_133                   157     73778     73827         49

TASK$_133                   158     73828     73855         27

 

So again not very well split. Perhaps there needs to be more data to make it worthwhile.

 

Error handling:  

 

Finally we can have a look at how it handles errors. To save the reader time the simple answer is ‘not very well’.

Let’s force a duplicate key error to show how the package handles errors. The original 80000+ rows are left in the target_table, and the same set of data will be reinserted – which will cause a unique constraint violation for the PK:

 

BEGIN

DBMS_PARALLEL_EXECUTE.create_task (task_name => DBMS_PARALLEL_EXECUTE.generate_task_name);

END;

/

 

PL/SQL procedure successfully completed.

 

 

SELECT task_name, chunk_type, status

FROM   user_parallel_execute_tasks

where status = 'CREATED';

 

TASK_NAME                                CHUNK_TYPE   STATUS
---------------------------------------- ------------ -------------------
TASK$_170                                UNDECLARED   CREATED

 

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name   => 'TASK$_170',-

table_owner => 'SYS',-

table_name  => 'SOURCE_TABLE',-

by_row      => TRUE,-

chunk_size  => 40000)

DECLARE

v_sql VARCHAR2(1024);

BEGIN

v_sql := 'insert /*+PARALLEL APPEND */ into target_table

select * from source_table

where rowid between :start_id and :end_id';

 

dbms_parallel_execute.run_task(task_name     => 'TASK$_170',

sql_stmt       => v_sql,

language_flag => DBMS_SQL.NATIVE,

parallel_level => 10);

END;

/

 

PL/SQL procedure successfully completed.

 

select count(*) from target_table;

 

COUNT(*)

----------

86353

No error is thrown by the run_task call, but the number of rows has not increased. Checking the status of the task shows that there was indeed an error:

 

select task_name, status from user_parallel_execute_tasks where task_name = 'TASK$_170';

 

TASK_NAME                               STATUS

---------------------------------------- -------------------

TASK$_170                               FINISHED_WITH_ERROR

The error details are given in the user_parallel_execute_chunks view:

 

select TASK_NAME, ERROR_MESSAGE, STATUS

from user_parallel_execute_chunks

where TASK_NAME = 'TASK$_170' order by 2;

 

TASK_NAME       ERROR_MESSAGE                                               STATUS

--------------- ------------------------------------------------------------ ----------------------------------------

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170      ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170       ORA-00001: unique constraint (SYS.TARGET_TABLE_PK) violated PROCESSED_WITH_ERROR

TASK$_170                                                                   PROCESSED

TASK$_170                                                                   PROCESSED

 

28 rows selected.
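Once the underlying cause has been dealt with, the failed chunks can be retried with RESUME_TASK, and the task metadata removed with DROP_TASK when it is no longer needed (a sketch, reusing the task name above):

exec DBMS_PARALLEL_EXECUTE.resume_task('TASK$_170')

exec DBMS_PARALLEL_EXECUTE.drop_task('TASK$_170')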

For reference, the chunk-by-block test mentioned earlier was set up with by_row => FALSE (so chunk_size is a number of blocks):

exec DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name   => 'TASK$_133',-

table_owner => 'SYS',-

table_name  => 'SOURCE_TABLE',-

by_row      => FALSE,-

chunk_size  => 20);



Using SYSBACKUP in 12c with a media manager layer


I think most large sites who have multiple support teams are aware of how the phrase “Segregation of Duties” is impacting the DBA world. The basic principle, that one user should not be able to, for instance, add a user, grant it privileges, let the user run scripts and then drop the user and remove all log files is a sound one and cannot be argued with.

With the release of 12c Oracle added three new users to perform administrative tasks. Each user has a corresponding privilege with the same name as the user, which is a bit confusing.

SYSBACKUP – for RMAN backup and recovery work

SYSDG –  to manage DataGuard operations

SYSKM – to manage activities involving ‘key management’ including wallets and Database Vault

I have no real experience of key management so cannot comment on that. I do fail to see which type of user would be allowed to manage a DG setup and yet not be allowed to perform other DBA work on the databases; however, it probably does mean that any requirement to log in as 'sysdba' is now reduced, which can only be a good thing.

 

The SYSBACKUP user is a really good idea and has been a long time coming.

The privileges it has, along with select on many sys views, are:

STARTUP
SHUTDOWN
ALTER DATABASE
ALTER SYSTEM
ALTER SESSION
ALTER TABLESPACE
CREATE CONTROLFILE
CREATE ANY DIRECTORY
CREATE ANY TABLE
CREATE ANY CLUSTER
CREATE PFILE
CREATE RESTORE POINT (including GUARANTEED restore points)
CREATE SESSION
CREATE SPFILE
DROP DATABASE
DROP TABLESPACE
DROP RESTORE POINT (including GUARANTEED restore points)
FLASHBACK DATABASE
RESUMABLE
UNLIMITED TABLESPACE
SELECT ANY DICTIONARY
SELECT ANY TRANSACTION
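As a sketch of how SYSBACKUP would be used directly (the user name is my own choice), a dedicated backup account can be created and used from RMAN without ever granting SYSDBA:

create user backup_admin identified by xxxxxxxx;
grant sysbackup to backup_admin;

$ rman target '"backup_admin/xxxxxxxx as sysbackup"'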

One aspect I was keen to look at was whether we could move the connect string we use in our Media Manager Layer (Simpana from Commvault) away from having to connect using a 'user/password as sysdba' string.

Unfortunately at the moment there is no way of changing the connect string to use the user SYSBACKUP. Simpana will be releasing Version 11 sometime later this year, which will be able to interact with the SYSBACKUP user; however I am unclear as to whether the requirement to connect as SYSDBA will be removed or not.

I am not aware of how other MMLs such as Networker, Netbackup or Data Protector have been updated to include the 12c changes and I am keen to find out.


Identifying database link usage


As part of ongoing security reviews I wanted to determine if all database links on production systems were in use. That is not very easy to do and this article is a listing of some of the options I have considered to get that information and how it is now possible from 11GR2 onwards.

The first option was to look and see if auditing can be used. The manual states “You can audit statements that refer to tables, views, sequences, standalone stored procedures or functions, and packages, but not individual procedures within packages. (See “Auditing Functions, Procedures, Packages, and Triggers” for more information about auditing these types of objects.)

You cannot directly audit statements that reference clusters, database links, indexes, or synonyms. However, you can indirectly audit access to these schema objects, by auditing the operations that affect the base table.”

So you could audit activities on a base table that a database link might utilise, probably via a synonym. However that would show all table usage but it would be very difficult to break it down to see if a database link had been involved.

On the assumption that the code has a call to “@db_link_name” you could probably trawl ASH data or v$sql to see if a reference is available. It would be more likely that a synonym would be in use and as we have said above, we cannot audit synonym usage but you could maybe find it in v$sql. Again very work intensive with no guaranteed return.

There has been an enhancement request in MoS since 2006 – search for Bug 5098260

Jared Still posted a routine, although he does not claim to be the original author,  which shows a link being actually used. However in reality that is not really a good way of capturing information across many systems unless you enable an excessive amount of tracing or monitoring across all systems. I have demoed usage of it below and it does work

I’ve created a DB link from SNAPCL1A to SNAPTM1. First I opened the DB link:

 

select sysdate from dual@snaptm1;
SYSDATE
---------
22-SEP-15

 

I can see my DB link being opened in v$dblink (in my own session):

select a.db_link, u.username, logged_on, open_cursors, in_transaction, update_sent

from v$dblink a, all_users u

where a.owner_id = u.user_id;
DB_LINK                        USERNAME                       LOG OPEN_CURSORS IN_ UPD
------------------------------ ------------------------------ --- ------------ --- ---
SNAPTM1                        SYS                            YES            0 YES NO

The following script can be used to see open DB link sessions (on both databases). It can be executed from any session and it will only show open DB links (that have not been committed, rolled back or manually closed/terminated on the origin database):

col origin for a30

col "GTXID" for a30

col lsession for a10

col username for a20

col waiting for a50

Select /*+ ORDERED */

substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10)      "ORIGIN",

substr(g.K2GTITID_ORA,1,35) "GTXID",

substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,

s2.username,

decode(bitand(ksuseidl,11),

1,'ACTIVE',

0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),

2,'SNIPED',

3,'SNIPED',

'KILLED'

) "State",

substr(w.event,1,30) "WAITING"

from  x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2

where  g.K2GTDXCB =t.ktcxbxba

and   g.K2GTDSES=t.ktcxbses

and  s.addr=g.K2GTDSES

and  w.sid=s.indx

and s2.sid = w.sid;

 

On the origin database:

 

ORIGIN                         GTXID                          LSESSION   USERNAME             State    WAITING
------------------------------ ------------------------------ ---------- -------------------- -------- --------------------------------------------------
teora01x-3762                  SNAPCL1A.cc76ea8a.7.32.983     125.5      SYS                  INACTIVE SQL*Net message from client

 

On the destination database:

 

ORIGIN                         GTXID                          LSESSION   USERNAME             State    WAITING
------------------------------ ------------------------------ ---------- -------------------- -------- --------------------------------------------------
teora01x-4065                  SNAPCL1A.cc76ea8a.7.32.983     133.599    SYSTEM               INACTIVE SQL*Net message from client

 

Now, I rollback my session on the origin database:

 

SQL> rollback;
Rollback complete.

If I query the v$dblink view, I still see my link there, but the transaction is closed now:

 

select a.db_link, u.username, logged_on, open_cursors, in_transaction, update_sent
from v$dblink a, all_users u
where a.owner_id = u.user_id;

 

DB_LINK                        USERNAME             LOG OPEN_CURSORS IN_ UPD
------------------------------ -------------------- --- ------------ --- ---
SNAPTM1                        SYS                  YES            0 NO  NO

The script will not return anything at this point:

SQL> Select /*+ ORDERED */
            substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10) "ORIGIN",
            substr(g.K2GTITID_ORA,1,35) "GTXID",
            substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,
            s2.username,
            decode(bitand(ksuseidl,11),
                   1,'ACTIVE',
                   0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),
                   2,'SNIPED',
                   3,'SNIPED',
                   'KILLED') "State",
            substr(w.event,1,30) "WAITING"
     from   x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
     where  g.K2GTDXCB = t.ktcxbxba
     and    g.K2GTDSES = t.ktcxbses
     and    s.addr = g.K2GTDSES
     and    w.sid = s.indx
     and    s2.sid = w.sid;

no rows selected 

However, since at least 11.2.0.3 Oracle has provided a better means of identifying database link usage after the event, not just while the link is open.

Databases TST11204 and QUICKIE sit on different servers and each has a database link to the other.

TST11204

create user dblinktest identified by Easter2012 ;

grant create session, create database link to dblinktest;

SQL> connect dblinktest/xxxxxxxxxx

Connected.

SQL> select * from test@quickie;

        C1 C2
---------- -----
         5 five

create database link TST11204 connect to dblinkrecd identified by xxxxxx using 'TST11204';

select * from test@TST11204

At this point we have made a connection, so let's see what we can find out about it. I would advise filtering on the timestamp# column of aud$ to reduce the volume of data that has to be searched.

SQL> select userid, terminal, comment$text from sys.aud$ where comment$text like 'DBLINK%';

 

USERID  TERMINAL        COMMENT$TEXT
--------------------------------------------------------------------------------
                        DBLINKRECD DBLINK_INFO: (SOURCE_GLOBAL_NAME=QUICKIE.25386385)

This information is recorded in both the source and target databases.

The DBLINK_INFO entry identifies the source of a database link session. Specifically, it returns a string of the form:

SOURCE_GLOBAL_NAME=dblink_src_global_name, DBLINK_NAME=dblink_name, SOURCE_AUDIT_SESSIONID=dblink_src_audit_sessionid

where:

dblink_src_global_name is the unique global name of the source database

dblink_name is the name of the database link on the source database

dblink_src_audit_sessionid is the audit session ID of the session on the source database that initiated the connection to the remote database using dblink_name.
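
Pulling that together with the advice above about restricting the date range, this is a sketch of the sort of query to run (the seven-day window is arbitrary, and which timestamp column is populated varies by release, so check ntimestamp# versus timestamp# on your version):

select userid, terminal, ntimestamp#, comment$text
from   sys.aud$
where  comment$text like '%DBLINK_INFO%'
and    ntimestamp# > systimestamp - interval '7' day;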

So hopefully that helps in identifying whether a database link is still in use, or when it was last used, and it can form another part of your security toolkit.

 


Managing a PSU in a CDB with several PDBs


I thought it would be a good idea to show how to apply a PSU to the CDB and PDB architecture that came in with 12c. I start off with a quick reminder of how 11g worked and then move on to 12c examples.

11G reminder

get the latest version of opatch

check for conflicts

opatch prereq  CheckConflictAgainstOHWithDetail -ph ./

start the downtime

Stop all the databases in the home (one node at a time for RAC)

apply the patch

opatch apply

start all databases in the home

load SQL into the databases

@catbundle.sql psu apply

end of downtime (but remember to do the standby)

Example of 12c PSU process

Update opatch to latest version

Download and apply patch 6880880 to the oracle home

Check for conflicts with one-offs

Run the prereq check for conflicts:

[CDB2] oracle@localhost:~/12cPSU/19303936

$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Oracle Interim Patch Installer version 12.1.0.1.8
Copyright (c) 2015, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version   : 12.1.0.1.8
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2015-09-22_12-12-54PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

No conflicts should be reported.

Stop all dbs in the home (one node at a time for RAC)

If you are using a Data Guard Physical Standby database, you must install this patch on both the primary database and the physical standby database, as described by My Oracle Support Document 278641.1.

If this is a RAC environment, install the PSU patch using the OPatch rolling (no downtime) installation method as the PSU patch is rolling RAC installable. Refer to My Oracle Support Document 244241.1 Rolling Patch – OPatch Support for RAC.

If this is not a RAC environment, shut down all instances and listeners associated with the Oracle home that you are updating.

Apply the patch

[CDB2] oracle@localhost:~/12cPSU/19303936

$ opatch apply

Normal opatch output here - nothing has changed at this step.

Start all dbs in the home

Start the CDB:

[setsid] to CDB
startup

If this is multitenant then start any PDBs that are not open:

select name, open_mode from v$pdbs order by 1;
NAME                                    OPEN_MODE
---------------------------------------- ------------------------------
PDB$SEED                                 READ ONLY
PDB1                                     MOUNTED
PDB2                                     MOUNTED
alter pluggable database [all|<<name>>] open;

Start the listener(s) if it was stopped too.

Load SQL into all dbs

Prior to 12c you needed to connect to each database individually and run catbundle.sql psu apply. In 12c you only need to run datapatch -verbose. This will connect to CDB$ROOT, PDB$SEED and all open PDBs and run the SQL updates:

cd $ORACLE_HOME/OPatch
[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ ./datapatch -verbose

 

SQL Patching tool version 12.2.0.0.0 on Tue Sep 22 12:49:21 2015
Copyright (c) 2014, Oracle. All rights reserved.

 

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_12889.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
Determining current state...done

 

Current state of SQL patches:
Bundle series PSU:
ID 1 in the binary registry and not installed in any PDB

 

Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
   Nothing to roll back
   The following patches will be applied:
     19303936 (Database Patch Set Update : 12.1.0.2.1 (19303936))

 

Installing patches...
Patch installation complete. Total patches installed: 3

 

Validating logfiles...
Patch 19303936 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_CDBROOT_2015Sep22_12_50_10.log (no errors)
Patch 19303936 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDBSEED_2015Sep22_12_50_14.log (no errors)
Patch 19303936 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDB1_2015Sep22_12_50_15.log (no errors)
SQL Patching tool complete on Tue Sep 22 12:50:19 2015

Note that PDB2 was not picked up; that is because I left it MOUNTED rather than open.

So what happens now if I try to open it?

[CDB2] oracle@localhost:/oradata/diag/rdbms/cdb2/CDB2/trace
$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 22 13:13:30 2015

 

Copyright (c) 1982, 2014, Oracle. All rights reserved.

 

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                      READ ONLY NO
         3 PDB2                           MOUNTED
         4 PDB1                           READ WRITE NO

 

  SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

  no rows selected

SQL> alter pluggable database pdb2 open;

 

Warning: PDB altered with errors.

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
         3 PDB2                           READ WRITE YES
         4 PDB1                           READ WRITE NO
SQL> col time for a20
SQL> col name for a20
SQL> col cause for a20 wrap
SQL> col status for a20
SQL> col message for a60 wrap
SQL> set lines 200
SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS where cause='SQL Patch' order by name;

 

TIME                 NAME                 CAUSE               STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.14.51.4 PDB2                 SQL Patch           PENDING             PSU bundle patch 1: Installed in Database Patch Set Update :
18558 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB. 

So it tells us that PDB2 does not have the PSU installed.

The fix here is to rerun datapatch now that PDB2 is open:

cd /u01/app/oracle/product/12.1.0.2/OPatch
[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ ./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Tue Sep 22 13:19:15 2015
Copyright (c) 2014, Oracle. All rights reserved.

 

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_14499.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
Determining current state...done

 

Current state of SQL patches:
Bundle series PSU:
ID 1 in the binary registry and ID 1 in PDB CDB$ROOT, ID 1 in PDB PDB$SEED, ID 1 in PDB PDB1

 

Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
   Nothing to roll back
   Nothing to apply
For the following PDBs: PDB2
   Nothing to roll back
   The following patches will be applied:
     19303936 (Database Patch Set Update : 12.1.0.2.1 (19303936))

 

Installing patches...
Patch installation complete. Total patches installed: 1

 

Validating logfiles...
Patch 19303936 apply (pdb PDB2): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/19303936/18116864/19303936_apply_CDB2_PDB2_2015Sep22_13_19_47.log (no errors)
SQL Patching tool complete on Tue Sep 22 13:19:49 2015

This time it only patched PDB2 and skipped over the others.

Now what does the database think?

[CDB2] oracle@localhost:/u01/app/oracle/product/12.1.0.2/OPatch
$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 22 13:21:11 2015

 

Copyright (c) 1982, 2014, Oracle. All rights reserved.

 


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> show pdbs

 

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
         3 PDB2                           READ WRITE YES
         4 PDB1                           READ WRITE NO
SQL> col time for a20
SQL> col name for a20
SQL> col cause for a20 wrap
SQL> col status for a20
SQL> col message for a60 wrap
SQL> set lines 200
SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

 

TIME                 NAME                 CAUSE               STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.14.51.4 PDB2                 SQL Patch           PENDING             PSU bundle patch 1: Installed in Database Patch Set Update :
18558 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB.

Nothing has changed; the status is still PENDING, so I need to bounce PDB2:

SQL> alter pluggable database pdb2 close;

 

Pluggable database altered.

 

SQL> alter pluggable database pdb2 open;

 

Pluggable database altered.

SQL> show pdbs

   CON_ID CON_NAME                       OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY NO
       3 PDB2                           READ WRITE NO
         4 PDB1                           READ WRITE NO

 

SQL> select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;

 

TIME                 NAME                 CAUSE                STATUS               MESSAGE
-------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------
22-SEP-15 01.23.24.0 PDB2                 SQL Patch           RESOLVED            PSU bundle patch 1: Installed in Database Patch Set Update :
47678 PM                                                                             12.1.0.2.1 (19303936) but not in the CDB.

The row is still returned but the STATUS is now RESOLVED.

Moving PDB between PSU versions

In this example we will move PDB1 from CDB1 to CDB2 which is at a higher PSU.

Create new home

Create separate ORACLE_HOME and CDB2 and apply PSU at version higher than existing.

Run datapatch on new home to apply PSU to CDB$ROOT and PDB$SEED.

 

Stop application and existing PDB

Downtime will start now.

Stop the PDB1:

setsid [CDB1]
sysdba
ALTER PLUGGABLE DATABASE PDB1 CLOSE;
Pluggable database altered.

Unplug PDB1 from CDB1

Unplug the PDB into metadata xml file:

ALTER SESSION SET CONTAINER = CDB$ROOT;
Session altered.
SQL> ALTER PLUGGABLE DATABASE PDB1 UNPLUG INTO '/u01/oradata/CDB1/PDB1/PDB1.xml';
Pluggable database altered

Plug PDB1 into CDB2.

Plug metadata into CDB2:

setsid [CDB2]
sysdba
SQL> CREATE PLUGGABLE DATABASE PDB1
     USING '/u01/oradata/CDB1/PDB1/PDB1.xml'
     MOVE FILE_NAME_CONVERT = ('/u01/oradata/CDB1/PDB1','/u01/oradata/CDB2/PDB1');
Pluggable database created.

The use of the MOVE clause makes the new pluggable database creation very quick, since the database files are not copied but only moved on the file system. This operation is immediate if using the same file system.

Now open up PDB1 on CDB2:

SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN;
Pluggable database altered

Load modified SQL files into the database with Datapatch tool

Run datapatch to load SQL into PDB1

setsid [CDB2]
cd $ORACLE_HOME/OPatch
./datapatch -verbose

Start application

Downtime ends and the application can be restarted

Inventory Reporting

New at 12c

Simon Pane at Pythian has produced a very useful script, which we converted into a shell script: http://www.pythian.com/blog/oracle-database-12c-patching-dbms_qopatch-opatch_xml_inv-and-datapatch/

At 12c you can see what is applied to both the Oracle Home and the database by querying just the database; a cut-down sketch of the idea follows, and the report sections further down come from the full script.
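
This is not Simon's script, just the core of the comparison, assuming the 12.1 column names:

-- SQL-level patch state recorded in the database (use CDB_REGISTRY_SQLPATCH from CDB$ROOT for all containers)
select patch_id, patch_uid, status, description, action_time
from   dba_registry_sqlpatch
order  by action_time;

-- Binary (Oracle Home) inventory, readable from inside the database
select dbms_qopatch.get_opatch_lsinventory from dual;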


 

List of PSUs applied to both the $OH and the DB

 


NAME             PATCH_ID PATCH_UID ROLLBACK STATUS         DESCRIPTION
--------------- ---------- ---------- -------- --------------- ------------------------------------------------------------
JH1PDB           19769480   18350083 true    SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)
JHPDB             19769480   18350083 true     SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)
CDB$ROOT         19769480   18350083 true     SUCCESS         Database Patch Set Update : 12.1.0.2.2 (19769480)

 


PSUs installed into the $OH but not applied to the DB

 


NAME             PATCH_ID PATCH_UID DESCRIPTION
--------------- ---------- ---------- ------------------------------------------------------------
JH2PDB            19769480   18350083 Database Patch Set Update : 12.1.0.2.2 (19769480)

 


PSUs applied to the DB but not installed into the $OH

 


no rows selected

Note – PDB$SEED is normally READ ONLY and hence no record is returned from the CDB_REGISTRY_SQLPATCH view, so this PDB is excluded from this SQL.


Purge_stats – a worked example of improving performance and reducing storage


I have written a number of blog entries around managing the SYSAUX tablespace and more specifically the stats tables that are in there.

https://jhdba.wordpress.com/2014/07/09/tidying-up-sysaux-removing-old-snapshots-which-you-didnt-know-existed

https://jhdba.wordpress.com/2009/05/19/purging-statistics-from-the-sysaux-tablespace

https://jhdba.wordpress.com/2011/08/23/939/

This is another one around the same theme. What I offer in this one is some new scripts to identify what is working and what is not, along with a worked example with real numbers and timings.

Firstly let's define the problem. This is a large 11.2.0.4 database running on an 8 node Exadata cluster, with lots of daily loads and an ongoing problem in keeping statistics up to date. The daily automatic maintenance window runs for 5 hours Mon-Fri and 20 hours each weekend day. Around 50% of that midweek window is spent purging stats, which doesn't leave a lot of time for much else.

How do I know that?

col CLIENT_NAME form a25
COL MEAN_JOB_DURATION form a35
col MAX_DURATION_LAST_30_DAYS form a35
col MEAN_JOB_CPU form a35
set lines 180
col window_name form a20
col job_info form a70

 

SQL> select client_name from dba_autotask_client;
 CLIENT_NAME
----------------------------------------------------------------
sql tuning advisor
auto optimizer stats collection
auto space advisor
 
SQL>select client_name, MEAN_JOB_DURATION, MEAN_JOB_CPU, MAX_DURATION_LAST_30_DAYS from dba_autotask_client;
CLIENT_NAME         MEAN_JOB_DURATION                   MEAN_JOB_CPU                        MAX_DURATION_LAST_30_DAYS
-------------------- ----------------------------------- ----------------------------------- -----------------------------------
auto optimizer stats +000000000 03:37:36.296875000       +000000000 01:35:20.270514323       +000 19:59:56
auto space advisor   +000000000 00:12:49.440677966       +000000000 00:04:45.584406780
sql tuning advisor   +000000000 00:11:53.266666667       +000000000 00:10:22.344666667
 
SQL>select client_name,window_name, job_info from DBA_AUTOTASK_JOB_HISTORY order by window_start_time
 CLIENT_NAME         WINDOW_NAME         JOB_INFO
-------------------- -------------------- ---------------------------------------------------------------------------
 auto optimizer stats WEDNESDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats THURSDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats FRIDAY_WINDOW       REASON="Stop job called because associated window was closed"
 collection
auto optimizer stats SATURDAY_WINDOW     REASON="Stop job called because associated window was closed"
 collection
 auto optimizer stats SUNDAY_WINDOW       REASON="Stop job called because associated window was closed"

So we know that the window is timing out and that the stats collection is taking an average of over 3 hours. Another way is to see how long the individual purge jobs take:

Col operation format a60
Col start_time format a20
Col end_time form a20
Col duration form a10
alter session set nls_date_format = 'dd-mon-yyyy hh24:mi';
     select operation||decode(target,null,null,' for '||target) operation
          ,cast(start_time as date) start_time
          ,cast(end_time as date) end_time
          ,to_char(floor(abs(cast(end_time as date)-cast(start_time as date))*86400/3600),'FM09')||':'||
          to_char(floor(mod(abs(cast(end_time as date)-cast(start_time as date))*86400,3600)/60),'FM09')||':'||
         to_char(mod(abs(cast(end_time as date)-cast(start_time as date))*86400,60),'FM09') as DURATION
    from dba_optstat_operations
    where start_time between to_date('20-sep-2015 16','dd-mon-yyyy hh24') and to_date('10-oct-2015 21','dd-mon-yyyy hh24')
    and operation ='purge_stats'
 
OPERATION                                                   START_TIME           END_TIME             DURATION
------------------------------------------------------------ -------------------- -------------------- ----------
purge_stats                                                 30-sep-2015 17:31   30-sep-2015 20:02   02:31:01
purge_stats                                                  02-oct-2015 17:22   02-oct-2015 20:00   02:37:42
purge_stats                                                 28-sep-2015 18:04   28-sep-2015 20:38   02:34:18
purge_stats                                                 29-sep-2015 17:35   29-sep-2015 20:00   02:25:04
purge_stats                                                 05-oct-2015 17:53   05-oct-2015 20:48   02:55:20
purge_stats                                                 01-oct-2015 17:17   01-oct-2015 19:28   02:11:17
 
6 rows selected.

Another (minor) factor was how much space the optimizer stats were taking in the SYSAUX tablespace: over 500GB, which does seem a lot for 14 days of history.

COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40

 

     SELECT occupant_name "Item",
     space_usage_kbytes/1048576 "Space Used (GB)",
     schema_name "Schema",
     move_procedure "Move Procedure"
     FROM v$sysaux_occupants
     WHERE occupant_name in ('SM/OPTSTAT')
     ORDER BY 1;

 


Item                     Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         527.17 SYS
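
Before going further it is worth confirming what the retention actually is and how far back the history really goes; the standard DBMS_STATS calls cover this:

-- Configured history retention (days) and the oldest statistics history available
select dbms_stats.get_stats_history_retention from dual;
select dbms_stats.get_stats_history_availability from dual;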

I ran through some of the techniques mentioned in the posts above: checking the retention period was 14 days, checking we had no data older than that, and checking that the tables were being partitioned properly and split into new partitions. There was no obvious answer as to why the purge was taking 2.5 hours every day, so I set up a trace, ran the purge manually and reviewed the tkprof output.

ALTER SESSION SET TRACEFILE_IDENTIFIER = "JOHN";
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
set timing on
exec dbms_stats.purge_stats(sysdate-14);
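
The trace file was then formatted with tkprof; a typical invocation looks like this (the file names are placeholders):

tkprof DB1_ora_12345_JOHN.trc purge_stats.prf sys=yes sort=exeela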

 

There seemed to be quite a few entries for sql_ids similar to the one below, each taking around 10 minutes to insert data into a partition.

SQL ID: 66yqxrmjwfsnr Plan Hash: 3442357004
insert /*+ RELATIONAL("WRI$_OPTSTAT_HISTHEAD_HISTORY")
PARALLEL("WRI$_OPTSTAT_HISTHEAD_HISTORY",1) APPEND NESTED_TABLE_SET_SETID
NO_REF_CASCADE */ into "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" partition
("SYS_P894974") ("OBJ#","INTCOL#","SAVTIME","FLAGS","NULL_CNT","MINIMUM",
"MAXIMUM","DISTCNT","DENSITY","LOWVAL","HIVAL","AVGCLN","SAMPLE_DISTCNT",
"SAMPLE_SIZE","TIMESTAMP#","EXPRESSION","COLNAME","SPARE1","SPARE2",
"SPARE3","SPARE4","SPARE5","SPARE6") (select /*+
RELATIONAL("WRI$_OPTSTAT_HISTHEAD_HISTORY")
PARALLEL("WRI$_OPTSTAT_HISTHEAD_HISTORY",1) */ "OBJ#" ,"INTCOL#" ,"SAVTIME"
,"FLAGS" ,"NULL_CNT" ,"MINIMUM" ,"MAXIMUM" ,"DISTCNT" ,"DENSITY" ,"LOWVAL" ,
"HIVAL" ,"AVGCLN" ,"SAMPLE_DISTCNT" ,"SAMPLE_SIZE" ,"TIMESTAMP#" ,
"EXPRESSION" ,"COLNAME" ,"SPARE1" ,"SPARE2" ,"SPARE3" ,"SPARE4" ,"SPARE5" ,
"SPARE6" from "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY" partition
("SYS_P894974") ) delete global indexes
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1     71.29     571.77     498402      26399    1322741           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     71.29     571.78     498402      26399    1322741           0

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD AS SELECT  (cr=26399 pr=498402 pw=60633 time=571778609 us)
   5398837    5398837    5398837   PARTITION RANGE SINGLE PARTITION: 577 577 (cr=22524 pr=22504 pw=0 time=9824150 us cost=3686 size=399126189 card=5183457)
   5398837    5398837    5398837    TABLE ACCESS FULL WRI$_OPTSTAT_HISTHEAD_HISTORY PARTITION: 577 577 (cr=22524 pr=22504 pw=0 time=9169817 us cost=3686 size=399126189 card=5183457)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    415262        0.16        458.40
  db file scattered read                        370        0.16         11.96
  direct path write temp                       2707        0.16         36.48
  db file parallel read                           2        0.02          0.04
  direct path read temp                        3742        0.04          8.15
********************************************************************************

I then spent some time reviewing that table and its partitions - they seemed well balanced.

SQL> select partition_name from dba_tab_partitions where table_name = 'WRI$_OPTSTAT_HISTHEAD_HISTORY';

 

PARTITION_NAME
------------------------------
SYS_P909952
SYS_P908538
SYS_P906715
SYS_P905214
P_PERMANENT

 

select PARTITION_NAME,HIGH_VALUE,TABLESPACE_NAME,NUM_ROWS,LAST_ANALYZED
from DBA_TAB_PARTITIONS where table_owner ='SYS' and table_name='WRI$_OPTSTAT_HISTHEAD_HISTORY' order by 1;

 

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME               NUM_ROWS LAST_ANAL
------------------------------ -------------------------------------------------------------------------------- ------------------------------ ---------- ---------
P_PERMANENT                    TO_DATE(' 2014-02-23 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX0 15-SEP-15
SYS_P905214                   TO_DATE(' 2015-09-29 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX 5026601 29-SEP-15
SYS_P906715                   TO_DATE(' 2015-09-30 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX 5178283 30-SEP-15
SYS_P908538                   TO_DATE(' 2015-10-01 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX 4977859 01-OCT-15
SYS_P909952                   TO_DATE(' 2015-10-09 09:00:52', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA SYSAUX

 

select trunc(SAVTIME), count(*) from WRI$_OPTSTAT_HISTGRM_HISTORY group by trunc(SAVTIME) order by 1
TRUNC(SAV   COUNT(*)
--------- ----------
18-SEP-15   22741579
19-SEP-15   24299504
20-SEP-15   22816509
21-SEP-15   24200455
22-SEP-15   24156330
23-SEP-15   24407643
24-SEP-15   23469620
25-SEP-15   23221382
26-SEP-15   25372495
27-SEP-15   23144212
28-SEP-15   23522809
29-SEP-15   24362715
30-SEP-15   25418527
01-OCT-15   24383030

I decided to rebuild the index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST, really only because it seemed very big for the number of rows it contained, at 192GB.


col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type order by 1 asc
/
 
        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 WRI$_OPTSTAT_SYNOPSIS_PARTGRP            TABLE
         0 WRI$_OPTSTAT_AUX_HISTORY                 TABLE
        72 WRI$_OPTSTAT_OPR                         TABLE
       385 WRI$_OPTSTAT_SYNOPSIS_HEAD$              TABLE
       845 WRI$_OPTSTAT_IND_HISTORY                 TABLE
     1,221 WRI$_OPTSTAT_TAB_HISTORY                 TABLE

 

select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type order by 1
/

        MB SEGMENT_NAME                             SEGMEN
---------- ---------------------------------------- ------
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
         0 I_WRI$_OPTSTAT_SYNOPPARTGRP              INDEX
         0 WRH$_PLAN_OPTION_NAME_PK                 INDEX
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
        43 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
       293 I_WRI$_OPTSTAT_SYNOPHEAD                 INDEX
       649 I_WRI$_OPTSTAT_IND_ST                    INDEX
       801 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
       978 I_WRI$_OPTSTAT_TAB_ST                    INDEX
     1,474 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
     4,968 I_WRI$_OPTSTAT_HH_ST                     INDEX
     6,807 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX
   129,509 I_WRI$_OPTSTAT_H_ST                      INDEX
   192,304 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX
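
The rebuild commands themselves are not shown above; a sketch of what was run (ONLINE and the degree of parallelism are my assumptions here):

alter index sys.I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST rebuild online parallel 8;
-- and similarly for the other I_WRI$_OPTSTAT% indexes listed above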

That index dropped to 13GB; needless to say, I continued with the rest.

         MB SEGMENT_NAME                             SEGMENT
---------- ---------------------------------------- ------
         0 WRH$_OPTIMIZER_ENV_PK                    INDEX
         0 I_WRI$_OPTSTAT_AUX_ST                    INDEX
         0 I_WRI$_OPTSTAT_SYNOPPARTGRP              INDEX
         0 WRH$_PLAN_OPTION_NAME_PK                 INDEX
         2 I_WRI$_OPTSTAT_OPR_STIME                 INDEX
        39 I_WRI$_OPTSTAT_IND_ST                    INDEX
        47 I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
        56 I_WRI$_OPTSTAT_TAB_ST                    INDEX
        72 I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
       200 I_WRI$_OPTSTAT_SYNOPHEAD                 INDEX
       482 I_WRI$_OPTSTAT_HH_ST                     INDEX
       642 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX
     9,159 I_WRI$_OPTSTAT_H_ST                      INDEX
    12,560 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX

An overall saving of 300Gb

Item                     Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         214.12 SYS

 

I wasn’t too hopeful that this would benefit the purge routine because in the trace file above you can see that it is not using an index.

exec dbms_stats.purge_stats(sysdate-14);
Elapsed: 01:47:17.15

Better, but not by much, so I rebuilt the tables using a move command with parallel 8 and then rebuilt the indexes again. I intentionally left the tables and indexes with the parallel setting they acquire from the move/rebuild, just for testing purposes.

select 'alter table '||segment_name||' move tablespace SYSAUX parallel 8;' from dba_segments where tablespace_name = 'SYSAUX' and segment_name like '%OPT%' and segment_type='TABLE'

 

alter table WRI$_OPTSTAT_TAB_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_IND_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_AUX_HISTORY move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_OPR move tablespace SYSAUX parallel 8;
alter table WRH$_OPTIMIZER_ENV move tablespace SYSAUX parallel 8;
alter table WRH$_PLAN_OPTION_NAME move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_SYNOPSIS_PARTGRP move tablespace SYSAUX parallel 8;
alter table WRI$_OPTSTAT_SYNOPSIS_HEAD$ move tablespace SYSAUX parallel 8;

Then rebuilt the indexes as above

Now we see the space benefits: down to 182GB from 527GB, a saving of around 345GB.


 

Item                      Space Used (GB) Schema                   Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/OPTSTAT                         182.61 SYS

The first purge with no data to purge was instantaneous

SQL> exec dbms_stats.purge_stats(sysdate-14);
Elapsed: 00:00:00.91

 

The second, which purged a day's worth of history, took only 9 minutes:

SQL> exec dbms_stats.purge_stats(sysdate-13);
Elapsed: 00:09:40.65

I then put the tables and indexes back to a parallel degree of 1 and ran a purge of another day's worth of data, which showed a consistent timing (a sketch of the reset commands follows the timing below):

SQL> exec dbms_stats.purge_stats(sysdate-12);
Elapsed: 00:09:32.05
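
Resetting the degree is straightforward; a sketch of the sort of statements used (generated the same way as the move commands above):

alter table WRI$_OPTSTAT_HISTHEAD_HISTORY noparallel;
alter index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST noparallel;
-- repeat for the remaining WRI$_OPTSTAT% tables and I_WRI$_OPTSTAT% indexes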

It is clear from the number of posts on managing optimizer stats and their history that maintenance and management are required, and I hope my set of posts helps readers to do just that.

The following website has some very good information on the subject: http://maccon.wikidot.com/sysaux-stats. I must also credit my colleague Andy Challen for generating some good ideas and providing a couple of scripts. Talking through a plan of action does seem to be a good way of working.

 


12.1.0.2 enhancement – large trace file produced when cursor becomes obsolete


A minor change came in with 12.1.0.2 which is causing large trace files to be produced under certain conditions.

The traces are produced as a result of an enhancement introduced in an unpublished bug.

The aim of the bug is to improve cursor sharing diagnostics by dumping information about an obsolete parent cursor and its child cursors after the parent cursor has been obsoleted N times.
A parent cursor will be marked as obsolete after a certain number of child cursors have been produced under it, as defined by the parameter "_cursor_obsolete_threshold". In 12.1 the default is 1024 child cursors; a quick way to see which statements are approaching that threshold is sketched below.
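
To get a feel for whether any parent cursors are heading towards the threshold, counting child cursors per sql_id in v$sql is a reasonable sanity check (the 100-child cut-off is arbitrary):

-- v$sql has one row per child cursor, so the count per sql_id is the number of children
select sql_id, count(*) child_cursors
from   v$sql
group  by sql_id
having count(*) > 100
order  by 2 desc;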

A portion of the trace file is shown below. We are only seeing this in a 12.1.0.2 OEM database at the moment, and the problem with obsoleted parent cursors is not affecting us in a noticeable manner, although the size and frequency of the trace files are.

----- Cursor Obsoletion Dump sql_id=a1kj6vkgvv3ns -----
Parent cursor obsoleted 1 time(s). maxchild=1024 basephd=0xc4720a8f0 phd=0xc4720a8f0

 

The SQL being:

SELECT TP.PROPERTY_NAME, TP.PROPERTY_VALUE FROM MGMT_TARGET_PROPERTIES TP, MGMT_TARGETS T WHERE TP.TARGET_GUID = T.TARGET_GUID AND T.TARGET_NAME = :B2 AND T.TARGET_TYPE = :B1

 

The ‘feature’ is used to help Oracle support track issues with cursor sharing.

There is a parameter, described in MoS note 1955319.1, which can be used to stop or reduce the frequency of these traces.

The dump can be controlled by the parameter “_kks_obsolete_dump_threshold” that can have a value between 0 and 8.

When the value is equal to 0, the obsolete cursor dump will be disabled completely:

alter system set "_kks_obsolete_dump_threshold" = 0;

When set to a value N between 1 and 8, the cursor will be dumped after it has been obsoleted N times (by default a parent cursor is dumped the first time it is obsoleted, i.e. N is 1):

alter system set "_kks_obsolete_dump_threshold" = 8;

 

The workaround, therefore, is to set this underscore parameter to control when the cursor dump is produced: the range is 0 to 8, with 0 disabling the dump entirely and 8 dumping only after 8 obsoletions.

We will be raising a support call regarding the cursor problem, but will probably set the dump threshold to 8 at first and then to 0 if we still keep getting large traces. The default is 1. A query to check the current setting is sketched below.
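
If you want to confirm the current value before changing it, the usual hidden-parameter lookup works (run as SYS; this relies on the x$ fixed tables, so treat it as a sketch):

select a.ksppinm parameter, b.ksppstvl value, b.ksppstdf is_default
from   x$ksppi a, x$ksppsv b
where  a.indx = b.indx
and    a.ksppinm = '_kks_obsolete_dump_threshold';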

