
Progress update on my AWR repository


I received an email from a team member yesterday:

Have you seen this? https://docs.oracle.com/database/121/TDPPT/tdppt_awr_warehouse.htm

Interesting idea – ever thought of implementing it?

Well, of course I have implemented an AWR data repository, and I thought I would catch up on where I am with it and how it is being used.
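
As an aside, for anyone wondering how such a repository gets populated, the building blocks Oracle supplies are the AWR extract and load scripts. A minimal sketch of that standard approach is below – the directory, file transfer and housekeeping around it is down to you, and I am not documenting our exact scripts here.

-- On each source database: export AWR data for a chosen DBID and snapshot range
-- to a Data Pump dump file (the script prompts for DBID, snapshot range,
-- directory and file name)
@?/rdbms/admin/awrextr.sql

-- On the central repository database: load the transferred dump file
-- (the script prompts for directory, file name prefix and a staging schema)
@?/rdbms/admin/awrload.sql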

The database started out on HP-UX on 11.2.0.1 about two years ago and is now on Linux on 11.2.0.3. The repository now holds a total of 139 DBIDs and we have data going back 400 days for the majority of them. The storage is over 2TB, of which the SYSAUX tablespace accounts for about 95% – we are compressing some data but that is an area I need to look into further.
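
As a rough illustration of keeping an eye on that SYSAUX growth, this is the sort of query that shows which components are using the space on a database with the standard AWR schema (a sketch only):

-- Space used in SYSAUX by occupant; the AWR data is reported under SM/AWR
SELECT occupant_name,
       ROUND(space_usage_kbytes / 1024 / 1024, 1) AS space_gb
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;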

Why 400 days, you might ask? Well, I work in a retail organisation and Easter is a movable feast (pun intended), so a 400-day window allows us to provide data covering Easter whenever it might fall. If we only kept a 365-day period it is possible we would have no data for the previous Easter, and Easter is a key trading period, second only to Xmas.
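
For reference, AWR retention is specified in minutes, so a 400-day window on a standard AWR schema would be set along these lines (a sketch, not our exact housekeeping job):

BEGIN
  -- keep 400 days of snapshots (576,000 minutes), taken every 60 minutes
  dbms_workload_repository.modify_snapshot_settings(
    retention => 400 * 24 * 60,
    interval  => 60);
END;
/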

I suppose the obvious questions are how we have used the data and whether it was all worth the effort.

The repository has been used for workload forecasting, security reviews, auditing, performance reviews and validation of our existing systems.

Workload forecasting is an interesting and unexpected benefit of the repository. It wasn’t something that had occurred to me when I originally proposed the idea, but it has proved very useful. The process is to review how much work a system did a year ago, look at current usage, calculate the difference and then use that to project forward. For the system I am discussing, which measures how long it takes us to process sales data from stores, we have used a combination of metrics – DB time plus sequential and scattered disk read times – to produce a “magic number” that we work against. That forecast is now being mapped against actuals and is proving quite accurate, which has allowed us to prepare the database server for increased volumes – changing disk, adding CPU and memory – so that we maintain and improve on last year’s performance despite increased data capture and store numbers.
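
The sort of query underneath that calculation is sketched below – DB time per snapshot for one source database; similar deltas from DBA_HIST_SYSTEM_EVENT for ‘db file sequential read’ and ‘db file scattered read’ feed the same magic number. The :source_dbid bind is a placeholder and a single instance is assumed; the counters are cumulative and reset on instance restart, so negative deltas need handling.

-- DB time per snapshot for one DBID (values are cumulative, hence the LAG delta)
SELECT s.snap_id,
       s.begin_interval_time,
       t.value - LAG(t.value) OVER (ORDER BY s.snap_id) AS db_time_microsec
FROM   dba_hist_snapshot       s
JOIN   dba_hist_sys_time_model t
       ON  t.snap_id         = s.snap_id
       AND t.dbid            = s.dbid
       AND t.instance_number = s.instance_number
WHERE  s.dbid      = :source_dbid
AND    t.stat_name = 'DB time'
ORDER  BY s.snap_id;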

An often overlooked benefit of AWR data is that it contains a lot of information about what a user has done and which queries they have run. Whilst the overhead of enabling auditing against every SELECT, INSERT, UPDATE and DELETE statement would be too much for many systems, and probably totally unnecessary for most, the AWR repository does a good job of capturing a lot of such data, and it can be retrieved quite easily from the ASH data. I am also looking at the new features in 12c which reduce the overhead of writing audit records to the AUD$ table – Google “12c Unified Auditing” for more information on that.
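
A sketch of the kind of ASH query involved is below. The :source_dbid bind is a placeholder, and note that USER_ID in the historical ASH data maps to the source database’s DBA_USERS rather than the repository’s, so resolving names needs a copy of the source user list. ASH is also sampled data, so short-lived statements may not appear at all.

-- Who ran what, according to the sampled ASH data held in the repository
SELECT h.sample_time,
       h.user_id,
       h.sql_id,
       h.module,
       h.program,
       DBMS_LOB.SUBSTR(t.sql_text, 200, 1) AS sql_text_fragment
FROM   dba_hist_active_sess_history h
LEFT   JOIN dba_hist_sqltext t
       ON  t.dbid   = h.dbid
       AND t.sql_id = h.sql_id
WHERE  h.dbid        = :source_dbid
AND    h.sample_time > SYSTIMESTAMP - INTERVAL '7' DAY
ORDER  BY h.sample_time;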

Another way we use the repository is to take specific SQL statements that we are interested in and look at how they have performed over a long period of time – has performance deteriorated, is there evidence that we need to run more housekeeping, or should we perhaps rewrite the SQL to use a different plan?
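
Something along these lines does the job – average elapsed time per execution and the plan in use for one statement, snapshot by snapshot (the binds are placeholders; a change in PLAN_HASH_VALUE is often the first clue):

-- History of a single SQL statement across snapshots
SELECT s.begin_interval_time,
       q.plan_hash_value,
       q.executions_delta,
       ROUND(q.elapsed_time_delta / NULLIF(q.executions_delta, 0) / 1000) AS avg_elapsed_ms,
       q.buffer_gets_delta
FROM   dba_hist_sqlstat  q
JOIN   dba_hist_snapshot s
       ON  s.snap_id         = q.snap_id
       AND s.dbid            = q.dbid
       AND s.instance_number = q.instance_number
WHERE  q.dbid   = :source_dbid
AND    q.sql_id = :sql_id
ORDER  BY s.begin_interval_time;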

One idea I had was to compare the performance of our warehouses (physical, not DW) against each other. We have a number of distribution centres around the country which all run the same setup and do exactly the same thing each day – basically logging pallets of stock in and out. Therefore the same SQL statements are run in every environment and we should see the same performance at every site. If we analyse certain statements and see one site deviating from the rest, then we might be able to attribute that to a localised network, disk or database performance issue.
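
Because every source database keeps its own DBID in the repository, the comparison can be as simple as grouping one statement’s statistics by DBID – a sketch (the :sql_id bind is a placeholder and the seven-day window is arbitrary):

-- Average elapsed time per execution of one statement, by site (DBID)
WITH per_db AS (
  SELECT q.dbid,
         SUM(q.executions_delta)   AS execs,
         SUM(q.elapsed_time_delta) AS elapsed_us
  FROM   dba_hist_sqlstat  q
  JOIN   dba_hist_snapshot s
         ON  s.snap_id         = q.snap_id
         AND s.dbid            = q.dbid
         AND s.instance_number = q.instance_number
  WHERE  q.sql_id = :sql_id
  AND    s.begin_interval_time >= TRUNC(SYSDATE) - 7
  GROUP  BY q.dbid
)
SELECT p.dbid,
       (SELECT MAX(d.db_name)
        FROM   dba_hist_database_instance d
        WHERE  d.dbid = p.dbid) AS db_name,
       p.execs,
       ROUND(p.elapsed_us / NULLIF(p.execs, 0) / 1000) AS avg_elapsed_ms
FROM   per_db p
ORDER  BY avg_elapsed_ms DESC;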

We also store all our performance testing output in the repository, and one area I am very keen to push is the review of PT data against production actuals once new code has gone live. We do look at that now, but I think we could get more value out of the data we have.

I will implement a 12c repository to put any new 12c systems onto, but as that is not backwards compatible it will be a few years before we have migrated all of production to 12c. I have had some discussions with Oracle people about how we are using our repository and the problems we have had; the 12c AWR Warehouse option is a good way forward, and the ability to use OEM to run comparisons between systems will be beneficial. Reading MoS Note 1907335.1, it appears that we can run an 11.2.0.4 AWR repository (with the correct patching), but I would assume that we could not store 12c data in there.

Overall, the benefits gained have been well worth the effort invested.


