Oracle Statistics Gathering Timeout

January 2024 – Oracle 12C, 19C, 21C, 23C+

Gathering object statistics in Oracle is important. The optimizer needs metadata about each object, such as the number of rows and the number of distinct values in a column, to help it decide the optimum way to access your data. This takes effort, and it happens automatically in scheduled maintenance windows, both overnight and at the weekend. These windows have a limited duration, and if there is a lot of data to analyse to get the requisite metadata, the gather may run out of time.


Large tables, with billions of rows, tend to be partitioned. Oracle introduced INCREMENTAL statistics gathering to minimise the time spent gathering statistics by skipping partitions which have not changed.

If you have a large partitioned table, it is frequently a good idea to switch on INCREMENTAL statistics. However, the actual behaviour is not always obvious when doing this, and the following message formed part of an online help request: “the manual gather(which is incremental stats) on those partition table runs longer as it tries to gather on all the partitions which are not even modified. How should we tackle this scenario“.

To switch on INCREMENTAL stats gathering:

EXEC dbms_stats.set_table_prefs('<schema>','<table>','INCREMENTAL','TRUE')
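
If you want to confirm the effective value of the preference afterwards, GET_PREFS reads it back (table pref first, then global, then default), something like:

select dbms_stats.get_prefs('INCREMENTAL','<schema>','<table>') from dual;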

After the first full gather, the job should only gather stats for partitions which have changed, which minimises the gather time. Global statistics are then derived from the statistics associated with each partition (not subpartition), in conjunction with stored SYNOPSES which allow the global number of distinct column values to be calculated without re-scanning unchanged partitions.

Check the stats gathering regularly

Stats gathering fails silently. You need to check it yourself, regularly! To do this, query DBA_AUTOTASK_JOB_HISTORY for rows where the JOB_STATUS is “STOPPED”. If you see the JOB_INFO message “Stop job called because associated window was closed”, you should investigate, as the stats gather ran out of time. Usually the autotask will catch up at the weekend, when the default window is much longer than the 4-hour weekday window, but you need to understand whether old stats are acceptable to your database (some will be, some may not). Importantly, the weekend window may fail too, and then you’re not getting new stats at all.
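
A check along these lines (a sketch; adjust the filters to taste) shows stats runs which were stopped:

select client_name, job_name, job_status, job_start_time, job_info
from   dba_autotask_job_history
where  client_name = 'auto optimizer stats collection'
and    job_status  = 'STOPPED'
order  by job_start_time desc;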

Two strategies for speeding up the stats gather are to gather stats in parallel, or to gather stats on several objects concurrently. Both will increase the CPU required by the database to gather the stats, so this needs to be balanced against the workload during the gathering windows to ensure you are not negatively impacting the application.

Check out the options:

exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','AUTOMATIC')
(gathers several tables at the same time using multiple jobs)

exec DBMS_STATS.SET_TABLE_PREFS('<schema>', '<table>', 'DEGREE', n)
(where n is the degree of parallelism required, or DBMS_STATS.AUTO_DEGREE)


WARNING: If you are doing ANY PARALLEL processing you really should use Resource Manager to control it.
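
As a minimal sketch of that control (the plan name and the limit of 4 are made-up values for illustration), a Resource Manager plan that caps the degree of parallelism might look like this, activated during the gathering window:

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_plan(
    plan    => 'STATS_WINDOW_PLAN',              -- hypothetical plan name
    comment => 'Cap parallelism while stats gathering runs');
  dbms_resource_manager.create_plan_directive(
    plan                     => 'STATS_WINDOW_PLAN',
    group_or_subplan         => 'OTHER_GROUPS',  -- every plan needs an OTHER_GROUPS directive
    comment                  => 'Limit parallel degree',
    parallel_degree_limit_p1 => 4);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
alter system set resource_manager_plan = 'STATS_WINDOW_PLAN';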

Unexpected Gotchas

Here are a few gotchas you should know about if using INCREMENTAL stats gathering:

Problem
Partitions which are not labelled as “stale” may still be re-gathered. The modification of a single row in a partition will cause the stats to be re-gathered for that partition, and you cannot rely on ALL_TAB_STATISTICS.STALE_STATS, which will still show “NO” under these circumstances.

Solution
Tell the incremental stats gather to use the table’s “stale percent” (default 10%, modifiable with the table pref 'STALE_PERCENT') before it sees a partition as stale, rather than *any* change:

exec dbms_stats.set_table_prefs('<schema>','<table>','INCREMENTAL_STALENESS','USE_STALE_PERCENT')


Problem
If the stats on a partition are locked, and a single row in that partition is modified, INCREMENTAL is effectively (and silently) disabled: Oracle cannot trust the stats for a global aggregation, and you revert to normal “full table” stats gathering.

Solution
Tell the stats gather to treat stale stats on a locked partition (regardless of the amount of DML) as if the partition was not locked (and combine this with the previous solution):

exec dbms_stats.set_table_prefs('<schema>','<table>','INCREMENTAL_STALENESS','USE_STALE_PERCENT,USE_LOCKED_STATS')
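
If you want to see which partitions have locked stats in the first place, DBA_TAB_STATISTICS records it (a sketch):

select partition_name, stattype_locked, stale_stats, last_analyzed
from   dba_tab_statistics
where  owner = '<SCHEMA>'
and    table_name = '<TABLE>'
and    stattype_locked is not null;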


Problem
A SQL statement uses a previously unused predicate (a column in the WHERE clause) and, because the default METHOD_OPT (FOR ALL COLUMNS SIZE AUTO) is in place, Oracle may create a new histogram automatically.

Solution
Check which columns are candidates for a histogram (recorded in SYS.COL_USAGE$) and explicitly control your histograms. This does mean that you don’t get new histograms automatically, which might not be your intention, but you will only get the histograms that you ask for.

exec dbms_stats.set_table_prefs('<schema>','<table>','METHOD_OPT',
  'FOR ALL COLUMNS SIZE 1
   FOR COLUMNS SIZE 256  colA,colQ
   FOR COLUMNS SIZE 2000 colB,colZ');
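
To see which columns Oracle has tracked as histogram candidates, DBMS_STATS.REPORT_COL_USAGE summarises SYS.COL_USAGE$ for you (a quick sketch; SET LONG so the returned CLOB isn’t truncated):

set long 100000
select dbms_stats.report_col_usage('<schema>','<table>') from dual;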


Problem
There are jobs gathering stats as well as the auto task, and those jobs are passing in parameters which are causing the issue (rather than relying on table or global prefs for consistency).

Solution
Consider ignoring any parameters passed in to stats gather jobs. That way, every stats gather will only use the preferences and will be consistent across each gather:

exec dbms_stats.set_global_prefs('PREFERENCE_OVERRIDES_PARAMETER','TRUE');
or
exec dbms_stats.set_table_prefs('<schema>','<table>','PREFERENCE_OVERRIDES_PARAMETER','TRUE');


To validate what is going on, you can always (carefully!) trace the stats gather, either at table level or globally, using the UNDOCUMENTED preference “trace”. Be aware that you may get a lot of trace file output, and a potential performance hit on the stats gather job too.

exec dbms_stats.set_global_prefs('trace',65532);

Finally, when you switch on INCREMENTAL statistics, Oracle needs to store additional SYNOPSES data to allow it to calculate the number of distinct values in a column more easily. This can take a lot of space in SYSAUX, although this has improved significantly from Oracle 12.2 onwards, with the synopsis tables occupying around 3-5% of their previous footprint. For more information about SYNOPSES, Oracle Support maintains a note with a query: 1953961.1. For more information about INCREMENTAL stats in general, the Oracle Optimizer PM Nigel Bayliss has some excellent blog posts from 2020.
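
If you want a rough idea of how much SYSAUX space the synopses are using, the underlying segments follow the usual WRI$_OPTSTAT_SYNOPSIS naming (a sketch; exact segment names vary between versions):

select segment_name, segment_type, round(sum(bytes)/1024/1024) size_mb
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like 'WRI$_OPTSTAT_SYNOPSIS%'
group  by segment_name, segment_type;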

Exadata System Statistics

Following on from last week’s Oracle Optimizer System Statistics post, I thought it worthwhile adding a note about gathering system statistics in Exadata mode.

exec dbms_stats.gather_system_stats('EXADATA');

So what is this, and why would you choose to do it? First of all, these are not workload system statistics (which I believe you should never gather without extraordinary proof of their effectiveness; all official Oracle documentation and MOS notes about workload stats should now point to this blog post explaining why: https://blogs.oracle.com/optimizer/should-you-gather-system-statistics). Workload stats measure a number of I/O and CPU metrics. Exadata stats are a little bit of measurement (the I/O transfer speed, IOTFRSPEED) and the hard-setting of the multi-block read count (MBRC). How are these set?


The IOTFRSPEED is the rate at which the Oracle database can read data in a single read request. It is measured and set (subject to bug 17501565.8, which pre-12.1 could get in the way and leave it at the default). The default is 4096, but after measuring and setting it will increase significantly. For the worked example below, let’s say the speed was measured at 200,000 (about 200MB/s, which is a good number for an Exadata).

The MBRC is set by copying in db_file_multiblock_read_count from the initialization parameters. By default this parameter is not set explicitly and relies upon two hidden parameters. If your block size is 8k, it will probably be 128, and if your block size is 16k it will probably be 64. If your block size is any other size, I hope you have a well-tested and proven reason for your edge-case configuration.
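
You can see what is currently recorded by querying the system statistics directly (SYS.AUX_STATS$ is where DBMS_STATS stores them):

select pname, pval1
from   sys.aux_stats$
where  sname = 'SYSSTATS_MAIN'
and    pname in ('MBRC','IOTFRSPEED','IOSEEKTIM','SREADTIM','MREADTIM','CPUSPEEDNW');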

Each of these settings changes the balance the optimizer strikes between single block reads (to read index blocks, and to read table rows by rowid) and full table scans. Higher MBRC and IOTFRSPEED values mean table scans become more attractive to the optimizer, so they will occur more frequently. What does an Exadata do very well? Offload table scans to the storage cells.

DEFAULT, 8K Block Size
MBRC=8 (db_file_multiblock_read_count is not set, so use _db_file_optimizer_read_count)
IOTFRSPEED=4096
Calculated Ratio 0.271, or 3.69 multiblocks for every single block.

DEFAULT, 16K Block Size
MBRC=8 (db_file_multiblock_read_count is not set, so use _db_file_optimizer_read_count)
IOTFRSPEED=4096
Calculated Ratio 0.375, or 2.67 multiblocks for every single block.

16k block sizes mean indexes are slightly more likely to be used, as you get fewer blocks (2.67 vs 3.69) for the cost of each single block read.
DEFAULT, 8K Block Size, db_file_multiblock_read_count set explicitly
MBRC=128 (db_file_multiblock_read_count is explicitly set, so use the explicit 128)
IOTFRSPEED=4096
Calculated Ratio 0.173, or 5.77 multiblocks for every single block.

Table scans are a little more attractive than the default: you get 5.77 blocks for the cost of each single block read instead of 3.69.
EXADATA GATHERED, 8K Block Size, db_file_multiblock_read_count NOT set explicitly
MBRC=128 (db_file_multiblock_read_count is not set, so set it to the value of _db_file_exec_read_count)
IOTFRSPEED=200000 - this was measured and set by the 'EXADATA' gather.
Calculated Ratio 0.012, or 84.32 multiblocks for every single block.

For an index to be used, it is going to have to be much more efficient than a table scan. Pulling a few rows out of unique indexes will continue to work well, but any kind of predicate which returns a set of data will be pushed towards a FTS (or IFFS), and the Exadata will therefore offload it to the storage cells.
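
For reference, the ratios above can be reproduced with the standard noworkload formulas (single block read time = IOSEEKTIM + block size / IOTFRSPEED, multiblock read time = IOSEEKTIM + MBRC * block size / IOTFRSPEED), assuming IOSEEKTIM is left at its default of 10:

with s as (
  select 'DEFAULT 8K'       scenario, 8192 blksz, 8 mbrc, 4096 iotfrspeed from dual union all
  select 'DEFAULT 16K'              , 16384     , 8     , 4096            from dual union all
  select 'EXPLICIT MBRC 8K'         , 8192      , 128   , 4096            from dual union all
  select 'EXADATA GATHER 8K'        , 8192      , 128   , 200000          from dual
)
select scenario,
       round( (10 + mbrc*blksz/iotfrspeed) / (mbrc*(10 + blksz/iotfrspeed)), 3) ratio,
       round( (mbrc*(10 + blksz/iotfrspeed)) / (10 + mbrc*blksz/iotfrspeed), 2) blocks_per_single
from   s;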

So how does this affect optimizer costing? Let’s look at a 10053 trace to see how the cost changes:

I/O COST WITH DEFAULTS: 270,835
[10053] SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for COST_CHECK[COST_CHECK]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: COST_CHECK  Alias: COST_CHECK
    Card: Original: 1000000.000000  Rounded: 1000000  Computed: 1000000.000000  Non Adjusted:1000000.000000
  Scan IO  Cost (Disk) =   270835.000000
  ...
I/O COST EXADATA GATHERED AS ABOVE: 11,860
[10053] SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for COST_CHECK[COST_CHECK]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: COST_CHECK  Alias: COST_CHECK
    Card: Original: 1000000.000000  Rounded: 1000000  Computed: 1000000.000000  Non Adjusted:1000000.000000
  Scan IO  Cost (Disk) =   11860.000000
  ...

So, the I/O cost with “Exadata” system stats is reduced from 270,835 to 11,860. That’s quite a reduction.
Please bear in mind that this is only the Optimizer Cost calculation of a table scan, and not the reality of accessing the data. This is used to decide the access path.

If you gather ‘Exadata’ stats, you need to ensure your Exadata has the capacity to absorb the increased offload workload on the storage cells. For a classic “overnight load, daytime query, dedicated Exadata” setup, this is probably a good thing. Where the storage cells are already being worked hard, this will make things worse.

If you followed the advice of the Exadata salesman and marketing people and already removed your indexes, you should already be employing a performance expert to put (some of) the indexes back on…

How did I calculate the ratios above? Check the Oracle Optimizer System Statistics blog post from last week.

Oracle Table Prefs

A very quick post for me really: some SQL to tell me all of the table prefs for any specific table (all Oracle 19 prefs).
You could always just query DBA_TAB_STAT_PREFS:

select * from user_tab_stat_prefs where table_name = '&&TABLE';

But that only tells you what you have explicitly set for a table, which will override any GLOBAL prefs. The order that prefs are used is TABLE -> GLOBAL -> DEFAULT. If some DBA has changed a global pref, how would you know that it’s affecting this particular table?

Here’s a quick piece of SQL to do that, which formats the output nicely and readably (for me!):

undefine TABLE
prompt Enter Table Name 
prompt &&TABLE

select rpad('ANDV_ALGO_INTERNAL_OBSERVE : ',42)||dbms_stats.get_prefs(pname=>'ANDV_ALGO_INTERNAL_OBSERVE', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('APPROXIMATE_NDV : ',42)||dbms_stats.get_prefs(pname=>'APPROXIMATE_NDV', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('APPROXIMATE_NDV_ALGORITHM : ',42)||dbms_stats.get_prefs(pname=>'APPROXIMATE_NDV_ALGORITHM', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('AUTO_STAT_EXTENSIONS : ',42)||dbms_stats.get_prefs(pname=>'AUTO_STAT_EXTENSIONS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('AUTOSTATS_TARGET : ',42)||dbms_stats.get_prefs(pname=>'AUTOSTATS_TARGET', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('AUTO_TASK_INTERVAL : ',42)||dbms_stats.get_prefs(pname=>'AUTO_TASK_INTERVAL', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('AUTO_TASK_MAX_RUN_TIME : ',42)||dbms_stats.get_prefs(pname=>'AUTO_TASK_MAX_RUN_TIME', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('AUTO_TASK_STATUS : ',42)||dbms_stats.get_prefs(pname=>'AUTO_TASK_STATUS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('CASCADE : ',42)||dbms_stats.get_prefs(pname=>'CASCADE', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('CONCURRENT : ',42)||dbms_stats.get_prefs(pname=>'CONCURRENT', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('COORDINATOR_TRIGGER_SHARD : ',42)||dbms_stats.get_prefs(pname=>'COORDINATOR_TRIGGER_SHARD', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('DEBUG : ',42)||dbms_stats.get_prefs(pname=>'DEBUG', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('DEGREE : ',42)||dbms_stats.get_prefs(pname=>'DEGREE', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('ENABLE_HYBRID_HISTOGRAMS : ',42)||dbms_stats.get_prefs(pname=>'ENABLE_HYBRID_HISTOGRAMS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('ENABLE_TOP_FREQ_HISTOGRAMS : ',42)||dbms_stats.get_prefs(pname=>'ENABLE_TOP_FREQ_HISTOGRAMS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('ESTIMATE_PERCENT : ',42)||dbms_stats.get_prefs(pname=>'ESTIMATE_PERCENT', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('GATHER_AUTO : ',42)||dbms_stats.get_prefs(pname=>'GATHER_AUTO', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('GATHER_SCAN_RATE : ',42)||dbms_stats.get_prefs(pname=>'GATHER_SCAN_RATE', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('GLOBAL_TEMP_TABLE_STATS : ',42)||dbms_stats.get_prefs(pname=>'GLOBAL_TEMP_TABLE_STATS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('GRANULARITY : ',42)||dbms_stats.get_prefs(pname=>'GRANULARITY', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('INCREMENTAL : ',42)||dbms_stats.get_prefs(pname=>'INCREMENTAL', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('INCREMENTAL_INTERNAL_CONTROL : ',42)||dbms_stats.get_prefs(pname=>'INCREMENTAL_INTERNAL_CONTROL', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('INCREMENTAL_LEVEL : ',42)||dbms_stats.get_prefs(pname=>'INCREMENTAL_LEVEL', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('INCREMENTAL_STALENESS : ',42)||dbms_stats.get_prefs(pname=>'INCREMENTAL_STALENESS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('JOB_OVERHEAD : ',42)||dbms_stats.get_prefs(pname=>'JOB_OVERHEAD', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('JOB_OVERHEAD_PERC : ',42)||dbms_stats.get_prefs(pname=>'JOB_OVERHEAD_PERC', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('MAINTAIN_STATISTICS_STATUS : ',42)||dbms_stats.get_prefs(pname=>'MAINTAIN_STATISTICS_STATUS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('METHOD_OPT : ',42)||dbms_stats.get_prefs(pname=>'METHOD_OPT', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('NO_INVALIDATE : ',42)||dbms_stats.get_prefs(pname=>'NO_INVALIDATE', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('OPTIONS : ',42)||dbms_stats.get_prefs(pname=>'OPTIONS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('PREFERENCE_OVERRIDES_PARAMETER : ',42)||dbms_stats.get_prefs(pname=>'PREFERENCE_OVERRIDES_PARAMETER', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('PUBLISH : ',42)||dbms_stats.get_prefs(pname=>'PUBLISH', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('ROOT_TRIGGER_PDB : ',42)||dbms_stats.get_prefs(pname=>'ROOT_TRIGGER_PDB', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('SCAN_RATE : ',42)||dbms_stats.get_prefs(pname=>'SCAN_RATE', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('STALE_PERCENT : ',42)||dbms_stats.get_prefs(pname=>'STALE_PERCENT', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('STAT_CATEGORY : ',42)||dbms_stats.get_prefs(pname=>'STAT_CATEGORY', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('SYS_FLAGS : ',42)||dbms_stats.get_prefs(pname=>'SYS_FLAGS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('TABLE_CACHED_BLOCKS : ',42)||dbms_stats.get_prefs(pname=>'TABLE_CACHED_BLOCKS', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('TRACE : ',42)||dbms_stats.get_prefs(pname=>'TRACE', tabname=>'&&TABLE') prefs_for__&&TABLE FROM dual UNION ALL
select rpad('WAIT_TIME_TO_UPDATE_STATS : ',42)||dbms_stats.get_prefs(pname=>'WAIT_TIME_TO_UPDATE_STATS', tabname=>'&&TABLE') prefs_for__&&TABLE
FROM dual;

Sample Output


PREFS_FOR__INTERVAL_TAB
------------------------
ANDV_ALGO_INTERNAL_OBSERVE :              FALSE
APPROXIMATE_NDV :                         TRUE
APPROXIMATE_NDV_ALGORITHM :               REPEAT OR HYPERLOGLOG
AUTO_STAT_EXTENSIONS :                    OFF
AUTOSTATS_TARGET :                        AUTO
AUTO_TASK_INTERVAL :                      900
AUTO_TASK_MAX_RUN_TIME :                  3600
AUTO_TASK_STATUS :                        OFF
CASCADE :                                 DBMS_STATS.AUTO_CASCADE
CONCURRENT :                              OFF
COORDINATOR_TRIGGER_SHARD :               FALSE
DEBUG :                                   0
DEGREE :                                  NULL
ENABLE_HYBRID_HISTOGRAMS :                3
ENABLE_TOP_FREQ_HISTOGRAMS :              3
ESTIMATE_PERCENT :                        DBMS_STATS.AUTO_SAMPLE_SIZE
GATHER_AUTO :                             AFTER_LOAD
GATHER_SCAN_RATE :                        HADOOP_ONLY
GLOBAL_TEMP_TABLE_STATS :                 SESSION
GRANULARITY :                             AUTO
INCREMENTAL :                             TRUE
INCREMENTAL_INTERNAL_CONTROL :            TRUE
INCREMENTAL_LEVEL :                       PARTITION
INCREMENTAL_STALENESS :                   USE_STALE_PERCENT,USE_LOCKED_STATS
JOB_OVERHEAD :                            -1
JOB_OVERHEAD_PERC :                       1
MAINTAIN_STATISTICS_STATUS :              FALSE
METHOD_OPT :                              FOR ALL COLUMNS SIZE 50
NO_INVALIDATE :                           DBMS_STATS.AUTO_INVALIDATE
OPTIONS :                                 GATHER
PREFERENCE_OVERRIDES_PARAMETER :          FALSE
PUBLISH :                                 TRUE
ROOT_TRIGGER_PDB :                        FALSE
SCAN_RATE :                               0
STALE_PERCENT :                           10
STAT_CATEGORY :                           OBJECT_STATS, REALTIME_STATS
SYS_FLAGS :                               1
TABLE_CACHED_BLOCKS :                     1
TRACE :                                   0
WAIT_TIME_TO_UPDATE_STATS :               15

Stats Collection Time Anomaly

Jonathan Lewis (@JLOracle) recently published a short post about Stats Collection Time, talking about the view dba_optstat_operations (and dba_optstat_operation_tasks), which reminded me about (what I regard as) an anomaly in the output in the NOTES column in Oracle 12C.

I won’t repeat why it’s useful to check these views, as Jonathan’s note and @MDWidlake’s comment on it should give you all you need to know.

The DBA_OPTSTAT_OPERATIONS.NOTES column contains the parameters passed into the DBMS_STATS command, so you know what was done. It also reports the DEFAULTs used by the DBMS_STATS job. Well, it does if you call DBMS_STATS explicitly, but the standard overnight auto job just says “default”. Why doesn’t it expand on that the way the explicit call does? If the default was changed between runs, you may end up with very different results but with no indication why. Am I missing something?
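
To pull this information yourself, a query along these lines against the history view works (NOTES can be long, so set long first):

set long 100000
select operation, target, start_time, end_time, status, job_name, notes
from   dba_optstat_operations
order  by start_time desc;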

The following two rows of data show the output from each run. Note that the DEFAULT for METHOD_OPT in this database had been changed from “FOR ALL COLUMNS SIZE AUTO” to “FOR ALL COLUMNS SIZE REPEAT”** but was not explicitly passed in for either run.

DBMS_STATS.GATHER_SCHEMA_STATS – decodes the DEFAULTs

OPERATION : gather_schema_stats            
TARGET    : MYSCHEMA     
START_TIME: 15-SEP-16 07.04.47 
END_TIME  : 15-SEP-16 07.09.02 
STATUS    : COMPLETED                                   
JOB_NAME  : 
NOTES     : <params>
            <param name="block_sample" val="FALSE"/>
            <param name="cascade" val="NULL"/>
            <param name="concurrent" val="FALSE"/>
            <param name="degree" val="NULL"/>
            <param name="estimate_percent" val="DBMS_STATS.AUTO_SAMPLE_SIZE"/>
            <param name="force" val="FALSE"/>
            <param name="gather_fixed" val="FALSE"/>
            <param name="gather_temp" val="FALSE"/>
            <param name="granularity" val="AUTO"/>
            <param name="method_opt" val="FOR ALL COLUMNS SIZE REPEAT"/>
            <param name="no_invalidate" val="NULL"/>
            <param name="options" val="GATHER"/>
            <param name="ownname" val="MYSCHEMA"/>
            <param name="reporting_mode" val="FALSE"/>
            <param name="statid" val=""/>
            <param name="statown" val=""/>
            <param name="stattab" val=""/>
            <param name="stattype" val="DATA"/>
            </params>

Autotask Overnight Gather – doesn’t decode the DEFAULTs

OPERATION : gather_database_stats (auto)   
TARGET    : AUTO       
START_TIME: 15-SEP-16 22.01.20 
END_TIME  : 15-SEP-16 22.38.40 
STATUS    : COMPLETED            
JOB_NAME  : ORA$AT_OS_OPT_SY_1212  
NOTES     : <params>
            <param name="block_sample" val="FALSE"/>
            <param name="cascade" val="NULL"/>
            <param name="concurrent" val="FALSE"/>
            <param name="degree" val="DEFAULT_DEGREE_VALUE"/>
            <param name="estimate_percent" val="DEFAULT_ESTIMATE_PERCENT"/>
            <param name="granularity" val="DEFAULT_GRANULARITY"/>
            <param name="method_opt" val="DEFAULT_METHOD_OPT"/>
            <param name="no_invalidate" val="DBMS_STATS.AUTO_INVALIDATE"/>
            <param name="reporting_mode" val="FALSE"/>
            <param name="stattype" val="DATA"/>
            </params>

 

**You should control the histograms that you need to maintain your schema stats as you would like Oracle to see them. The Oracle default approach of lots of histograms can be costly to maintain and store, and any stats which use adaptive sampling (all histograms prior to 12C, and Height-Balanced/Hybrid in 12C onwards) carry a risk, especially in OLTP systems.
FOR ALL COLUMNS SIZE REPEAT was useful until Oracle 12, when a change in the algorithm made it dangerous, and it should NOT be used. You should control your stats using global and table preferences (DBMS_STATS.SET_GLOBAL_PREFS / SET_TABLE_PREFS).

 

Pre-creating Interval Partitions

One of the major problems with interval-based partitioning is the statistics. Partitions appear dynamically, as needed, based upon data being inserted or updated, and any partition magically appearing isn’t going to have any statistics.

This is generally a stability issue, as you will, at best, be using dynamic statistics for your optimizations. So how do we deal with it? My preferred method is to pre-create the partitions and copy statistics from a good partition into the pre-created partition. But how do we get the partitions to appear? Here are two options:

  1. Insert a row with the partition key for the partition we wish to create, and roll back. This can be tricky, especially with tables containing many NOT NULL columns, and is subject to failure as the table evolves.
  2. Lock the relevant partition in share mode using the command LOCK TABLE <table> PARTITION FOR (<partition-key>) IN SHARE MODE;

    This will place a shared lock on the non-existent partition, which Oracle will create so it can lock it. A much less messy solution, and not one I had thought of until shown the light by Dan Jankowski.

So does option 2 work? Here’s a quick example:

10:51:55 NEIL @ ORCL01 > CREATE TABLE interval_table (id NUMBER, created_date DATE)
10:51:55   2             PARTITION BY RANGE (created_date) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
10:51:55   3           ( PARTITION part_01 values LESS THAN (TO_DATE('01-JAN-2015','DD-MON-YYYY')))
10:51:55   4  /
Table created.

10:51:55 NEIL @ ORCL01 > select table_name, partition_name,high_value from user_tab_partitions order by 1,2;
TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE
------------------------------ ------------------------------ --------------------------------------------------------------------------------
INTERVAL_TABLE                 PART_01                        TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA


use a shared lock to generate a new partition

10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-JAN-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.

10:51:55 NEIL @ ORCL01 > select table_name, partition_name,high_value from user_tab_partitions order by 1,2;
TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE
------------------------------ ------------------------------ --------------------------------------------------------------------------------
INTERVAL_TABLE                 PART_01                        TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P647                       TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-FEB-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.
10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-MAR-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.
10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-APR-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.
10:51:55 NEIL @ ORCL01 > select table_name, partition_name,high_value from user_tab_partitions order by 1,2;
TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE
------------------------------ ------------------------------ --------------------------------------------------------------------------------
INTERVAL_TABLE                 PART_01                        TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P647                       TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P648                       TO_DATE(' 2016-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P649                       TO_DATE(' 2016-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P650                       TO_DATE(' 2016-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

10:51:55 NEIL @ ORCL01 > -- and release the locks... the rollback doesn't rollback the creation of the new partitions.
10:51:55 NEIL @ ORCL01 > rollback;
Rollback complete.

So now what? To get the stats right, I follow this rule set:

Firstly, lock the table stats when you create the table, and have a dedicated, focused stats job. This stops the general stats job from getting in the way of this important partitioned table.
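
Locking the stats is a one-liner:

exec dbms_stats.lock_table_stats('<schema>','<table>');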

  • Create a new partition.
  • Copy stats using DBMS_STATS.COPY_TABLE_STATS from your fullest partition (with FORCE=TRUE to override the locked stats), as sketched after this list. Always try to pretend you have more data than you really have when faking it: it’s safer, as an optimized plan expecting a larger data set and processing a smaller one tends to work much better than stats assuming a small data set and being surprised by lots of data. Consider using SCALE_FACTOR if you have a growing dataset. Don’t rely upon Optimizer magic, such as Adaptive Execution Plans, to dig you out of a hole.
  • Once a partition becomes “old” (like a no-longer-used date-range partition), gather actual stats to get accurate stats for that partition. That may even become your new baseline stats copy partition. You will possibly never need to gather stats again for that partition.
  • Be careful if you are copying stats when you have an object-level change. For example, if you add a new index, you need to re-copy stats post-change to any pre-created partitions. Empty pre-created partitions will have accurate (i.e. empty) stats, and that’s really not what you want.
  • Repeat as often as you pre-create a new partition.
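
Here’s a sketch of the copy step from the rule set above, using the partition names generated in the earlier example (SYS_P647 as the full source, SYS_P650 as the empty pre-created target; substitute your own):

begin
  dbms_stats.copy_table_stats(
    ownname      => '<schema>',
    tabname      => 'INTERVAL_TABLE',
    srcpartname  => 'SYS_P647',   -- your fullest, representative partition
    dstpartname  => 'SYS_P650',   -- the empty, pre-created partition
    scale_factor => 1,            -- increase if the new partition is expected to hold more data
    force        => TRUE);        -- override the locked table stats
end;
/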