Exadata System Statistics

Following on from last week's Oracle Optimizer System Statistics post, I thought it worthwhile adding a note about gathering system statistics in Exadata mode.

exec dbms_stats.gather_system_stats('EXADATA');

So what is this, and why would you choose to do it? First of all, these are not workload system statistics (which I believe you should never gather without extraordinary proof of their effectiveness – all Oracle official documentation and MOS notes about workload stats should point to this blog post now, explaining why: https://blogs.oracle.com/optimizer/should-you-gather-system-statistics). Workload stats measure a number of I/O and CPU metrics. Exadata stats are a little bit of measurement (the I/O transfer speed – IOTFRSPEED) and the hard-setting of the multi-block read count (MBRC). How are these set?
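
If you want to see exactly what an 'EXADATA' gather changes, a minimal sketch (on a test system, with suitable privileges) is to snapshot SYS.AUX_STATS$ before and after; DBMS_STATS.DELETE_SYSTEM_STATS puts everything back to the defaults:

-- snapshot the current system statistics
SELECT sname, pname, pval1 FROM sys.aux_stats$ ORDER BY 1,2;

-- gather 'Exadata' system statistics (test first!)
exec dbms_stats.gather_system_stats('EXADATA');

-- check what changed (expect IOTFRSPEED and MBRC to now be populated)
SELECT sname, pname, pval1 FROM sys.aux_stats$ ORDER BY 1,2;

-- revert to the default noworkload statistics if you need to back out
exec dbms_stats.delete_system_stats;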

The IOTFRSPEED is the rate at which an Oracle database can read data in a single read request. It is measured and set (subject to bug 17501565.8 not getting in the way – pre-12.1 – and leaving it at the default). The default is 4096, but after measuring and setting it will increase significantly. For the worked example below, let's say the speed was measured at 200,000 (about 200MB/s, which is a good number for an Exadata).

The MBRC is set by copying in the db_file_multiblock_read_count from the initialization parameters. By default this is not set explicitly and relies upon two hidden parameters. If your block size is 8K, it will probably be 128, and if your block size is 16K it will probably be 64. If your block size is any other size, I hope you have a well-tested and proven reason for your edge-case configuration.
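
A quick way to check whether db_file_multiblock_read_count has been set explicitly on your system is a sketch like this against v$parameter (ISDEFAULT and ISMODIFIED give it away):

-- if ISDEFAULT='TRUE' and ISMODIFIED='FALSE', the parameter has not been set
-- explicitly and the optimizer falls back to the hidden defaults
SELECT name, value, isdefault, ismodified
  FROM v$parameter
 WHERE name IN ('db_file_multiblock_read_count', 'db_block_size');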

Each of these settings will change the balance the optimizer strikes between single block reads (to read index blocks, and to read table rows by rowid) and full table scans. A higher MBRC and IOTFRSPEED mean table scans become more attractive to the optimizer, so they will occur more frequently. What does an Exadata do very well? Offload table scans to storage cells.

DEFAULT, 8K Block Size
MBRC=8 (db_file_multiblock_read_count is not set, so use _db_file_optimizer_read_count)
IOTFRSPEED=4096
Calculated Ratio 0.271, or 3.69 multiblocks for every single block.
DEFAULT, 16K Block Size
MBRC=8 (db_file_multiblock_read_count is not set, so use _db_file_optimizer_read_count)
IOTFRSPEED=4096
Calculated Ratio 0.375, or 2.67 multiblocks for every single block.

16K block sizes mean indexes are slightly more likely to be used, as we get fewer blocks per scan.
DEFAULT, 8K Block Size, db_file_multiblock_read_count is set explicitly
MBRC=128 (db_file_multiblock_read_count is explicitly set, so use the explicit 128)
IOTFRSPEED=4096
Calculated Ratio 0.173, or 5.77 multiblocks for every single block.

Table scans are a little more attractive than with the defaults. You get 5.77 blocks for the cost of a single block read instead of 3.69.
EXADATA GATHERED, 8K Block Size, db_file_multiblock_read_count NOT set explicitly
MBRC=128 (db_file_multiblock_read_count is not set, so set it to the value of _db_file_exec_read_count)
IOTFRSPEED=200000 - this was measured and set by the 'exadata' gather.
Calculated Ratio 0.012, or 84.32 multiblocks for every single block.

For an index to be used, it is going to have to be much more efficient than a table scan. Pulling a few rows out of unique indexes will continue to work well, but any kind of predicate which returns a set of data will be pushed towards a FTS (or IFFS), and therefore the Exadata will offload it to the storage cells.
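
If you want to reproduce the ratios above, here's a back-of-envelope query doing the same arithmetic as last week's System Statistics post (a sketch only, assuming IOSEEKTIM=10, the block sizes and IOTFRSPEED values from the scenarios above, and my own column aliases):

-- multi-block cost-per-block = 1/MBRC * MREADTIM/SREADTIM
-- where SREADTIM = IOSEEKTIM + block_size/IOTFRSPEED
--   and MREADTIM = IOSEEKTIM + block_size*MBRC/IOTFRSPEED
SELECT round(1/8   * (10 + 8192*8/4096)     / (10 + 8192/4096),   4) default_8k,
       round(1/8   * (10 + 16384*8/4096)    / (10 + 16384/4096),  4) default_16k,
       round(1/128 * (10 + 8192*128/4096)   / (10 + 8192/4096),   4) mbrc_128_8k,
       round(1/128 * (10 + 8192*128/200000) / (10 + 8192/200000), 4) exadata_8k
  FROM dual;
-- 0.2708      0.375       0.1732      0.0119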

So how does this affect optimizer costing? Looking at a 10053 trace to see how the cost is changed:

I/O COST WITH DEFAULTS: 270,835
[10053] SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for COST_CHECK[COST_CHECK]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: COST_CHECK  Alias: COST_CHECK
    Card: Original: 1000000.000000  Rounded: 1000000  Computed: 1000000.000000  Non Adjusted:1000000.000000
  Scan IO  Cost (Disk) =   270835.000000
  ...
I/O COST EXADATA GATHERED AS ABOVE: 11,860
[10053] SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for COST_CHECK[COST_CHECK]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: COST_CHECK  Alias: COST_CHECK
    Card: Original: 1000000.000000  Rounded: 1000000  Computed: 1000000.000000  Non Adjusted:1000000.000000
  Scan IO  Cost (Disk) =   11860.000000
  ...

So, the I/O cost with “Exadata” system stats is reduced from 270,835 to 11,860. That’s quite a reduction.
Please bear in mind that this is only the Optimizer Cost calculation of a table scan, and not the reality of accessing the data. This is used to decide the access path.
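
If you want to see these numbers for your own tables, one way (a sketch, for a test session) is to switch on the optimizer trace, optimize the statement, and pick the trace file up from the diagnostic destination. COST_CHECK here is just the example table from the traces above:

-- enable the CBO trace (event 10053) for this session
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- optimizing the statement writes the trace; EXPLAIN PLAN is enough
EXPLAIN PLAN FOR SELECT count(*) FROM cost_check;

-- switch the trace back off
ALTER SESSION SET EVENTS '10053 trace name context off';

-- where to find the trace file
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';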

If you gather ‘Exadata’ stats, you need to ensure you have the capacity on your Exadata to absorb the increased workload on the storage cells. For a classic “overnight load, daytime query, dedicated Exadata”, this is probably a good thing. Where the storage cells are already being worked hard, this will make things worse.

If you followed the advice of the Exadata salesman and marketing people and already removed your indexes, you should already be employing a performance expert to put (some of) the indexes back on…

How did I calculate the ratios above? Check the Oracle Optimizer System Statistics blog post from last week…

Oracle Optimizer System Statistics

System Statistics Are a Little Complex


Oh System statistics! Should we gather them? Should we not?
What do they mean? What are they doing to the optimizer?

[tl;dr – in almost all cases, DON’T GATHER SYSTEM STATISTICS. DON’T SET MBRC. LET THEM DEFAULT!]

First you need to be armed with a piece of information. When Oracle optimizes your SQL, it produces a COST to compare each SQL execution plan. Exactly what is this COST? It’s not seconds. It’s not quite the amount of I/O required. It’s not an arbitrary figure either. Oracle Optimizer Cost is the cost of getting your data in terms of SINGLE BLOCK READS. Remember this for later.

Let's have a look at the system stats defaults:

SELECT sname,pname,pval1 FROM SYS.AUX_STATS$ ORDER BY 1,2;
SNAME                PNAME                               PVAL1
-------------------- ------------------------------ ----------
SYSSTATS_INFO        DSTART
SYSSTATS_INFO        DSTOP
SYSSTATS_INFO        FLAGS                                   0
SYSSTATS_INFO        STATUS
SYSSTATS_MAIN        CPUSPEED
SYSSTATS_MAIN        CPUSPEEDNW                           2991 (this will vary)
SYSSTATS_MAIN        IOSEEKTIM                              10
SYSSTATS_MAIN        IOTFRSPEED                           4096
SYSSTATS_MAIN        MAXTHR
SYSSTATS_MAIN        MBRC
SYSSTATS_MAIN        MREADTIM
SYSSTATS_MAIN        SLAVETHR
SYSSTATS_MAIN        SREADTIM

What we are looking for here are the three metrics MBRC, MREADTIM and SREADTIM. As we can see, they are not set. By default, we need to calculate them, or know where to find the default values. Once we have those three metrics, we can calculate the RATIO used by the optimizer to convert MULTIBLOCK READS into a cost metric of SINGLE BLOCK READS.

The CPU speed is used to calculate the COST of the query in terms of CPU required. This is a small percentage of the query cost, but it may help decide which plan is chosen. This is also converted from CPU cost into a cost metric of SINGLE BLOCK READS.

Let's get the values for MBRC, MREADTIM and SREADTIM.

MBRC is easy. If it is not explicitly set, Oracle uses the init.ora parameter “db_file_multiblock_read_count”. However, by default (and also my recommendation) this should not be set, meaning Oracle will use the hidden parameter “_db_file_optimizer_read_count” to cost your queries. This defaults to 8. [Note: this is not the value used at execution time. Oracle attempts to do 1MB reads, and uses the value in “_db_file_exec_read_count” to control the multiblock reads at execution time. For an 8K block size, this is set to 128.]
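
If you want to check those two hidden parameters on your own system (this needs a SYS session, as it reads the x$ fixed tables – a sketch using the same lookup as the script at the end of this post):

-- costing value (defaults to 8) and execution-time value
-- (defaults to 128 for an 8K block size, i.e. 1MB reads)
SELECT a.ksppinm parameter, b.ksppstvl value
  FROM x$ksppi a, x$ksppsv b
 WHERE a.indx = b.indx
   AND a.ksppinm IN ('_db_file_optimizer_read_count', '_db_file_exec_read_count');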

SREADTIM and MREADTIM are calculations based upon information we now have:
SREADTIM = IOSEEKTIM + db_block_size        / IOTFRSPEED = 10+(8192 /4096) = 12
MREADTIM = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED = 10+(8192*8/4096) = 26

Right! Now we have more information and can calculate a ratio: the multi block cost-per-block. This will allow us to take the number of blocks in (for example) a table scan [the DBA_TAB_STATISTICS.BLOCKS statistic] and convert it to the cost metric of SINGLE BLOCK READS, meaning we can compare (for example) a FULL TABLE SCAN’s I/O directly with the I/O required for an Index Range Scan and Table Lookup.

multi-block cost-per-block = 1/MBRC * MREADTIM/SREADTIM = 1/8 * 26/12 = 0.270833
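
You can sanity-check that ratio, and the I/O cost it predicts for the 1,000,000 block table scan below, straight from SQL (a sketch of the same arithmetic):

-- 1/MBRC * MREADTIM/SREADTIM, then applied to a 1,000,000 block table scan
SELECT round(1/8 * 26/12, 6)          AS multi_block_cost_per_block,
       round(1000000 * (1/8 * 26/12)) AS predicted_io_cost
  FROM dual;
-- 0.270833   270833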

If we pull some data from a 10053 trace, where I have a table scan of a table containing 1,000,000 blocks, we should see the 1,000,000 blocks being converted to 270,833 “single block read blocks” for COST purposes.

[10053] SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for COST_CHECK[COST_CHECK]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: COST_CHECK  Alias: COST_CHECK
    Card: Original: 1000000.000000  Rounded: 1000000  Computed: 1000000.000000  Non Adjusted:1000000.000000
  Scan IO  Cost (Disk) =   270835.000000
  Scan CPU Cost (Disk) =   7411440000.000001
  Total Scan IO  Cost  =   270835.000000 (scan (Disk))
                       =   270835.000000
  Total Scan CPU  Cost =   7411440000.000001 (scan (Disk))
                       =   7411440000.000001
  Access Path: TableScan
    Cost:  271041.492812  Resp: 271041.492812  Degree: 0
      Cost_io: 270835.000000  Cost_cpu: 7411440000
      Resp_io: 270835.000000  Resp_cpu: 7411440000
  Best:: AccessPath: TableScan
         Cost: 271041.492812  Degree: 1  Resp: 271041.492812  Card: 1000000.000000  Bytes: 0.000000

Well we were close! There’s a fudge factor in play here, but 270,835 is pretty much 270,833 🙂

We can also work out the CPU element of the COST.
Scan IO  Cost (Disk) =   270835.000000
Cost: 271041.492812
so CPU cost must be 271041.492812 – 270835.000000 = 206.492812

Scan CPU Cost (Disk)=7411440000.000001 so doing the maths to convert to units of “single block read time”…

CPUSPEEDNW = 2,991 (MHz, which we convert to 2,991,000 kHz)

= 7,411,440,000 / 2,991,000 / SREADTIM
= 2,477.913741223671013039 / 12
= 206.49281176863925108659311267135 cost of CPU in units of SREADTIM

So now you can see how your system statistics can fundamentally change the optimizer by changing the ratio of multiblock-reads to single block reads.

You want to play with this yourself and see what happens? Of course you do…

Here’s an Excel spreadsheet where you can plug in your numbers and see what happens to the ratio between single and multi block reads, to see how system stats influence the optimizer. By plugging in different numbers, you will see how complex the interactions are – so what are the ideal numbers? Use the DEFAULTS in almost all cases** [WARNING – don’t go changing your system stats based on this statement. You might experience a significant amount of plan changes, some of which may be very, very bad!]:


Also, this is my version of a script by Franck Pachot, based on work by Chris Antognini, which you can run against your system:

select pname,pval1,calculated,formula from sys.aux_stats$ where sname='SYSSTATS_MAIN'
model
reference sga on (
select 'Database Buffers' name,sum(bytes) value from v$sgastat where name in ('shared_io_pool','buffer_cache')
) dimension by (name) measures(value)
reference parameter on (
select name,decode(type,3,to_number(value)) value from v$parameter where name='db_file_multiblock_read_count' and ismodified!='FALSE'
union all
select name,decode(type,3,to_number(value)) value from v$parameter where name='sessions'
union all
select name,decode(type,3,to_number(value)) value from v$parameter where name='db_block_size'
union all
SELECT a.ksppinm name, to_number(b.ksppstvl) value FROM x$ksppi a, x$ksppsv b WHERE a.indx=b.indx AND ksppinm like '_db_file_optimizer_read_count'
)
dimension by (name) measures(value)
partition by (sname) dimension by (pname) measures (pval1,pval2,cast(null as number) as calculated,cast(null as varchar2(120)) as formula) rules(
calculated['MBRC']=coalesce(pval1['MBRC'],parameter.value['db_file_multiblock_read_count'],parameter.value['_db_file_optimizer_read_count'],8),
calculated['MREADTIM']=coalesce(pval1['MREADTIM'],pval1['IOSEEKTIM'] + (parameter.value['db_block_size'] * calculated['MBRC'] ) / pval1['IOTFRSPEED']),
calculated['SREADTIM']=coalesce(pval1['SREADTIM'],pval1['IOSEEKTIM'] + parameter.value['db_block_size'] / pval1['IOTFRSPEED']),
calculated['_multi block Cost per block']=round(1/calculated['MBRC']*calculated['MREADTIM']/calculated['SREADTIM'],4),
calculated['_single block Cost per block']=1,
formula['MBRC']=case when pval1['MBRC'] is not null then 'MBRC' when parameter.value['db_file_multiblock_read_count'] is not null then 'db_file_multiblock_read_count' when parameter.value['_db_file_optimizer_read_count'] is not null then '_db_file_optimizer_read_count (db_file_multiblock_read_count not set, which is good!)' else '= not sure so used 8' end,
formula['MREADTIM']=case when pval1['MREADTIM'] is null then '= IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED' end||' = '||pval1['IOSEEKTIM']||'+('||parameter.value['db_block_size']||'*'||calculated['MBRC']||'/'||pval1['IOTFRSPEED']||')',
formula['SREADTIM']=case when pval1['SREADTIM'] is null then '= IOSEEKTIM + db_block_size / IOTFRSPEED' end||' = '||pval1['IOSEEKTIM']||'+('||parameter.value['db_block_size']||'/'||pval1['IOTFRSPEED']||')',
formula['_multi block Cost per block']='= 1/MBRC * MREADTIM/SREADTIM = 1/'||calculated['MBRC']||' * '||calculated['MREADTIM']||'/'||calculated['SREADTIM'],
calculated['_maximum mbrc']=sga.value['Database Buffers']/(parameter.value['db_block_size']*parameter.value['sessions']),
formula['_maximum mbrc']='= buffer cache blocks/sessions (small cache limiter) = ' || sga.value['Database Buffers']/parameter.value['db_block_size']||'/'||parameter.value['sessions'],
formula['_single block Cost per block']='relative to the multi block cost per block. Always 1!',
formula['CPUSPEED']='overrides CPUSPEEDNW when set',
formula['CPUSPEEDNW']='CPU speed Mhz - non workload',
formula['IOSEEKTIM']='IO seek time in ms',
formula['IOTFRSPEED']='IO transfer speed in KB/s',
formula['MAXTHR']='Maximum IO system throughput',
formula['SLAVETHR']='average parallel slave IO throughput'
) order by 1;

** There is a case for gathering ‘Exadata’ system stats. Increasing the IOTFRSPEED to 200,000 and changing the MBRC to (probably) 128 or 64 will *really* change the ratios, forcing a lot of full table scans down onto the storage cells, instead of using mediocre indexes. This should be considered (and thoroughly tested) if you have a DW on a dedicated Exadata. For a little more on Exadata stats, check this blog post.