Call for Papers – UKOUG 2017 SIGs


The time has come for me to plan the upcoming 2017 UKOUG Special Interest Groups.

I am chairman of the RAC, Cloud, Infrastructure and Availability (RAC-CIA) SIG and I’m after presentations for the 3 joint SIGs we will be putting on with the RDBMS SIG, plus the stand-alone SIG we will be holding in the autumn.

A SIG is a single-day, one-or-two-stream conference which we take around the UK to make it as accessible as possible for UKOUG members and non-members alike (NOTE: there is a charge for non-members).

This year we are looking at running:

Thursday 23rd February – RDBMS + RAC, London 2 streams
Thursday 27th April – TBC: Northern Day – RDBMS + RAC + ? , Manchester (probably)
Thursday 15th June – 12.2 Themed event – TVP Reading – 2 stream event RDBMS + RAC
Wednesday 27th September – RAC-CIA SIG, London 1 stream

If you are interested in speaking, you can submit an abstract here:

or you can email me directly:

Be quick! The next event is in 10 weeks’ time…

UKOUG Tech 16


It is the UKOUG Tech16 (and App16 & JDE 16) conference in just a few days.

As well as giving 2 presentations myself – the first about Disaster Recovery, Far Sync and Fast Start Failover, and the second about “RAC for beginners” – I will also be arguing in the DBA v Dev round table at the end of Monday, about what I would like to do with most Developers and the best way for me to achieve it without negative consequences.

Martin Widlake and I will be MC-ing the Optimizer round table, with a panel of experts consisting of Jonathan Lewis, Maria Colgan, Nigel Bayliss and Chris Antognini answering your questions (send questions in advance to , talk to me on the day, or stick your hand in the air in the session).

If that panel doesn’t persuade your management that you need to be at this conference, perhaps the rest of the speakers will.

We at the UKOUG work long and hard to persuade the best speakers in the world to turn up and talk to us. For Tech 16, we have managed to gather:

  • Jonathan Lewis
  • Maria Colgan
  • Connor McDonald
  • Graham Wood
  • James Morle
  • Martin Bach
  • Pete Finigan
  • Chris Antognini
  • Kamil Stawiarski
  • Dom Giles
  • Lothar Flatz
  • Mike Dietrich
  • Uwe Hesse
  • Tim Hall
  • Ludo Caldara
  • Frits Hoogland
  • Martin Widlake

60 ACEs! 60! In Tech alone….. so many awesome guys and girls to teach us stuff. My only sadness is that in EVERY talk slot there are more than two talks that I want to see (yep – at least 3 per hour) and I can’t see them all :-(


OTN Appreciation Day : FlashBack Query

I love OTN, and I love Flashback query. It is a feature looking for an application use-case, but it really can dig you out of a major problem, recovering data you have recently deleted.

Here’s a silly example of the power:

select * from important_data

NAME                 PHONE_NUMBER
-------------------- ------------------------------
Partner A            555-1234
Partner B            555-5678

The choice is made:
update important_data set name = 'Spouse' where name = 'Partner B'

1 row updated.

And the redundant data removed:
delete from important_data where name = 'Partner A'

1 row deleted.

And all is good.
select * from important_data

NAME                 PHONE_NUMBER
-------------------- ------------------------------
Spouse               555-5678

Wait 7... years?
host sleep 7

And the data is missing something?

NAME                 PHONE_NUMBER
-------------------- ------------------------------
Spouse               555-5678

Perhaps we need to reconstruct the deleted data using Flashback Query.
What did the table look like before the delete?

select * from important_data AS OF TIMESTAMP (SYSTIMESTAMP - interval '7' second)

NAME                 PHONE_NUMBER
-------------------- ------------------------------
Partner A            555-1234
Partner B            555-5678

And we can use Flashback to re-create the data.
insert into important_data (name,phone_number)
(select 'Hell-eau...', phone_number from important_data AS OF TIMESTAMP (SYSTIMESTAMP - interval '7' second) where name = 'Partner A')

1 row created.

And the data is correct again.

NAME                 PHONE_NUMBER
-------------------- ------------------------------
Hell-eau...          555-1234
Spouse               555-5678
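
Flashback can also show every intermediate version of a row, not just the table at a single point in time. A minimal sketch using the same table as above (VERSIONS BETWEEN relies on the undo data still being available, so this only works within your undo retention):

```sql
-- Show the history of each row, including the update and delete above.
-- VERSIONS_OPERATION is a pseudocolumn: I(nsert), U(pdate) or D(elete).
select versions_operation, name, phone_number
from   important_data
       versions between timestamp (systimestamp - interval '7' second)
                    and systimestamp;
```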


Stats Collection Time Anomaly

Jonathan Lewis (@JLOracle) recently published a short post about Stats Collection Time, talking about the view dba_optstat_operations (and dba_optstat_operation_tasks), which reminded me about (what I regard as) an anomaly in the output in the NOTES column in Oracle 12c.

I won’t repeat why it’s useful to check these views, as Jonathan’s note and @MDWidlake’s comment here should give you all you need to know.

The DBA_OPTSTAT_OPERATIONS.NOTES column contains the parameters passed into the DBMS_STATS command, so you know what was done. It also reports the DEFAULT used by the DBMS_STATS job. Well, it does if you call DBMS_STATS explicitly, but the standard overnight auto job just says “default”. Why doesn’t it expand on that the way the explicit call does? If the default was changed between runs, you may end up with very different results but with no indication why. Am I missing something?
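
To see this for yourself, you can query the operations view directly (view and column names as in 12c; the XML in NOTES is simply stored as text):

```sql
-- Compare the NOTES recorded for an explicit DBMS_STATS call
-- against those recorded by the overnight auto task.
select operation, start_time, status, notes
from   dba_optstat_operations
where  start_time > systimestamp - interval '1' day
order  by start_time;
```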

The following 2 rows of data show the output from each run. Note that the DEFAULT for METHOD_OPT in this database has been changed from “FOR ALL COLUMNS SIZE AUTO” to “FOR ALL COLUMNS SIZE REPEAT”** but was not explicitly passed in for either run.


OPERATION : gather_schema_stats            
START_TIME: 15-SEP-16 07.04.47 
END_TIME  : 15-SEP-16 07.09.02 
STATUS    : COMPLETED                                   
NOTES     : <params>
            <param name="block_sample" val="FALSE"/>
            <param name="cascade" val="NULL"/>
            <param name="concurrent" val="FALSE"/>
            <param name="degree" val="NULL"/>
            <param name="estimate_percent" val="DBMS_STATS.AUTO_SAMPLE_SIZE"/>
            <param name="force" val="FALSE"/>
            <param name="gather_fixed" val="FALSE"/>
            <param name="gather_temp" val="FALSE"/>
            <param name="granularity" val="AUTO"/>
            <param name="method_opt" val="FOR ALL COLUMNS SIZE REPEAT"/>
            <param name="no_invalidate" val="NULL"/>
            <param name="options" val="GATHER"/>
            <param name="ownname" val="MYSCHEMA"/>
            <param name="reporting_mode" val="FALSE"/>
            <param name="statid" val=""/>
            <param name="statown" val=""/>
            <param name="stattab" val=""/>
            <param name="stattype" val="DATA"/>
            </params>

Autotask Overnight Gather – doesn’t decode the DEFAULTs

OPERATION : gather_database_stats (auto)   
TARGET    : AUTO       
START_TIME: 15-SEP-16 22.01.20 
END_TIME  : 15-SEP-16 22.38.40 
STATUS    : COMPLETED            
NOTES     : <params>
            <param name="block_sample" val="FALSE"/>
            <param name="cascade" val="NULL"/>
            <param name="concurrent" val="FALSE"/>
            <param name="degree" val="DEFAULT_DEGREE_VALUE"/>
            <param name="estimate_percent" val="DEFAULT_ESTIMATE_PERCENT"/>
            <param name="granularity" val="DEFAULT_GRANULARITY"/>
            <param name="method_opt" val="DEFAULT_METHOD_OPT"/>
            <param name="no_invalidate" val="DBMS_STATS.AUTO_INVALIDATE"/>
            <param name="reporting_mode" val="FALSE"/>
            <param name="stattype" val="DATA"/>
            </params>




**As it should be in EVERY ORACLE DATABASE EVER from the start, to allow you to control the histograms that you need, and need to maintain, on your schema. The Oracle default approach of “everything is skewed, thousands of histograms everywhere please” is particularly painful for OLTP databases using bind variables. I’m sure some of Oracle’s Adaptive Optimization is there to work around the bad things that happen under this particular scenario.
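
For reference, this is how the database-wide default can be changed, using the standard DBMS_STATS preferences API (run as a suitably privileged user; a sketch, not a recommendation to change it blindly):

```sql
-- Change the default METHOD_OPT so histograms are only maintained
-- where they already exist (SIZE REPEAT):
exec dbms_stats.set_global_prefs('METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT')

-- Confirm the current default:
select dbms_stats.get_prefs('METHOD_OPT') from dual;
```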


DOAG Conference + Exhibition

Another quick post to let you know I will be presenting at the German Oracle User Group Conference [DOAG], which runs from 14th-18th of November 2016.

This is a huge conference and exhibition over 4 days, with many tracks and a vast array of great speakers.

I hope to see you in Nuremberg! It will be good. Oh, yes. Es wird gut sein



POUG 2-day event

Just a quick post to let you know I will be speaking, along with a lot of other Oracle ACEs and other excellent speakers, at the Pint with/Polish Oracle User Group (Chapter IV), taking place on 7th/8th October in Warsaw. Click on the Buy Ticket link to see some of the other speakers.

Hope to see you there! Should be fabulous.



Accessing STATUS columns efficiently

A frequently recurring design problem with relational databases is locating unprocessed rows in a large table, so we know which rows of data are still to be processed.

The problem with a STATUS column is that it generally has low cardinality; there are probably only a handful of distinct values [(C)omplete, (E)rror, (U)nprocessed or something like that]. Most records will be (C)omplete. This makes STATUS a poor candidate for standard B-tree indexing. In a high-throughput OLTP database, using bitmap indexes is probably not an option due to concurrency.

[Aside: When coding flag columns in Oracle, ALWAYS use a VARCHAR2(1 CHAR) {or CHAR(1 CHAR) if you prefer, but a CHAR is a VARCHAR2 under the covers and occupies the same number of bytes}. This is in preference to a NUMBER(1), which occupies more bytes for a “1” than a “0”, so when you update it you run the risk of row migration, chained rows and a performance hit. Frequently, ORMs like Hibernate code for NUMBER by default. Override this!]
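
You can see the storage difference for yourself with VSIZE, which reports the internal storage size in bytes (a NUMBER 0 is stored as a single exponent byte; a NUMBER 1 needs an exponent byte plus a mantissa byte; a one-character flag is one byte):

```sql
-- Internal storage sizes: 1 byte, 2 bytes and 1 byte respectively.
select vsize(0)   as num_zero,
       vsize(1)   as num_one,
       vsize('Y') as flag_y
from   dual;
```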

So what are my options? There’s a short list of possible table accesses for a low cardinality column.

1. Table scan. In an OLTP database where you only want a tiny fraction of the rows in the table, this would be a bad choice.
2. Index the accessed columns and accept the inevitable INDEX_SCAN or FAST_FULL_INDEX_SCAN. This is not great, and you probably need a histogram on the column to convince the optimizer to use the index for your low-frequency values. Otherwise you may be back to the table scan.
3. Make the “Complete” status NULL.
4. Use a function-based index which makes the Complete status appear to be NULL for a specific query.

So what’s with options 3 and 4, why are they good, and how do we use them?

Unlike some RDBMSs, Oracle does not store NULL values in its simple (non-composite) b-tree indexes. Therefore, if you choose option (3) and make your “Complete” status be represented by a NULL, you will maintain an index on STATUS in which the only values stored are the values you are interested in. This makes the index very sexy to the optimizer, as it will generally be very tiny. However, we face one small problem: convincing Developers that a NULL is acceptable as a valid status can be difficult. A NULL is a non-representative value. It is not supposed to represent anything. It means “I don’t know”. It doesn’t behave the same as normal values. This tends to freak out Developers and designers sometimes.
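
A minimal sketch of option 3, using a hypothetical queue table (here NULL means Complete, so only the interesting rows ever appear in the index):

```sql
-- NULL entries are not stored in a single-column b-tree index,
-- so this index contains only the unprocessed/error rows.
create index my_queue_status_idx on my_queue_table (status);

-- Finds only rows present in the tiny index:
select id from my_queue_table where status = 'U';

-- When processing completes, setting STATUS to NULL
-- removes the entry from the index:
update my_queue_table set status = null where id = :id;
```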

That’s where option 4 comes in. If we wrap the index definition in a CASE statement, to produce a function-based index, we have a highly specific, tailored index on our table. If the query predicate matches the index expression exactly, we get a serious performance payoff.

But don’t take my word for it. Here’s a worked example from my laptop:

Here’s the table and its data distribution (16m rows, and a handful we care about):

NEIL @ ORCL01 > desc test_table
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                            NOT NULL NUMBER
 STATUS                        NOT NULL VARCHAR2(1 CHAR)
 DESCRIPTION                   NOT NULL VARCHAR2(100 CHAR)

NEIL @ ORCL01 > select status,count(*) from test_table group by status

S   COUNT(*)
- ----------
E         16
C   16777216
Y         32

Here are the indexes on the table, and their sizes. As you can see, the function-based index is absolutely tiny, making it as attractive to storage admins as it is to the optimizer.

- alter table test_table add constraint test_table_pk primary key (id);
- create index test_table_CASE on test_table (case status when 'Y' then status else null end);
- create index test_table_COVER_COMP on test_table (status, id) compress 1;
- create index test_table_STATUS on test_table (status) compress 1;

NEIL @ ORCL01 > select segment_name,segment_type,sum(bytes/1024) kb from user_extents 
where segment_name like 'TEST_TABLE%' 
group by segment_type,segment_name order by 2 desc,1;

SEGMENT_NAME               SEGMENT_TYPE               KB
-------------------------- ------------------ ----------
TEST_TABLE                 TABLE                  555008
TEST_TABLE_CASE            INDEX                      64
TEST_TABLE_COVER_COMP      INDEX                  658432
TEST_TABLE_PK              INDEX                  319488
TEST_TABLE_STATUS          INDEX                  413696

Some Index stats:
------------------------- ------------- ----------------------- ----------------------- ----------------- -------- ---------- ----------- ---------
TEST_TABLE_CASE                       1                       1                       6                 6 VALID            32          32 21-FEB-16
TEST_TABLE_COVER_COMP          16748149                       1                       1            125447 VALID      16748149      234974 21-FEB-16
TEST_TABLE_PK                  17003239                       1                       1             91391 VALID      17003239      492287 21-FEB-16
TEST_TABLE_STATUS                     3                   13828                   32011             96034 VALID      16257590      363295 21-FEB-16

Where we have a choice of useful indexes but no histogram, we get a FAST FULL SCAN with a hefty cost. A histogram could have given us an INDEX RANGE SCAN, which can be very good.
With no histogram:

select id from test_table where status = 'Y';

Plan hash value: 1140618830

| Id  | Operation            | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT     |                       |       |       | 18753 (100)|          |
|*  1 |  INDEX FAST FULL SCAN| TEST_TABLE_COVER_COMP |  5592K|    42M| 18753   (1)| 00:00:01 |

With a histogram in place on STATUS, you get a much better plan, as the covering index avoids the need for the table look-up. You also run the risk that the optimizer may have bind-variable-peeking issues and other complications should you have lots of table joins.

select id from test_table where status = 'Y'

Plan hash value: 2912582684

| Id  | Operation        | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT |                       |       |       |     3 (100)|          |
|*  1 |  INDEX RANGE SCAN| TEST_TABLE_COVER_COMP |    32 |   256 |     3   (0)| 00:00:01 |

NOTE: Ditching the covering index and just using the index on STATUS is pretty efficient too when combined with a histogram:

select id from test_table where status = 'Y'

Plan hash value: 2416598805

| Id  | Operation                           | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                    |                   |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_TABLE        |    32 |   256 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | TEST_TABLE_STATUS |    32 |       |     3   (0)| 00:00:01 |

And now with the function-based index: the CASE statement removes all of the statuses we are not interested in, giving a tiny, tidy index.

NOTE: The predicate in the query must EXACTLY match the expression in the function-based index for it to be used.

select id from test_table where case status when 'Y' then status else null end = 'Y'

Plan hash value: 2073004851

| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                    |                 |       |       |     7 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_TABLE      |    32 |   256 |     7   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | TEST_TABLE_CASE |    32 |       |     1   (0)| 00:00:01 |

Conclusion: for a highly skewed STATUS column you need a histogram, which is something you should mostly avoid in OLTP systems using bind variables. A highly focussed function-based index gives you a tiny, self-maintaining index which is guaranteed to only be used for the queries that you want it to be used for.

NOTE: The original idea behind using NULLS to minimise index size came from the performance expert, Jonathan Lewis. I have implemented both NULL-as-complete design and case-based indexes at several clients, in varying forms, and always to great success.
