Which Oracle Release are you using?

Post Date: August 2018!

Recently an awesome Oracle Guru friend of mine questioned someone who was installing 11.2.0.4 with the word “seriously”, which I think shows that Oracle staff sometimes don’t live in the same technological world as the rest of the business.

My response was:

11.2.0.4 is normal. In the real world:

– large corps mostly use old versions
– consultants look at current versions
– Oracle staff look at unreleased versions

I have known instances of Oracle staff blogging about how a feature works when, in the officially released versions, it didn’t. It only worked that way in a version released some months later. There was no reference to the release, nor to the fact that there was a significant functional change between releases (but I suppose that’s a blog and not “official” documentation – the official documentation said nothing at all about how that particular feature worked. Nothing! So thank you, mystery blogger.)

Anyway, the point of this post: I then ran a small Twitter poll among my most excellent and cosy band of followers to see which Oracle releases people were using. I asked 2 questions (because Twitter is limited) and here are the results:

[Poll results: highest Oracle version in use]

So more people have some form of 12 in the DB, but only 7% have 18 in Production. This at a time when most Oracle staff are thinking about Oracle 20 and 21, as Oracle 19 is done and just awaiting release. Think about that, Oracle… Whenever I am at a presentation by an Oracle PM, I think “wow – I might be able to use those new features in 2-5 years”.

[Poll results: lowest Oracle version in use]

So very few people have 12.x as their lowest version (which would include 18, as that’s really 12.2.0.2), and MORE have 9, 8 or 7 as their major headache! Yes – there are more people on 9, 8 and 7 than are using 18 in Production. Let’s say that once more: there are more people on 9, 8 and 7 than are using 18 in Production.

So why upgrade? Very few databases take advantage of all of the latest sexy features. I suspect that many of the applications still being produced could run on Oracle 7.3.4 – more so as the proliferation of ORMs like Hibernate has left a generation of developers with little appreciation of the database and how to take advantage of it**. So why upgrade these days? Security. Patches. Support. Without those 3 things you are living on hope: hope that nothing goes wrong, as you’ll struggle to find anyone to fix it – including Oracle; and hope that nobody tries to hack your 8.1.7 database, as it’s a Swiss cheese of vulnerabilities, like all 7, 8, 8i, 9i and 10G DBs. Not that we hear about systems being compromised every day on the news.

Anecdote #1: By coincidence I was talking to a client at about the same time, and whilst they are mostly a 12.1 shop, they still had an old 8i database hanging around… as usual it was going to be “retired soon” (which in my experience means sometime in the next 15-20 years) and wasn’t worth the time and effort to upgrade, or even to build a business case for upgrading it!

**Anecdote #2: At a client a few years ago, an excellent Java developer asked me to put an index on a flag column. I pointed out that with only 3 values an index wouldn’t help, and as this was OLTP a bitmap index wasn’t appropriate due to concurrency issues. He said that with 3 values indexed, his query would be 3 times faster! We sat down and I explained some database fundamentals to him, at which point he said “don’t put an index on there – that would be a stupid idea”. A few weeks later he came back over and asked about SQL queries: “I’m trying to aggregate this data – can the database help?”. I spent 30 minutes showing him in-line views and windowing analytic functions (see the sketch below) and we wrote the code he needed for his output. “Wow! You have just saved me 3 days of Java coding…” – he was going to pull everything into Java and process it there, so as well as 3 days of coding we also saved the SAN, the network and a whole bunch of CPU by dealing with data at the database layer – which is always the most efficient place to deal with it!
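
For flavour, here’s a minimal sketch of the pattern we used – an analytic function inside an in-line view giving a top-N per group. The table and columns (NCHA.ORDERS, STATUS, AMOUNT) are invented for illustration; the real query belonged to the client:

SQL> -- top 3 orders by amount within each status, computed where the data lives
SQL> select *
  2    from (select order_id, status, amount,
  3                 row_number() over (partition by status
  4                                    order by amount desc) rn
  5            from ncha.orders)
  6   where rn <= 3;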


Online Index Rebuild Problem in 12C/18C/19C

When building an index in Oracle Enterprise Edition using the keyword “ONLINE”, if the index build fails for whatever reason, a messy set of extents can get left behind.
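
For context, an online build is simply the normal CREATE INDEX statement with the ONLINE keyword appended (the table and index names here are hypothetical):

SQL> -- builds the index without blocking concurrent DML on the table
SQL> create index ncha.tab1_status_idx on ncha.tab1 (status) online;

If a statement like this is killed or fails part-way through, the leftover extents are what the cleanup job described below is meant to tidy up.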

In Oracle 12.1 [update: still there in 12.2, 18C and 19.6] there’s a job (located in DBA_SCHEDULER_JOBS) to help with this: “SYS”.”CLEANUP_ONLINE_IND_BUILD”, which ticks away, undocumented, in the background, cleaning up “rogue” extents once an hour, every hour (start time based upon job creation time).

However, if I run a CREATE INDEX … ONLINE command on a table and the index creation happens to coincide with the invocation of the cleanup job, I get the following error in my alert log:

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl1:
*************************************************************************
2017-05-12 05:28:37.191123 -04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_j000_12235.trc:
ORA-12012: error on auto execute of job "SYS"."CLEANUP_ONLINE_IND_BUILD"
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
ORA-06512: at "SYS.DBMS_PDB", line 76
ORA-06512: at line 4

Oh no! The job has failed to clean up the extents that I was currently using. Phew!

The CREATE INDEX … ONLINE command completed with success a few minutes later.

A few minutes after that, an update statement failed with:

ORA-08102 index key not found for obj# 107102 file 16 block 1171234

Yes, I have a corrupt index! My assumption is that the cleanup job had removed all of the extents which had been allocated before the job started. The index create statement did not notice, as the job’s attempt to remove the extent currently in use blocked (hence the job error), and extents created after that point were not affected. However, most of the index had been “cleaned up”. #sadface
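
One way to confirm the damage is to validate the index structure (the index name is hypothetical, and note that VALIDATE STRUCTURE takes a lock, so treat this as a diagnostic sketch rather than a production recipe):

SQL> -- checks the integrity of the index structure
SQL> analyze index ncha.tab1_status_idx validate structure;

On a healthy index this completes quietly; on a damaged one it should complain.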

Solution: re-create the index, and make sure the index creation is complete before your hourly job kicks in. Better still, disable the undocumented hourly job… but make sure you get the full backing of an Oracle Support SR first!
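
If Support does give you the nod, disabling (and later re-enabling) the job is a one-liner with DBMS_SCHEDULER – a sketch, assuming suitably privileged access:

SQL> exec dbms_scheduler.disable('SYS.CLEANUP_ONLINE_IND_BUILD');
SQL> -- and to put it back afterwards:
SQL> exec dbms_scheduler.enable('SYS.CLEANUP_ONLINE_IND_BUILD');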

Next Step: Raise SR with Oracle about this!

Version Check 12/11/2018:

select banner_full from v$version

BANNER_FULL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Oracle Database 18c EE High Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0


21:17:19 SYS @ ORCL > select owner,job_name,start_date,repeat_interval,state from dba_scheduler_jobs where job_name = 'CLEANUP_ONLINE_IND_BUILD';

OWNER JOB_NAME                 START_DATE                          REPEAT_INTERVAL             STATE
----- ------------------------ ----------------------------------- --------------------------- ---------
SYS   CLEANUP_ONLINE_IND_BUILD 07-FEB-18 08.13.44.415343 PM +00:00 FREQ = HOURLY; INTERVAL = 1 SCHEDULED


MOS Solution Note 2280374.1 is not helpful:

Solution

Rebuild the index at a time other than the time frame when “SYS”.”CLEANUP_ONLINE_IND_BUILD” is running as an auto job. #facepalm #whatifittakesover1hourtobuild
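
If you do have to schedule around it, you can at least see when the next run is due and try to squeeze the build in beforehand (NEXT_RUN_DATE is a standard DBA_SCHEDULER_JOBS column):

SQL> select next_run_date
  2    from dba_scheduler_jobs
  3   where owner = 'SYS'
  4     and job_name = 'CLEANUP_ONLINE_IND_BUILD';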


OEM and monitoring the -MGMTDB GMIR Database

When you create Grid Infrastructure in 12.1.0.2, you are presented with a new (annoyingly named) database: “-MGMTDB”.

This database is a standard 12C CDB/PDB storing transient performance and other information (the Grid Infrastructure Management Repository). If it is lost, no biggie – just re-create it, in your voting disk DG. (Aside: to give it its own diskgroup, create a new MGMTDB_DG, move your voting disk there, re-create -MGMTDB, then move your voting disk back out to the proper multiple voting volumes.)

However, Oracle Enterprise Manager Cloud Control 12.1.0.5.0 and earlier sees this Oracle database, PDB, listener and all, and decides to discover it. This is incorrect and should not happen; it is supposed to be “masked off” from OEM. Monitoring this database system will only lead to false positives and problems where none really exist, and all of the targets should be ignored (as per the attached picture).

[Screenshot: the discovered -MGMTDB targets in OEM]

In a future release of Oracle Enterprise Manager Cloud Control, these targets will no longer be discovered and will automatically remain hidden from view within OEM, once the team have fixed the bug which – I was very reliably informed – they discovered the root cause of today.

Adding a DEFAULT column in 12C

I was at a talk recently where there was an update by Jason Arneil about adding columns with DEFAULT values to tables in Oracle 12C. The NOT NULL restriction has been lifted, and Oracle now cleverly intercepts the null value and replaces it with the DEFAULT from the meta-data, without storing it in the table. To repeat the 11G experiment I ran recently:


SQL> alter table ncha.tab1 add (filler_default char(1000) default 'EXPAND' not null);
Table altered.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len 
      from user_tables where table_name = 'TAB1';
TABLE_NAME NUM_ROWS       BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       1504          0        2017


In both releases we then issue:
SQL> alter table ncha.tab1 modify (filler_default null);
Table altered.


IN 11G
SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
      from user_tables where table_name = 'TAB1';

TABLE_NAME NUM_ROWS       BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       3394          0        2017

BUT IN 12C
SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
      from user_tables where table_name = 'TAB1';
TABLE_NAME NUM_ROWS       BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       1504          0        2017

So, as we can see, making the column NULLABLE in 12C didn’t cause it to go through and update every row in the way it must in 11G. It’s still a chained-row update accident waiting to happen, but it’s a more flexible accident 🙂

However, I think it’s worth pointing out that you only get “free data storage” when you add the column. When inserting a record, simply having a column with a DEFAULT value means that the DEFAULT gets physically stored with the record if it is not specified. The meta-data effect is ONLY for subsequently added columns with DEFAULT values.

SQL> create table ncha.tab1 (pk number, c2 timestamp, filler char(1000), filler2 char(1000) DEFAULT 'FILLER2' NOT NULL) pctfree 1;
Table created.

SQL> alter table ncha.tab1 add constraint tab1_pk primary key (pk);
Table altered.

Insert 10,000 rows into the table, but don’t supply FILLER2, so each row physically picks up the DEFAULT:
SQL> insert into ncha.tab1 (pk, c2, filler) select rownum id, sysdate, 'A' from dual connect by level <= 10000;
commit;
Commit complete.

Gather some stats and have a look after loading the table. Check for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);
PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS	  BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1		10000	    3394	  0	   2017

For a bit of fun, I thought I would see just how weird the stats might look if I played around with adding defaults.

SQL> drop table ncha.tab1;
Table dropped.

SQL> create table ncha.tab1 (pk number) pctfree 1;
Table created.

SQL> alter table ncha.tab1 add constraint tab1_pk primary key (pk);
Table altered.

Insert 10,000 rows into the table

SQL> insert into ncha.tab1 (pk) select rownum id from dual connect by level <= 10000;
commit;
Commit complete.

Gather some stats and have a look after loading the table. Check for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
  2    from user_tables
  3   where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS	  BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1		10000	      20	  0	      4

Now let’s add a lot of defaults:
SQL> alter table ncha.tab1 add (filler_1 char(2000) default 'F1' not null, filler_2 char(2000) default 'F2' null, filler_3 char(2000) default 'F3', filler_4 char(2000) default 'how big?' null );
Table altered.

Gather some stats and have a look after adding the column. Check for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
  2    from user_tables
  3   where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS	  BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1		10000	      20	  0	   8008

10,000 rows with an AVG_ROW_LEN of 8008, all in 20 blocks. Magic!

Just to finish off, let’s update each DEFAULT column so the table expands….

SQL> select filler_1, filler_2, filler_3, filler_4,count(*) from ncha.tab1 group by filler_1,filler_2,filler_3,filler_4;

FILLER_1   FILLER_2   FILLER_3	 FILLER_4     COUNT(*)
---------- ---------- ---------- ---------- ----------
F1	   F2	      F3	 how big?	 10000

So it's all there. The metadata is intercepting the nulls and converting them to the default on the fly, rather than storing them in the blocks.
So what happens if we actually UPDATE the table?

SQL> update ncha.tab1 set filler_1 = 'EXPAND', filler_2 = 'EXPAND', filler_3='EXPAND', filler_4='THIS BIG!';
10000 rows updated.

SQL> select filler_1, filler_2, filler_3, filler_4,count(*) from ncha.tab1 group by filler_1,filler_2,filler_3,filler_4;

FILLER_1   FILLER_2   FILLER_3	 FILLER_4     COUNT(*)
---------- ---------- ---------- ---------- ----------
EXPAND	   EXPAND     EXPAND	 THIS BIG!	 10000

Gather some stats and have a look after the update, checking for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS	  BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1		10000	   19277	  0	   8010

SQL> 
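SQL> -- (CHAINED_ROWS was created beforehand with the standard utlchain.sql script)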
SQL> analyze table ncha.tab1 list chained rows into chained_rows;

Table analyzed.

SQL> select count(*) CHAINED_ROWS from chained_rows;

CHAINED_ROWS
------------
       10000

Yep. That’s bigger.