Turning rows into columns

It’s odd how some things pass you by in new Oracle releases. Today I discovered “sys.odcinumberlist” and “sys.odcivarchar2list”.

I wanted to generate a pre-defined list for a join, to ensure each value was represented, and came across “sys.odcinumberlist”, which I took to be new in Oracle 12. It turns out it has been around for several releases, so not new at all! There are two straightforward collection types: “sys.odcinumberlist” and “sys.odcivarchar2list”.

How are they used?

SELECT column_value AS number_list
FROM sys.odcinumberlist(1,2,5,7,9,5,2,-34,5,43,2,0)
ORDER BY number_list;

NUMBER_LIST
-----------
        -34
          0
          1
          2
          2
          2
          5
          5
          5
          7
          9
         43

12 rows selected.

Note how I have had to alias the COLUMN_VALUE virtual column to allow me to sort the list.

SELECT column_value AS char_list FROM sys.odcivarchar2list('A','V','DD');

CHAR_LIST
---------
A
V
DD

Nice and simple, with a limit of 32,767 items in the list. So how was I using it? I was checking to see which profiles existed in a database (the real list was somewhat longer than this, as was the SQL, but this is good for a demo), so I generated my full list of known profiles to join to the existing profiles in each database.

WITH my_profiles AS (
    SELECT
        column_value AS profile_name
    FROM
        sys.odcivarchar2list(
            'DEFAULT'
          , 'SOME_OTHER_PROFILE' --etc etc etc 
        )
)
SELECT
    mp.profile_name
  , nvl( profile , '<does-not-exist>') AS profile
  , resource_name
  , limit
FROM
    my_profiles  mp
    LEFT OUTER JOIN dba_profiles dp ON ( mp.profile_name = dp.profile )
ORDER BY
    profile
  , resource_name;

PROFILE_NAME         PROFILE              RESOURCE_NAME                  LIMIT
-------------------- -------------------- ------------------------------ -----------
SOME_OTHER_PROFILE   <does-not-exist>
DEFAULT              DEFAULT              COMPOSITE_LIMIT                UNLIMITED
DEFAULT              DEFAULT              CONNECT_TIME                   UNLIMITED
DEFAULT              DEFAULT              CPU_PER_CALL                   UNLIMITED
DEFAULT              DEFAULT              CPU_PER_SESSION                UNLIMITED
DEFAULT              DEFAULT              FAILED_LOGIN_ATTEMPTS          10
DEFAULT              DEFAULT              IDLE_TIME                      UNLIMITED
DEFAULT              DEFAULT              INACTIVE_ACCOUNT_TIME          UNLIMITED
DEFAULT              DEFAULT              LOGICAL_READS_PER_CALL         UNLIMITED
DEFAULT              DEFAULT              LOGICAL_READS_PER_SESSION      UNLIMITED
DEFAULT              DEFAULT              PASSWORD_GRACE_TIME            7
DEFAULT              DEFAULT              PASSWORD_LIFE_TIME             180
DEFAULT              DEFAULT              PASSWORD_LOCK_TIME             1
DEFAULT              DEFAULT              PASSWORD_REUSE_MAX             UNLIMITED
DEFAULT              DEFAULT              PASSWORD_REUSE_TIME            UNLIMITED
DEFAULT              DEFAULT              PASSWORD_ROLLOVER_TIME         -1
DEFAULT              DEFAULT              PASSWORD_VERIFY_FUNCTION       NULL
DEFAULT              DEFAULT              PRIVATE_SGA                    UNLIMITED
DEFAULT              DEFAULT              SESSIONS_PER_USER              UNLIMITED

There are a number of other ways to do this, including using UNPIVOT, creating your own table type, and so on, but this was the nice and simple option I was looking for.
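
For comparison, here’s a minimal sketch of the “create your own table type” alternative mentioned above (the type name t_vc2_list is invented for illustration):

CREATE TYPE t_vc2_list AS TABLE OF VARCHAR2(128);
/

SELECT column_value AS profile_name
FROM   TABLE(t_vc2_list('DEFAULT','SOME_OTHER_PROFILE'));

The ODCI types simply save you the DDL and the bother of owning another type.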

Oracle Open World Presentation: REST Enabled Neural Networks

Hi! I’m currently at Oracle Open World 2018 in San Francisco, so I thought I would provide a copy of my slide deck for the presentation this morning.

If you are interested in how to provide good Neural Networks Machine Learning to developers in a simple to ingest REST interface, this is the place for you!

All of the URLs are live and running in the Oracle Cloud at the time of writing, including the somewhat open AutoREST interface to a simple data set. Enjoy!

The PDF is only a few meg:

1 – Code1-Nnets_REST-joint-ver2

Online Index Rebuild Problem in 12C/18C/19C

When building an index in Oracle Enterprise Edition using the keyword “ONLINE”, if the index build fails for whatever reason, a messy set of extents can get left behind.
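
For reference, the statement shape in question is just a normal index build with the ONLINE keyword on the end (names invented for illustration):

CREATE INDEX my_big_table_ix ON my_big_table (some_col) ONLINE;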

In Oracle 12.1 [update: still there in 12.2, 18c and 19.6] there’s a job (visible in DBA_SCHEDULER_JOBS) to help with this: “SYS”.”CLEANUP_ONLINE_IND_BUILD”, which ticks away, undocumented, in the background, cleaning up “rogue” extents once an hour, every hour (the start time is based upon the job creation time).

However, if I run a CREATE INDEX … ONLINE command on a table and the index creation happens to coincide with the invocation of the cleanup job, I get the following error in my alert log:

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl1:
*************************************************************************
2017-05-12 05:28:37.191123 -04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_j000_12235.trc:
ORA-12012: error on auto execute of job "SYS"."CLEANUP_ONLINE_IND_BUILD"
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
ORA-06512: at "SYS.DBMS_PDB", line 76
ORA-06512: at line 4

Oh no! The job has failed to clean up the extents that I was currently using. Phew!

The CREATE INDEX … ONLINE command completed with success a few minutes later.

A few minutes after that, an update statement failed with:

ORA-08102 index key not found for obj# 107102 file 16 block 1171234

Yes, I have a corrupt index! My assumption is that the cleanup job had removed all of the extents which had been allocated before the cleanup job started. The index create statement did not notice, as the cleanup job’s attempt to remove the extent currently in use blocked (hence the job error), and extents created afterwards were not affected. However, most of the index had been “cleaned up”. #sadface

Solution: re-create the index, and make sure the index creation is complete before the hourly job kicks in. Better still, disable the undocumented hourly job… but make sure you get the full backing of an Oracle Support SR first!

Next Step: Raise SR with Oracle about this!
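
If Support do sanction disabling the job, it’s a one-liner with DBMS_SCHEDULER. A sketch (do not run this without that SR):

-- Disable the undocumented cleanup job - only with Oracle Support's blessing
BEGIN
  DBMS_SCHEDULER.DISABLE('"SYS"."CLEANUP_ONLINE_IND_BUILD"');
END;
/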

Version Check 12/11/2018:

select banner_full from v$version;

BANNER_FULL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Oracle Database 18c EE High Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0


21:17:19 SYS @ ORCL > select owner,job_name,start_date,repeat_interval,state from dba_scheduler_jobs where job_name = 'CLEANUP_ONLINE_IND_BUILD';

OWNER  JOB_NAME                  START_DATE                            REPEAT_INTERVAL              STATE
------ ------------------------- ------------------------------------- ---------------------------- ---------
SYS    CLEANUP_ONLINE_IND_BUILD  07-FEB-18 08.13.44.415343 PM +00:00   FREQ = HOURLY; INTERVAL = 1  SCHEDULED

MOS Solution Note 2280374.1 is not helpful:

Solution

Rebuild the index at a time other than the window when “SYS”.”CLEANUP_ONLINE_IND_BUILD” is running as an auto job. #facepalm #whatifittakesover1hourtobuild

Inserting data in SQL*Plus correctly

When inserting data into the database, it is occasionally forgotten (especially by English-speakers) that we need to take steps to ensure we are inserting data correctly and without unexpected character translation.

For example, in SQL*Plus we need to ensure we set the NLS_LANG environment variable to the correct setting for our database before we initiate SQL*Plus.

Here’s a quick example showing what can go wrong:

[oracle@ORA122 ~]$ echo $NLS_LANG

[oracle@ORA122 ~]$ sqlplus neil/neil
SQL*Plus: Release 12.2.0.1.0 Production on Thu Dec 22 13:56:23 2016
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Last Successful login time: Thu Dec 22 2016 13:46:58 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

NEIL @ pdb1 > create table lang (col1 varchar2(50 CHAR));
Table created.

NEIL @ pdb1 > insert into lang values (q'{J'ai cassé l'insert}')
  2  /
1 row created.

NEIL @ pdb1 > commit;
Commit complete.

NEIL @ pdb1 > select * from lang;

COL1
--------------------------------------------------
J'ai cass? l'insert

As we can see, the accented ” é ” has been lost in translation somewhere. This is not good.

If we set NLS_LANG correctly and repeat the insert, we get a different result:

[oracle@ORA122 ~]$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
[oracle@ORA122 ~]$ sqlplus neil/neil
SQL*Plus: Release 12.2.0.1.0 Production on Thu Dec 22 13:56:23 2016
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Last Successful login time: Thu Dec 22 2016 13:46:58 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

NEIL @ pdb1 > insert into lang values (q'{J'ai cassé l'insert}')
  2  /
1 row created.

NEIL @ pdb1 > commit;
Commit complete.

NEIL @ pdb1 > select * from lang;

COL1
--------------------------------------------------------------------------------
J'ai cass� l'insert
J'ai cassé l'insert

Note how the original insert is now returning even more troublesome nonsense than it was previously!

So, how do we know what to set the NLS_LANG variable to in order to avoid this? The answer is partly in the database.

NEIL @ pdb1 > select * from v$nls_parameters 
              where parameter in ('NLS_LANGUAGE','NLS_TERRITORY','NLS_CHARACTERSET');

PARAMETER                                VALUE                    CON_ID
---------------------------------------- -------------------- ----------
NLS_LANGUAGE                             AMERICAN                      3
NLS_TERRITORY                            AMERICA                       3
NLS_CHARACTERSET                         AL32UTF8                      3

Which gives us:

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

However, this just tells you about the database character set.
That may not be what we are after!

The NLS_LANG should reflect the setting of the operating system character set of the client. In Windows there is no UTF-8 client, so you need to select the correct client code page.

On Linux, check out the value of $LANG

[oracle@ORA122 ~]$ echo $LANG
en_US.UTF-8

Chances are it’s UTF8, so it all matches up.

export NLS_LANG=AMERICAN_AMERICA.UTF8

If it is Windows, well, that’s more complex, as you need to match the code page of the client correctly. Windows generally maps its code pages to Unicode via its APIs, but you may have different code pages depending upon whether you are on the command line (sqlplus, which uses the DOS/OEM code page) or in a GUI tool (SQL Developer).

A list of Windows code pages is held here.

On an English Windows client, the code page is 1252.
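
So, assuming an American/America session on such a client, the matching GUI-side setting would be as follows (WE8MSWIN1252 being the Oracle character set corresponding to Windows code page 1252):

set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252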

If you are using SQL Developer, in Tools => Preferences, check the encoding.

You can find out what command-line code page you are running using “chcp”

Microsoft Windows [Version 10.0.14393]
(c) 2016 Microsoft Corporation. All rights reserved.

C:\Users\neil>chcp
Active code page: 850

NOTE: This is the DOS (sqlplus) codepage, not the GUI Codepage.

MS-DOS code page   Oracle client character set (3rd part of NLS_LANG)
437                US8PC437
737                EL8PC737
850                WE8PC850
852                EE8PC852
857                TR8PC857
858                WE8PC858
861                IS8PC861
862                IW8PC1507
865                N8PC865
866                RU8PC866

set NLS_LANG=american_america.WE8PC850

sqlplus ...

It is also worth bearing in mind that the default database character set in Oracle 12.2 is now AL32UTF8.

Don’t forget to ensure your terminal session won’t mess up any character translation too. Set it to the correct server character set.

Happy Inserting!

Use Unicode character sets, and go read the manual on this. It’s not straightforward!

Stats Collection Time Anomaly

Jonathan Lewis (@JLOracle) recently published a short post about Stats Collection Time, talking about the view DBA_OPTSTAT_OPERATIONS (and DBA_OPTSTAT_OPERATION_TASKS), which reminded me about (what I regard as) an anomaly in the output in the NOTES column in Oracle 12c.

I won’t repeat why it’s useful to check these views, as Jonathan’s note and @MDWidlake’s comment there should give you all you need to know.

The DBA_OPTSTAT_OPERATIONS.NOTES column contains the parameters passed into the DBMS_STATS command, so you know what was done. It also reports the DEFAULTs used by the DBMS_STATS job. Well, it does if you call DBMS_STATS explicitly, but the standard overnight auto job just says “DEFAULT”. Why doesn’t it expand on that the way the explicit call does? If a default was changed between runs, you may end up with very different results but with no indication why. Am I missing something?

The following two rows of data show the output from each run. Note that the DEFAULT for METHOD_OPT in this database has been changed from “FOR ALL COLUMNS SIZE AUTO” to “FOR ALL COLUMNS SIZE REPEAT”**, but it was not explicitly passed in for either run.
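
For reference, output like the two entries below can be pulled with a query along these lines (a sketch; pretty-print the NOTES XML to taste):

SELECT operation, target, start_time, end_time, status, job_name, notes
FROM   dba_optstat_operations
ORDER  BY start_time DESC;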

DBMS_STATS.GATHER_SCHEMA_STATS – decodes the DEFAULTs

OPERATION : gather_schema_stats            
TARGET    : MYSCHEMA     
START_TIME: 15-SEP-16 07.04.47 
END_TIME  : 15-SEP-16 07.09.02 
STATUS    : COMPLETED                                   
JOB_NAME  : 
NOTES     : <params>
            <param name="block_sample" val="FALSE"/>
            <param name="cascade" val="NULL"/>
            <param name="concurrent" val="FALSE"/>
            <param name="degree" val="NULL"/>
            <param name="estimate_percent" val="DBMS_STATS.AUTO_SAMPLE_SIZE"/>
            <param name="force" val="FALSE"/>
            <param name="gather_fixed" val="FALSE"/>
            <param name="gather_temp" val="FALSE"/>
            <param name="granularity" val="AUTO"/>
            <param name="method_opt" val="FOR ALL COLUMNS SIZE REPEAT"/>
            <param name="no_invalidate" val="NULL"/>
            <param name="options" val="GATHER"/>
            <param name="ownname" val="MYSCHEMA"/>
            <param name="reporting_mode" val="FALSE"/>
            <param name="statid" val=""/>
            <param name="statown" val=""/>
            <param name="stattab" val=""/>
            <param name="stattype" val="DATA"/>
            </params>

Autotask Overnight Gather – doesn’t decode the DEFAULTs

OPERATION : gather_database_stats (auto)   
TARGET    : AUTO       
START_TIME: 15-SEP-16 22.01.20 
END_TIME  : 15-SEP-16 22.38.40 
STATUS    : COMPLETED            
JOB_NAME  : ORA$AT_OS_OPT_SY_1212  
NOTES     : <params>
            <param name="block_sample" val="FALSE"/>
            <param name="cascade" val="NULL"/>
            <param name="concurrent" val="FALSE"/>
            <param name="degree" val="DEFAULT_DEGREE_VALUE"/>
            <param name="estimate_percent" val="DEFAULT_ESTIMATE_PERCENT"/>
            <param name="granularity" val="DEFAULT_GRANULARITY"/>
            <param name="method_opt" val="DEFAULT_METHOD_OPT"/>
            <param name="no_invalidate" val="DBMS_STATS.AUTO_INVALIDATE"/>
            <param name="reporting_mode" val="FALSE"/>
            <param name="stattype" val="DATA"/>
            </params>

**You should control the histograms that you need to maintain your schema stats as you would like Oracle to see them. The Oracle default approach of creating lots of histograms can be costly to maintain and store, and any stats which use adaptive sampling (all histograms prior to 12c, and Height-Balanced/Hybrid from 12c onwards) carry a risk, especially in OLTP systems.
FOR ALL COLUMNS SIZE REPEAT was useful until Oracle 12, when a change in the algorithm made it dangerous, and it should NOT be used. You should be using DBMS_STATS.SET_GLOBAL_PREFS and DBMS_STATS.SET_TABLE_PREFS to control your stats.
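
As a sketch of those preference calls (the schema, table and METHOD_OPT values are purely illustrative, not recommendations):

-- Set a sensible database-wide default...
EXEC DBMS_STATS.SET_GLOBAL_PREFS('METHOD_OPT','FOR ALL COLUMNS SIZE 1')

-- ...then override per table, naming only the histograms you actually want
EXEC DBMS_STATS.SET_TABLE_PREFS('MYSCHEMA','MYTABLE','METHOD_OPT','FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 STATUS')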

Accessing STATUS columns efficiently

A frequently recurring design problem with relational databases is locating the unprocessed rows in a large table, so we know which rows of data are still yet to be processed.

The problem with a STATUS column is that it generally has low cardinality; there are probably only a handful of distinct values [(C)omplete, (E)rror, (U)nprocessed or something like that]. Most records will be (C)omplete. This makes STATUS a poor candidate for standard B-tree indexing. In a high-throughput OLTP database, using bitmap indexes is probably not an option due to concurrency.

[Aside: when coding flag columns in Oracle, ALWAYS use a VARCHAR2(1 CHAR) {or CHAR(1 CHAR) if you prefer, but a CHAR is a VARCHAR2 under the covers and occupies the same number of bytes}. This is in preference to a NUMBER(1), which occupies more bytes for a “1” than a “0”, so when you update it you run the risk of row migration, chained rows and a performance hit. Frequently, ORMs like Hibernate code for NUMBER by default. Override this!]

So what are my options? There’s a short list of possible table accesses for a low cardinality column.

1. Table scan. In an OLTP database where you only want a tiny fraction of the rows in the table, this would be a bad choice.
2. Index the accessed columns and accept the inevitable INDEX_SCAN or FAST_FULL_INDEX_SCAN. This is not great and you probably need a Histogram on the column to convince the optimizer to use the index for your low frequency values. Otherwise you may be back to the table scan.
3. Make the “Complete” status “NULL”.
4. Use a function-based index which makes the Complete status seem to be NULL for a specific query.

So what’s with options 3 and 4, why are they good, and how do we use them?

Unlike some RDBMSs, Oracle does not store NULL values in its simple (non-composite) B-tree indexes. Therefore, if you choose option (3) and make your “Complete” status be represented by a NULL, you will maintain an index on STATUS in which the only values stored are the values you are interested in. This makes the index very sexy to the optimizer, as it will generally be very tiny. However, we face one small problem: convincing developers that having a NULL as a valid status can be difficult. A NULL is a non-representative value. It is not supposed to represent anything. It means “I don’t know”. It doesn’t behave the same as normal values. This tends to freak out developers and designers sometimes.
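
As a sketch of option 3 (table and index names invented for illustration): with NULL representing “Complete”, a plain single-column index only ever contains the unprocessed rows, because entirely-NULL keys are not stored in the B-tree.

CREATE TABLE work_queue (
  id     NUMBER PRIMARY KEY,
  txt    VARCHAR2(100 CHAR),
  status VARCHAR2(1 CHAR)  -- NULL = complete, 'E' = error, 'U' = unprocessed
);

CREATE INDEX work_queue_status_ix ON work_queue (status);

-- Only the handful of non-NULL rows live in the index, so this is cheap:
SELECT id FROM work_queue WHERE status = 'U';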

That’s where option 4 comes in. If we wrap the index definition in a CASE statement, to produce a function-based index, we have a highly specific, tailored index on our table. If the SQL predicate matches the index expression exactly, we get a serious performance payoff.

But don’t take my word for it. Here’s a worked example from my laptop:

Here’s the table and its data distribution (16m rows, and a handful we care about):

NEIL @ ORCL01 > desc test_table
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                            NOT NULL NUMBER
 STATUS                        NOT NULL VARCHAR2(1 CHAR)
 DESCRIPTION                   NOT NULL VARCHAR2(100 CHAR)

NEIL @ ORCL01 > select status,count(*) from test_table group by status

S   COUNT(*)
- ----------
E         16
C   16777216
Y         32
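
(If you want to play along, here’s a rough sketch of how such a table could be populated. This is my assumption about the setup, not the original script; the cross join of two 4096-row generators gives exactly 16,777,216 rows.)

CREATE TABLE test_table (
  id          NUMBER             NOT NULL,
  status      VARCHAR2(1 CHAR)   NOT NULL,
  description VARCHAR2(100 CHAR) NOT NULL
);

-- 16m (C)omplete rows: 4096 x 4096 via two row generators
INSERT /*+ APPEND */ INTO test_table
SELECT rownum, 'C', 'payload '||rownum
FROM   (SELECT level FROM dual CONNECT BY level <= 4096),
       (SELECT level FROM dual CONNECT BY level <= 4096);
COMMIT;

-- ...and the handful of rows we care about: 32 'Y' and 16 'E'
INSERT INTO test_table
SELECT 16777216 + level,
       CASE WHEN MOD(level,3) = 0 THEN 'E' ELSE 'Y' END,
       'unprocessed '||level
FROM   dual
CONNECT BY level <= 48;
COMMIT;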

Here are the indexes on the table, and their sizes. As you can see, the function-based index is absolutely tiny, making it as attractive to storage admins as it is to the optimizer.

alter table test_table add constraint test_table_pk primary key (id);
create index test_table_CASE on test_table (case status when 'Y' then status else null end);
create index test_table_COVER_COMP on test_table (status, id) compress 1;
create index test_table_STATUS on test_table (status) compress 1;



NEIL @ ORCL01 > select segment_name,segment_type,sum(bytes/1024) kb from user_extents 
where segment_name like 'TEST_TABLE%' 
group by segment_type,segment_name order by 2 desc,1;

SEGMENT_NAME               SEGMENT_TYPE               KB
-------------------------- ------------------ ----------
TEST_TABLE                 TABLE                  555008
TEST_TABLE_CASE            INDEX                      64
TEST_TABLE_COVER_COMP      INDEX                  658432
TEST_TABLE_PK              INDEX                  319488
TEST_TABLE_STATUS          INDEX                  413696

Some Index stats:
INDEX_NAME                DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR STATUS     NUM_ROWS SAMPLE_SIZE LAST_ANAL
------------------------- ------------- ----------------------- ----------------------- ----------------- -------- ---------- ----------- ---------
TEST_TABLE_CASE                       1                       1                       6                 6 VALID            32          32 21-FEB-16
TEST_TABLE_COVER_COMP          16748149                       1                       1            125447 VALID      16748149      234974 21-FEB-16
TEST_TABLE_PK                  17003239                       1                       1             91391 VALID      17003239      492287 21-FEB-16
TEST_TABLE_STATUS                     3                   13828                   32011             96034 VALID      16257590      363295 21-FEB-16

Where we have a choice of useful indexes, we get a FAST FULL SCAN with a hefty cost. A histogram could have given us an index RANGE SCAN, which can be very good.
With no Histogram:

select id from test_table where status = 'Y';

Plan hash value: 1140618830

----------------------------------------------------------------------------------------------
| Id  | Operation            | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                       |       |       | 18753 (100)|          |
|*  1 |  INDEX FAST FULL SCAN| TEST_TABLE_COVER_COMP |  5592K|    42M| 18753   (1)| 00:00:01 |
----------------------------------------------------------------------------------------------

With a histogram in place on STATUS, you get a much better plan as the covering index avoids the need for the table look-up. You also get the risk that the optimizer may have bind variable peeking issues and other complications should we have lots of table joins.

select id from test_table where status = 'Y'

Plan hash value: 2912582684

------------------------------------------------------------------------------------------
| Id  | Operation        | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                       |       |       |     3 (100)|          |
|*  1 |  INDEX RANGE SCAN| TEST_TABLE_COVER_COMP |    32 |   256 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

NOTE: Ditching the covering index and just using the index on STATUS is pretty efficient too when combined with a histogram:

select id from test_table where status = 'Y'

Plan hash value: 2416598805

---------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                   |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_TABLE        |    32 |   256 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | TEST_TABLE_STATUS |    32 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

And now with the function-based index: the CASE statement removes all of the statuses we are not interested in, giving a tiny, tidy index.

NOTE: The Predicate in the query must EXACTLY match the function-based index for it to be used.

select id from test_table where case status when 'Y' then status else null end = 'Y'

Plan hash value: 2073004851

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |       |       |     7 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_TABLE      |    32 |   256 |     7   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | TEST_TABLE_CASE |    32 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Conclusion: For a highly skewed STATUS column you need a histogram, which is something you should mostly avoid in OLTP systems using BIND variables. Having a highly focussed function-based index allows for a tiny self-maintaining index which is guaranteed to only be used for queries that you want it to be used for.

NOTE: The original idea behind using NULLS to minimise index size came from the performance expert, Jonathan Lewis. I have implemented both NULL-as-complete design and case-based indexes at several clients, in varying forms, and always to great success.

Primary Keys and their supporting indexes

Sometimes things just happen which make you realise that stuff you thought all experienced DBAs knew about isn’t common knowledge. I suppose it’s a side effect of working with the same evolving product for over 25 years at dozens of clients, when the colleague you are talking to has a mere decade of experience at fewer companies (and therefore less diversity of challenge).

Today I noticed that a release was creating a table, then an index, and then adding a primary key based upon the index. Pretty normal, but the indexes were non-unique. Interesting. Is that valid? (yes) Is it sensible? (under some circumstances, also yes) Is it wrong but nobody noticed until it got to me? Probably.

However, there are more variables at play here than first meets the eye. The primary key is being enforced by a pre-existing non-unique index, rather than creating its own index “dynamically” (which would be unique under most circumstances). Therefore we have a few questions to answer:

  1. Is the index being used optimal?
  2. How do I know it’s using a pre-created index and the constraint did not create its own index?
  3. What happens if I drop the PK constraint?

Scenario 1: create table, create constraint

You get a nice neat fast index enforcing the constraint. If you drop the constraint, the index will also get removed. This may or may not be what you want. You may need to put a new index back on the table…

Scenario 2: create table, create unique index, create constraint

You get a nice neat fast index enforcing the constraint. If you drop the constraint, the index does not get removed. This may or may not be what you want. The index which was left behind might cause a problem if you miss the fact it’s still around…

Scenario 3: create table, create non-unique index (same as constraint columns), create constraint

The index enforcing the constraint is slightly less efficient enforcing the constraint as it has to optimize to an index range scan, not a unique scan. However, if you drop the constraint you still have a non-unique index. This may be a very useful ability for bulk data loads with questionable data, although the use-cases are limited and tend to be Warehouse-centric.

If you have a deferrable constraint, you must have a non-unique index to enforce the constraint as there may temporarily be duplicate keys in the table part-way through the transaction.
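
For example, reusing the names from the worked examples below: a deferrable primary key will quietly build itself a non-unique supporting index.

alter table neil add constraint neil_pk primary key (col_id)
  deferrable initially immediate;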

Scenario 4: create table, create non-unique composite index, create constraint which uses leading columns of that index but not all of the columns

The index may be somewhat less efficient at enforcing the constraint by having fewer rows per block and possibly more levels, but this inefficiency may help overall system efficiency by having fewer indexes to maintain – maintaining 2 indexes instead of 1 is a 100% overhead in index maintenance. The composite index cannot be a unique index.

Some worked examples showing the above scenarios (run on Oracle 12.1.0.2.0 in a PDB):

NEIL @ ORCL01 > -- Scenario 1
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) ;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_PK              UNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index is gone...
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
no rows selected
NEIL @ ORCL01 > -- Scenario 2
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create unique index neil_i on neil(col_id);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) ;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               UNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               UNIQUE
NEIL @ ORCL01 > -- Scenario 3
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create index neil_i on neil(col_id);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) using index neil_i;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               NONUNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               NONUNIQUE
NEIL @ ORCL01 > -- Scenario 4
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create index neil_i_composite on neil(col_id,col_txt);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) using index neil_i_composite;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I_COMPOSITE     NONUNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I_COMPOSITE     NONUNIQUE

So how do I tell if, when I drop my constraint, I will also drop the index? This does not appear to be exposed in any DBA_ view of which I am aware, so I have had to rely upon this post from Jonathan Lewis to help me decipher the information. You need to see if certain bits are set in the column sys.ind$.property. Here’s my worked example to show the information we seek:

NEIL @ ORCL01 > drop table neil_cons_create;
Table dropped.
NEIL @ ORCL01 > create table neil_cons_create (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > alter table neil_cons_create add constraint neil_cons_create_pk primary key (col_id);
Table altered.
NEIL @ ORCL01 > drop table neil_pre_create;
Table dropped.
NEIL @ ORCL01 > create table neil_pre_create (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create unique index neil_pre_create_i on neil_pre_create(col_id);
Index created.
NEIL @ ORCL01 > alter table neil_pre_create add constraint neil_pre_create_pk primary key (col_id);
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name like 'NEIL%';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_CONS_CREATE_PK  UNIQUE
NEIL_PRE_CREATE_I    UNIQUE

select do.owner,do.object_name,do.object_id,ind.property, 'Keep Index' action
  from sys.ind$ ind
       ,dba_objects do
where do.object_name like 'NEIL%'
  and do.object_id = ind.obj#
  and bitand(ind.property,4097) = 4097
union all
select do.owner,do.object_name,do.object_id,ind.property, 'Drop Index' action
  from sys.ind$ ind
      ,dba_objects do
where do.object_name like 'NEIL%'
  and do.object_id = ind.obj#
  and bitand(ind.property,4097) <> 4097
order by 1,2,3
/
OWNER        OBJECT_NAME         OBJECT_ID  PROPERTY   ACTION
------------ ------------------- ---------- ---------- --------------------
NEIL         NEIL_CONS_CREATE_PK      93814       4097 Keep Index
NEIL         NEIL_PRE_CREATE_I        93816          1 Drop Index

When did I update that row?

I had a requirement the other day to understand when some data had been changed, but there was no column on the table which showed this.

So how do I find out? Well, I could go off mining redo and doing lots of other time-consuming and exotic things, but you can use the Oracle pseudocolumn ORA_ROWSCN. This gives the SCN associated with each row. Well, actually it usually doesn’t. It does not show when the individual row of data was changed but, by default, gives the last-changed SCN for the block where the row of data lives.

If you want to know with accuracy the SCN for the row change, you need to create your table with the extension “rowdependencies”. This adds a hidden column to each row, taking 6 bytes and storing the SCN on a row-by-row basis with your data. NOTE: This is a CREATE TABLE option only. You can’t modify a table to add rowdependencies, and there are a few restrictions for tables where this is enabled. Check the documentation for your version.

So, we now have the SCN, whether for the BLOCK or the ROW. What good is that? Well, there’s a chance that Oracle will have remembered approximately when that SCN came out. I think you are guaranteed this for about 120 hours – nearly a week – depending upon a number of factors including the flashback retention and undo retention times.

You can get a rough idea of the SCN time from V$LOG_HISTORY, a more accurate idea from SYS.SMON_SCN_TIME, or just use the SCN_TO_TIMESTAMP function to make your life easier! If you are within 120 hours and have rowdependencies enabled, it will be roughly accurate to the time of the commit, depending upon a couple of factors – please see comments.

Here’s a short worked example to show the sorting of SCN’s in both normal (block) and rowdependency-enabled tables. Note how the ORA_ROWSCN is the same for each row in the same block in the normal table.

00:29:34 NEIL @ ORCL01 > create table scn_block (col1 number, col2 date, c_scn number);
Table created.
00:29:34 NEIL @ ORCL01 > create table scn_row (col1 number, col2 date, c_scn number) rowdependencies;
Table created.
00:29:34 NEIL @ ORCL01 > insert into scn_block values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:34 NEIL @ ORCL01 > commit;
Commit complete.
00:29:34 NEIL @ ORCL01 > host sleep 5
00:29:39 NEIL @ ORCL01 > insert into scn_row values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:39 NEIL @ ORCL01 > commit;
Commit complete.
00:29:39 NEIL @ ORCL01 > host sleep 5
00:29:44 NEIL @ ORCL01 > insert into scn_block values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:44 NEIL @ ORCL01 > commit;
Commit complete.
00:29:44 NEIL @ ORCL01 > host sleep 5
00:29:49 NEIL @ ORCL01 > insert into scn_row values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:49 NEIL @ ORCL01 > commit;
Commit complete.
00:29:49 NEIL @ ORCL01 > column dt_1 format a30 truncate
00:29:49 NEIL @ ORCL01 > column dt_2 format a30 truncate
00:29:49 NEIL @ ORCL01 >
00:29:49 NEIL @ ORCL01 > select col1,col2,c_scn,ora_rowscn,scn_to_timestamp(c_scn) dt_1,scn_to_timestamp(ora_rowscn) dt_2 from scn_block;

COL1       COL2      C_SCN      ORA_ROWSCN DT_1                           DT_2
---------- --------- ---------- ---------- ------------------------------ ------------------------------
1          05-DEC-15 3670102    3670149    05-DEC-15 00.29.34.000000000   05-DEC-15 00.29.43.000000000
1          05-DEC-15 3670148    3670149    05-DEC-15 00.29.43.000000000   05-DEC-15 00.29.43.000000000

00:29:49 NEIL @ ORCL01 > select col1,col2,c_scn,ora_rowscn,scn_to_timestamp(c_scn) dt_1,scn_to_timestamp(ora_rowscn) dt_2 from scn_row;

COL1       COL2      C_SCN      ORA_ROWSCN DT_1                           DT_2
---------- --------- ---------- ---------- ------------------------------ ------------------------------
1          05-DEC-15 3670133    3670134    05-DEC-15 00.29.39.000000000   05-DEC-15 00.29.39.000000000
1          05-DEC-15 3670160    3670161    05-DEC-15 00.29.48.000000000   05-DEC-15 00.29.48.000000000

In an interesting convergence, whilst I was doing this, Martin Widlake was looking at the same thing in a slightly different way. How meta.


Locking Privileges in Oracle

What permissions do you need to lock rows on an Oracle table?
What about to lock the whole table?

It’s not quite as much as you may think!

Let’s have a couple of users: schema_owner and user1

SQL> show user
USER is "SYS"
SQL> create user schema_owner identified by schema_owner;
User created.
SQL> grant connect,resource to schema_owner;
Grant succeeded.
SQL> grant unlimited tablespace to schema_owner;
Grant succeeded.
SQL> create user user1 identified by user1;
User created.
SQL> grant create session to user1;
Grant succeeded.

Now for a table and grants

SQL> conn schema_owner/schema_owner
Connected.
SQL> create table tab1 (col1 date, col2 number);
Table created.
SQL> insert into tab1 values (sysdate,1);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from tab1;
COL1      COL2
--------- ----------
14-JUL-15          1
SQL> grant select on tab1 to user1;
Grant succeeded.

So, what can USER1 do with that table?

SQL> conn user1/user1
Connected.
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15          1

good

SQL> update schema_owner.tab1 set col2=2 where col2=1;
update schema_owner.tab1 set col2=2 where col2=1
*
ERROR at line 1:
ORA-01031: insufficient privileges

nice

SQL> insert into schema_owner.tab1 values (sysdate,2);
insert into schema_owner.tab1 values (sysdate,2)
*
ERROR at line 1:
ORA-01031: insufficient privileges

yeah

SQL> delete from schema_owner.tab1;
delete from schema_owner.tab1
*
ERROR at line 1:
ORA-01031: insufficient privileges

great

SQL> select * from schema_owner.tab1 for update;
COL1      COL2
--------- ----------
14-JUL-15          1

oh

SQL> lock table schema_owner.tab1 in exclusive mode;
Table(s) Locked.

What?!? Is this real? Has that REALLY locked the entire table with only SELECT permissions? Can I delete from that table from a different session + user which has permissions?

SQL> show user
USER is "SCHEMA_OWNER"
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15	   1
SQL> delete from schema_owner.tab1;
(no return....)

A quick look in gv$session will show you that USER1 is indeed blocking SCHEMA_OWNER despite only having SELECT privileges on the table:

select .... from gv$session;
CON_ID SID USERNAME      SQL_ID        STATUS   BS_STAT   BL_SID EVENT
------ --- ------------- ------------- -------- --------- ------ ---------------------------
     3  47 USER1                       INACTIVE NO HOLDER BLOCK  SQL*Net message from client
     3  55 SCHEMA_OWNER  5n1hw77std3h5 ACTIVE   VALID         47 enq: TM - contention

SQL> select * from dba_blockers
 2 ;

HOLDING_SESSION CON_ID
--------------- ------
47                   3

SQL> select * from dba_waiters;

WAITING_SESSION WAITING_CON_ID HOLDING_SESSION HOLDING_CON_ID LOCK_TYPE MODE_HELD  MODE_REQUESTED LOCK_ID1 LOCK_ID2
--------------- -------------- --------------- -------------- --------- ---------- -------------- -------- --------
             55              3              47              3 DML       Exclusive  Row-X (SX)        96178        0

This is because of a side effect of an Oracle philosophy; “don’t do now what you may never need to do”. If Oracle can defer any actions from now, such as writing a dirty buffer to disk, or seeing if a session has permissions to perform an update when all you have done is request a lock, then it will, if possible, do it later.

You may request the lock, so Oracle checks that you can access the object (SELECT), but you may never actually try to change the row or table, so it’s not yet necessary to see if you can modify the object…

This is a pretty problematic security hole. In Oracle 12c, a new table privilege has appeared: READ. If we re-run the above with GRANT READ instead of GRANT SELECT…

SQL> show user
USER is "USER1"
SQL> select grantee,privilege from user_tab_privs where table_name = 'TAB1';
GRANTEE              PRIVILEGE
-------------------- ----------
USER1                READ
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15          1

ok

SQL> select * from schema_owner.tab1 for update;
select * from schema_owner.tab1 for update
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> lock table schema_owner.tab1 in exclusive mode;
lock table schema_owner.tab1 in exclusive mode
*
ERROR at line 1:
ORA-01031: insufficient privileges

That’s better!

So the next time someone says “it’s only SELECT permissions”, it’s not. You might want to check out using READ.
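
Swapping the grant over is trivial (same objects as above):

revoke select on schema_owner.tab1 from user1;
grant read on schema_owner.tab1 to user1;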

Developers

Just a small Sunday night anecdote with a wider point. I, or maybe a colleague, recently received an update statement from a developer. Now, this developer is long of tooth and is well versed in the ways of Oracle data manipulation.

The aforementioned update statement contained an interesting hint: BYPASS_UJVC. You may not have heard of this hint. It’s not commonly used, despite having been around since Oracle 8.1.5, mainly because it is both undocumented and unsupported by Oracle. In the right hands, it’s a very neat way around the problem of doing an update through a join where you would otherwise be restricted by the potential of having transient keys (i.e. multiple updates via the join giving random results). There’s a bunch of other blogs around about how/why/not to use it, so I won’t waffle on here.

However, the Dev was disappointed when we [the DBAs] told him to, erm, rewrite his code (probably as a MERGE, which tends to let you get round the same problem), given he had been using this hint for as long as it has been around (a long time!). But as it is NOT supported, we wouldn’t allow it. I’m not about to update millions of rows in a multi-billion row database with an unsupported feature unless I have a MAJOR problem.
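
For the record, the supported shape we pointed him towards looks something like this (tables and columns invented for illustration; the real statement was rather bigger):

MERGE INTO target_tab t
USING (SELECT key_col, new_val FROM source_tab) s
ON    (t.key_col = s.key_col)
WHEN MATCHED THEN
  UPDATE SET t.some_col = s.new_val;

Unlike BYPASS_UJVC, if the source returns duplicate keys a MERGE raises ORA-30926 rather than silently applying an arbitrary update.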

The point of the story is, that evening, we met up for drinks with another Developer (Dev2) whom we have both known for a couple of decades. Dev says to Dev2 “DBA’s – they are never happy, going around being all DBA-ish and No-you-cant today”, and (unprompted) Dev2 says “BYPASS_UJVC?”

I like Developers – I used to be one – but the role of Developer and DBA should be symbiotic, not adversarial as it can become upon occasion. We should work together – all of the best systems I have delivered have had a great relationship between Dev and DBA – but don’t ask me to bend my database out of shape just so you can take a short cut. Talk about physicalities, data access, data life-cycle, volume and performance before a line of code has been written and we will all work much better together. If all parties feel they have some ownership of the system, there is a greater chance of success and joy.