Oracle Database GLOBAL_NAME, DB_DOMAIN and database links

I needed to create a database link on a 19c (19.7) Oracle DB recently. Importantly, the database link name must match the database name, as security dictates that GLOBAL_NAMES=TRUE for this set of databases. The DB was created using the GUI on an ODA, meaning there were quite a few initialisation parameters set by the GUI.
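
You can check that the enforcement is actually switched on before you start (output here illustrative, from my system):

> select name,value from v$parameter where name = 'global_names';

NAME          VALUE
------------- ------
global_names  TRUE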

Let's see what happened and what I had to do to actually get a database link with the same name as the database.

> create database link ORA11A connect to neil identified by neil using 'ORA11A';
Database link created.

> select owner,db_link,username,host from dba_db_links;

OWNER DB_LINK                 USERNAME     HOST         
----- ----------------------- ------------ ------------ 
SYS   ORA11A.CHANDLER.UK.COM  NEIL         ORA11A       

> drop database link ORA11A;

What? Why was a domain being appended to the database link name?

> select name,value,description from v$parameter where name = 'db_domain';

NAME       VALUE            DESCRIPTION
---------- ---------------- --------------------------------------------------------------------------------
db_domain  chandler.uk.com  directory part of global database name stored with CREATE DATABASE

> alter system reset db_domain scope=spfile sid='*';
(restart db)
> select name,value,description from v$parameter where name = 'db_domain';

NAME       VALUE            DESCRIPTION
---------- ---------------- --------------------------------------------------------------------------------
db_domain                   directory part of global database name stored with CREATE DATABASE

> create database link ORA11A connect to neil identified by neil using 'ORA11A';
> select owner,db_link,username,host from dba_db_links;

OWNER DB_LINK                 USERNAME     HOST        
----- ----------------------- ------------ ------------
SYS   ORA11A.CHANDLER.UK.COM  NEIL         ORA11A      

So it’s not coming from the DB_DOMAIN! But there’s a clue in the description “directory part of global database name stored with CREATE DATABASE”. Let’s check the global name:

> select * from global_name;
GLOBAL_NAME
------------------------------
ORA19A.CHANDLER.UK.COM

So maybe it’s taking the domain from here? Can I prove that?

> alter database rename global_name to "ORA19A.COM";
Database altered.

> select * from global_name;
GLOBAL_NAME
------------------------------
ORA19A.COM

> create database link ORA11A connect to neil identified by neil using 'ORA11A';
Database link created.

> select owner,db_link,username,host from dba_db_links;
OWNER DB_LINK                 USERNAME     HOST 
----- ----------------------- ------------ ------------ 
SYS   ORA11A.CHANDLER.UK.COM  NEIL         ORA11A
SYS   ORA11A.COM              NEIL         ORA11A

[note that I had not deleted the previous DB link so it was still there!]

So, all I need to do now is remove the “.COM”.

> alter database rename global_name to "ORA19A";
Database altered.

> select * from global_name;
GLOBAL_NAME
------------------------------
ORA19A.COM

Oh dear. You can’t remove a domain suffix from a global name. If you don’t specify the domain, Oracle helpfully adds it right back in there for you.

You need to correct this in an unorthodox way.
The GLOBAL_NAME is really a value in SYS.PROPS$

> select text from dba_views where view_name = 'GLOBAL_NAME';

TEXT
-----------------------------------------------------------
select value$ from sys.props$ where name = 'GLOBAL_DB_NAME'

So we will have to update the value in SYS.PROPS$ directly.

THIS IS NOT A SUPPORTED ACTION.
DON’T DO IT WITHOUT A CONVERSATION WITH ORACLE SUPPORT.
[Check MOS note: 1018063.102]
*** updating sys.props$ incorrectly can corrupt your entire database ***


> select value$,comment$ from sys.props$ where name = 'GLOBAL_DB_NAME';

VALUE$                         COMMENT$
------------------------------ --------------------------------------------------------------------------------
ORA19A.COM                     Global database name

> update sys.props$ set value$='ORA19A' where name = 'GLOBAL_DB_NAME';
1 row updated.

> commit;
Commit complete.

> select value$,comment$ from sys.props$ where name = 'GLOBAL_DB_NAME';
VALUE$                         COMMENT$
------------------------------ --------------------------------------------------------------------------------
ORA19A                         Global database name

> create database link ORA11A connect to neil identified by neil using 'ORA11A';
Database link created.

> select owner,db_link,username,host from dba_db_links;
OWNER DB_LINK                 USERNAME     HOST 
----- ----------------------- ------------ ------------ 
SYS   ORA11A.CHANDLER.UK.COM  NEIL         ORA11A
SYS   ORA11A.COM              NEIL         ORA11A
SYS   ORA11A                  NEIL         ORA11A

Finally I can use the DB_LINK with GLOBAL_NAMES=TRUE!
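
A quick sanity check that the link now resolves with GLOBAL_NAMES=TRUE enforced (output illustrative):

> select * from dual@ORA11A;

D
-
X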

If only I’d created the database without specifying a db_domain in the first place…


Oracle 19C New Feature Availability

Since Oracle Open World 2018, Oracle have been trumpeting a few cool new features in the Oracle 19C database, the headline two for administrators being “Automatic Indexing” and “Real-Time Statistics”.

With the release of Oracle 19.2 on Exadata (on-premises – not yet on Cloud!) this week, we also got the documentation, which allows us to answer a very important question: on which platforms will we be able to use these 2 (and other) awesome new features? We may not like the answers – Oracle has decreed that we cannot have them on-premises for SE2, Enterprise Edition or on an ODA.

This is confined purely to Exadata on-prem and the cloud offerings. That’s it. There are no technical reasons why this should be the case – it’s just code – so the restrictions can only be marketing-based (like Hybrid Columnar Compression).

I’m disappointed, but it does mean that the more than 80% of Oracle database clients still not living in the cloud should continue to need that aspect of my services!

Oracle Factoring

[Image: Oracle licensing factoring]


Which Oracle Release are you using?

Post Date: August 2018!

Recently an awesome Oracle Guru friend of mine questioned someone who was installing 11.2.0.4 with the word “seriously”, which I think shows that Oracle staff sometimes don’t live in the same technological world as the rest of business.

My response was:

11.2.0.4 is normal. In the real world:

– large corps mostly use old versions
– consultants look at current versions
– Oracle staff look at unreleased versions

I have known instances of Oracle staff blogging about how a feature works when, in the officially released versions, it didn’t. It only worked that way in a version which was released some months later. There was no reference to the release and the fact that there was a significant functional change between releases (but I suppose that’s a blog and not “official” documentation – the official documentation said nothing at all about how that particular feature worked. Nothing! So thank you mystery blogger.)

Anyway, the point of this post was that I then ran a small Twitter poll to my most excellent and cosy band of followers to see which Oracle releases people were using. I asked 2 questions (because Twitter is limited) and here are the results:

[Twitter poll image: highest Oracle version in use]

So more people have some form of 12 in the DB, but only 7% have 18 in Production. This at a time when most Oracle staff are thinking about Oracle 20 and 21, as Oracle 19 is done and just awaiting release. Think about that, Oracle… Whenever I am at a presentation by an Oracle PM, I think “wow – I might be able to use those new features in 2-5 years”.

[Twitter poll image: lowest Oracle version in use]

So very few people have 12.x as their lowest version (which would include 18, as that’s really 12.2.0.2) and MORE have 9, 8 or 7 as their major headache! Yes – there are more on 9, 8 and 7 than are using 18 in Production. Let’s say that once more: there are more on 9, 8 and 7 than are using 18 in Production.

So why upgrade? Very few databases take advantage of all of the latest sexy features. I suspect that many of the applications still being produced could run on Oracle 7.3.4 – more so as the proliferation of ORMs like Hibernate has left a generation of developers with little appreciation of the database and how to take advantage of it**. So why upgrade? These days? Security. Patches. Support. Without those 3 things, you are living on hope: hope that nothing goes wrong, as you’ll struggle to find anyone to fix it – including Oracle. Hope that nobody tries to hack your 8.1.7 database, as it’s a Swiss Cheese of vulnerabilities, like all 7, 8, 8i, 9i and 10G DBs. Not that we hear about systems being compromised every day on the news.

Anecdote: By coincidence I was talking to a client at about the same time and, whilst they are mostly a 12.1 shop, they still had an old 8i database hanging around… as usual it was going to be “retired soon” (which in my experience means sometime in the next 15-20 years) and wasn’t worth the time and effort to upgrade, or even to build a business case for upgrading it!

**Anecdote: At a client a few years ago, an excellent Java Developer asked me to put an index on a flag column. I pointed out that with only 3 values an index wouldn’t help, and as this was OLTP a bitmap index wasn’t appropriate due to concurrency issues. He said that with 3 values indexed, his query would be 3 times faster! We sat down and I explained some database fundamentals to him, at which point he said “don’t put an index on there – that would be a stupid idea”. A few weeks later he came back over and asked about SQL queries: “I’m trying to aggregate this data – can the database help?”. I spent 30 minutes showing him in-line views and windowing analytic functions and we wrote the code he needed for his output. “Wow! You have just saved me 3 days of Java coding…” – he was going to pull everything into Java and process it there, so as well as 3 days of coding, we also saved the SAN, the network and a whole bunch of CPU by dealing with data at the database layer – which is always the most efficient place to deal with it!

Getting Started with GoldenGate

Not a blog post per se, but a link to my article in the UKOUG quarterly magazine OracleScene. It is a great publication with some simply fantastic articles. It’s completely free, and is available online as well as in print at Oracle offices in the UK and at UKOUG events. My article will take you through a simple, step-by-step, Oracle-centric one-way replication on Oracle VM VirtualBox. From nothing to fully working replication. Sexy! Hope you enjoy it.

Accessing STATUS columns efficiently

A frequently recurring design problem with relational databases is locating unprocessed rows in a large table – knowing which rows of data are still yet to be processed.

The problem with a STATUS column is that it generally has low cardinality; there are probably only a handful of distinct values [(C)omplete, (E)rror, (U)nprocessed or something like that]. Most records will be (C)omplete. This makes STATUS a poor candidate for standard B-Tree indexation. In a high throughput OLTP database, using bitmap indexes is probably not an option due to concurrency.

[Aside: When coding flag columns in Oracle, ALWAYS use a VARCHAR2(1 CHAR) {or CHAR(1 CHAR) if you prefer, but a CHAR is a VARCHAR2 under the covers and occupies the same number of bytes}. This is in preference to a NUMBER(1), which occupies more bytes for a “1” than a “0”, so when you update it you run the risk of row migration, chained rows and a performance hit. ORMs like Hibernate frequently code for NUMBER by default. Override this!]
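
For illustration, a minimal sketch of the preferred flag definition (table and column names hypothetical):

-- one character whether 'C', 'E' or 'U': updating the flag never grows
-- the row, so no row migration
create table work_queue
( id      number           not null
, status  varchar2(1 char) not null
, constraint work_queue_status_chk check (status in ('C','E','U'))
);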

So what are my options? There’s a short list of possible table accesses for a low cardinality column.

  1. Table scan. In an OLTP database where you only want a tiny fraction of the rows in the table, this would be a bad choice.
2. Index the accessed columns and accept the inevitable INDEX_SCAN or FAST_FULL_INDEX_SCAN. This is not great and you probably need a Histogram on the column to convince the optimizer to use the index for your low frequency values. Otherwise you may be back to the table scan.
3. Make the “Complete” status “NULL”.
  4. Use a function-based index which makes the Complete status seem to be NULL for a specific query.

So what’s with options 3 and 4, why are they good, and how do we use them?

Unlike some RDBMSs, Oracle does not store NULL values in its simple (non-composite) B-tree indexes. Therefore, if you choose Option (3) and make your “Complete” status be represented by a NULL, you will maintain an index on STATUS in which the only values stored are values you are interested in. This makes the index very sexy to the optimizer, as it will generally be very tiny. However, we face one small problem: convincing Developers that a NULL is a valid status can be difficult. A NULL is a non-representative value. It is not supposed to represent anything. It means “I don’t know”. It doesn’t behave the same as normal values. This tends to freak out Developers and designers sometimes.
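
A minimal sketch of Option 3 (names hypothetical; STATUS must be nullable, with NULL meaning Complete):

-- Complete rows carry a NULL status, so they are simply absent from
-- this single-column B-tree index, which stays tiny
create index task_queue_status_i on task_queue (status);

-- served by the tiny index: only unprocessed/error rows are indexed
select id from task_queue where status = 'U';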

That’s where Option 4 comes in. If we wrap the index definition in a CASE statement, to produce a function-based index, we have a highly specific, tailored index on our table. If the SQL predicate matches the index expression exactly, we get a serious performance payoff.

But don’t take my word for it. Here’s a worked example from my laptop:

Here’s the table and its data distribution (16m rows, and a handful we care about):

NEIL @ ORCL01 > desc test_table
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                            NOT NULL NUMBER
 STATUS                        NOT NULL VARCHAR2(1 CHAR)
 DESCRIPTION                   NOT NULL VARCHAR2(100 CHAR)

NEIL @ ORCL01 > select status,count(*) from test_table group by status

S   COUNT(*)
- ----------
E         16
C   16777216
Y         32

Here are the indexes on the table, and their sizes. As you can see, the function-based index is absolutely tiny, making it as attractive to storage admins as it is to the optimizer.

- alter table test_table add constraint test_table_pk primary key (id);
- create index test_table_CASE on test_table (case status when 'Y' then status else null end);
- create index test_table_COVER_COMP on test_table (status, id) compress 1;
- create index test_table_STATUS on test_table (status) compress 1;



NEIL @ ORCL01 > select segment_name,segment_type,sum(bytes/1024) kb from user_extents 
where segment_name like 'TEST_TABLE%' 
group by segment_type,segment_name order by 2 desc,1;

SEGMENT_NAME               SEGMENT_TYPE               KB
-------------------------- ------------------ ----------
TEST_TABLE                 TABLE                  555008
TEST_TABLE_CASE            INDEX                      64
TEST_TABLE_COVER_COMP      INDEX                  658432
TEST_TABLE_PK              INDEX                  319488
TEST_TABLE_STATUS          INDEX                  413696

Some Index stats:
INDEX_NAME                DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR STATUS     NUM_ROWS SAMPLE_SIZE LAST_ANAL
------------------------- ------------- ----------------------- ----------------------- ----------------- -------- ---------- ----------- ---------
TEST_TABLE_CASE                       1                       1                       6                 6 VALID            32          32 21-FEB-16
TEST_TABLE_COVER_COMP          16748149                       1                       1            125447 VALID      16748149      234974 21-FEB-16
TEST_TABLE_PK                  17003239                       1                       1             91391 VALID      17003239      492287 21-FEB-16
TEST_TABLE_STATUS                     3                   13828                   32011             96034 VALID      16257590      363295 21-FEB-16

Where we have a choice of useful indexes, we get a FAST FULL SCAN with a hefty cost. A histogram could have given us an index RANGE SCAN, which can be very good.
With no Histogram:

select id from test_table where status = 'Y';

Plan hash value: 1140618830

----------------------------------------------------------------------------------------------
| Id  | Operation            | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                       |       |       | 18753 (100)|          |
|*  1 |  INDEX FAST FULL SCAN| TEST_TABLE_COVER_COMP |  5592K|    42M| 18753   (1)| 00:00:01 |
----------------------------------------------------------------------------------------------

With a histogram in place on STATUS, you get a much better plan as the covering index avoids the need for the table look-up. You also get the risk that the optimizer may have bind variable peeking issues and other complications should we have lots of table joins.
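
For reference, a histogram on STATUS can be gathered with something like this (a sketch; adjust the sizing to suit your system):

begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'TEST_TABLE',
    method_opt => 'FOR COLUMNS STATUS SIZE 254');
end;
/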

select id from test_table where status = 'Y'

Plan hash value: 2912582684

------------------------------------------------------------------------------------------
| Id  | Operation        | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                       |       |       |     3 (100)|          |
|*  1 |  INDEX RANGE SCAN| TEST_TABLE_COVER_COMP |    32 |   256 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

NOTE: Ditching the covering index and just using the index on STATUS is pretty efficient too when combined with a histogram:

select id from test_table where status = 'Y'

Plan hash value: 2416598805

---------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                   |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_TABLE        |    32 |   256 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | TEST_TABLE_STATUS |    32 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------


And now with the function-based index; the CASE statement removes all statuses we are not interested in, giving a tiny, tidy index.

NOTE: The Predicate in the query must EXACTLY match the function-based index for it to be used.

select id from test_table where case status when 'Y' then status else null end = 'Y'

Plan hash value: 2073004851

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |       |       |     7 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_TABLE      |    32 |   256 |     7   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | TEST_TABLE_CASE |    32 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Conclusion: For a highly skewed STATUS column you need a histogram, which is something you should mostly avoid in OLTP systems using BIND variables. Having a highly focussed function-based index allows for a tiny self-maintaining index which is guaranteed to only be used for queries that you want it to be used for.

NOTE: The original idea behind using NULLS to minimise index size came from the performance expert, Jonathan Lewis. I have implemented both NULL-as-complete design and case-based indexes at several clients, in varying forms, and always to great success.

Oracle Audit Control part 3 – OS files

Just realized that I wrote a follow-up to Part 2 two years ago and never posted it! So here are the stripped-out key facts to round off auditing control a bit more:

Deleting O/S audit files is, like, totally necessary. In Oracle 12C (12.1) they are created by default, as AUDIT_SYS_OPERATIONS now defaults to TRUE, so you are going to be filling up your AUDIT_FILE_DEST whether you realize it or not.

[NOTE: In theory, this could end up being a performance problem as well as a space problem if you have many millions of files in the AUDIT_FILE_DEST. In Oracle 11G, when allocating an AUDIT file, it would check to see if a file for that SPID already existed and add an incremental number. The act of calling vx_dirbread and vx_dirscan can get terribly slow if there are lots of files to wade through. This mechanism is different in 12C but I have not traced the kernel calls to see if the vx_ calls have gone away.]
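
A quick way to gauge how many files have built up, in the same style as the size checks below (paths illustrative):

host ssh server01 ls /u01/app/oracle/admin/ORCL/adump | wc -l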

To clean up the O/S audit files, we use the same commands as we did in Part 2, but changed to use AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS.

You also need to be aware of the RAC_INSTANCE_NUMBER. For database-level audit it’s irrelevant, as it is for single instance, but with RAC this parameter tells you which node to clean up, as they are all individual installs with their own audit files (assuming a Grid Infrastructure install per node).

Commands:

-- Show the BEFORE sizes
host ssh server01 du -m -s /u01/app/oracle/admin/ORCL/a*
host ssh server02 du -m -s /u01/app/oracle/admin/ORCL/a*
begin

-- Loop around every instance in the RAC cluster and run the command
for i in (select inst_id from gv$instance)
loop

-- cleanup AUDIT_FILE_DEST (operating system audit files)
-- set cleanup for 60 days
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
LAST_ARCHIVE_TIME => systimestamp-60,
RAC_INSTANCE_NUMBER => i.inst_id );

-- And cleanup based upon that date
DBMS_AUDIT_MGMT.clean_audit_trail(
audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
use_last_arch_timestamp => TRUE);

end loop;

end;
/

-- And check the sizes AFTER
host ssh server01 du -m -s /u01/app/oracle/admin/ORCL/a*
host ssh server02 du -m -s /u01/app/oracle/admin/ORCL/a*

Output:

15:58:56 SYS @ ORCL1 > host ssh server01 du -m -s /u01/app/oracle/admin/ORCL/a*
776 /u01/app/oracle/admin/ORCL/adump
15:58:56 SYS @ ORCL1 > host ssh server02 du -m -s /u01/app/oracle/admin/ORCL/a*
694 /u01/app/oracle/admin/ORCL/adump
15:58:56 SYS @ ORCL1 >
15:58:56 SYS @ ORCL1 >
15:58:56 SYS @ ORCL1 > begin
15:58:56   2
15:58:56   3   for i in (select inst_id from gv$instance)
15:58:56   4   loop
15:58:56   5
15:58:56   6      -- cleanup AUDIT_FILE_DEST (operating system audit files)
15:58:56   7      -- set cleanup for 60 days
15:58:56   8      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
15:58:56   9      AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
15:58:56  10      LAST_ARCHIVE_TIME => systimestamp-50,
15:58:56  11      RAC_INSTANCE_NUMBER => i.inst_id );
15:58:56  12
15:58:56  13      -- And cleanup based upon that date
15:58:56  14      DBMS_AUDIT_MGMT.clean_audit_trail(
15:58:56  15        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
15:58:56  16        use_last_arch_timestamp => TRUE);
15:58:56  17
15:58:56  18   end loop;
15:58:56  19
15:58:56  20  end;
15:58:56  21  /
PL/SQL procedure successfully completed.

15:58:56 SYS @ ORCL1 >
15:58:56 SYS @ ORCL1 >
15:58:56 SYS @ ORCL1 > host ssh server01 du -m -s /u01/app/oracle/admin/ORCL/a*
354 /u01/app/oracle/admin/ORCL/adump
15:58:57 SYS @ ORCL1 > host ssh server02 du -m -s /u01/app/oracle/admin/ORCL/a*
364 /u01/app/oracle/admin/ORCL/adump
15:58:57 SYS @ ORCL1 >
 
And just one final bit. If you change the default

AUDIT_TRAIL=DB to
AUDIT_TRAIL=DB, EXTENDED

the audit writes the first 2,000 characters of the SQL command to DBA_AUDIT_TRAIL.SQL_TEXT, so you get the actual SQL statement audited, rather than just the action. Very handy!
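
AUDIT_TRAIL is a static parameter, so the change needs an spfile update and a restart (a sketch):

alter system set audit_trail=DB,EXTENDED scope=spfile;
-- restart the database for the new value to take effect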

Accessing a user when you don’t know the password

There are times when you may need to logon to a database user, probably a schema owner, to do a release, but you don’t know the password. You may not be able to (easily) change the password, as it could be embedded in application connect strings or worse.

It may not be possible simply to change your session using alter session set current_schema=<schema-to-be-changed>; to auto-prefix all of your selects with the schema, especially if the release references “USER_” views, which are unaffected by the session setting.

You need to become the account.

So, what you need to do is record the current password encryption, change the password, logon and do your maintenance, logoff and change the password back!

And this is how you do it:
Create an account:

04:38:35 SYS @ ORCL01 > create user hackme identified by password1;

User created.

04:38:35 SYS @ ORCL01 > grant connect,resource to hackme;

Grant succeeded.

Grab the encryption. This is stored in SYS.USER$.SPARE4 plus SYS.USER$.PASSWORD:

04:38:35 SYS @ ORCL01 > select name,'alter user '||name||' identified by values '''||spare4||';'||password||''';' command from sys.user$ where name = 'HACKME'
04:38:35   2  /

NAME       COMMAND
---------- ------------------------------------------------------------------------------------------------------------------------
HACKME     alter user HACKME identified by values 'S:59F38E64D3914BB9396C5D4B968380676333EA7CB34F2471A85C4770A7BA;H:2D3693D1357CF012D9A11EFE3D792C0C;T:B2261F70475F3BD6173867C68427E346C53216E3EC305121DDAF4E13E72E6889DF1E314934F3C5F46E5F12B82D8AC144955C937413FD192904A2762D66B31A872429AB78E72AFC2BC4101E68DB5903A6;4345E749C3EBB34A';

Now we can change the password, logon with the new password, logoff back to a DBA and change it back using the previously captured command:

04:38:35 SYS @ ORCL01 > alter user hackme identified by hacker;

User altered.

04:38:35 SYS @ ORCL01 > connect hackme/hacker;
Connected.

04:38:35 HACKME @ ORCL01 > show user
USER is "HACKME"

04:38:35 HACKME @ ORCL01 > connect sys/oracle as sysdba
Connected.

04:38:35 SYS @ ORCL01 > alter user HACKME identified by values 'S:59F38E64D3914BB9396C5D4B968380676333EA7CB34F2471A85C4770A7BA;H:2D3693D1357CF012D9A11EFE3D792C0C;T:B2261F70475F3BD6173867C68427E346C53216E3EC305121DDAF4E13E72E6889DF1E314934F3C5F46E5F12B82D8AC144955C937413FD192904A2762D66B31A872429AB78E72AFC2BC4101E68DB5903A6;4345E749C3EBB34A';
User altered.

04:38:57 SYS @ ORCL01 > conn hackme/password1
Connected.

Magic!

You can also use DBMS_METADATA to get the encryption:

04:39:08 SYS @ ORCL01 >  set long 10000

04:39:08 SYS @ ORCL01 >  select dbms_metadata.get_ddl('USER','HACKME') command from dual;

COMMAND
--------------------------------------------------------------------------------

CREATE USER "HACKME" IDENTIFIED BY VALUES 'S:F299C40420DD341AF9AC4AC89C59A2BB1DFCEF01DB5E3C2B5AD837100117;H:2D3693D1357CF012D9A11EFE3D792C0C;T:101F2A697CA5F77B089C4ECA8EE2DDB82E340D46FE60712445699C5715C3C71BA06532F52CFA987076B51254E5E5A565C44E9F7479018F924707F30874A0BF958D1B8935B7434CF993D3346FF53F28B4;4345E749C3EBB34A'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"

Please read the COMMENTS to learn about Proxy Accounts – an (admin) alternative from 10G onwards!
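
For reference, the proxy approach looks something like this (DBA user name hypothetical) – no password capture or reset needed:

alter user hackme grant connect through neil_dba;
-- then connect as HACKME using the proxy user's own password:
-- connect neil_dba[hackme]/neil_dba_password
alter user hackme revoke connect through neil_dba;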

Killing Sessions

You have a session which is out of control. You decide to kill it. What options are available to you as a DBA?

Assuming you don’t have access to the O/S to be able to use O/S commands to kill the session (using kill -9 in Unix/Linux, or orakill in Windows to kill the thread within the process – “orakill sid thread”), you probably turn to:

alter system kill session 'sid, serial#, @inst_id' immediate;

(Note the undocumented @inst_id, which can be used to identify specific RAC instances, rather than the instance you are connected to.) Note that IMMEDIATE does not change the processing; it just returns control to the command line immediately.

You can generate this command using

select 'alter system kill session ''' || sid || ', ' || serial# || ', @' || inst_id || ''';' from gv$session;


However, despite the name, this does not actually kill the session. You may get a pause issuing this command, followed by a “session marked for kill” message. The session then gets a status of “KILLED” in gv$session, but the session does not go away. As a further side effect, frequently the O/S process id is removed from gv$process, so you can’t now locate the process in the O/S to (ora)kill it.

What actually happens is a bit gets set against the session which effectively says “When you next check this bit, kill yourself”. But sometimes the session is busy, is rolling back (check for changing values in gv$transaction.used_urec), is waiting for a reply which never comes or is caught in a tight loop, and never gets around to checking the bit, and so it doesn’t die.
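
To see whether a killed session is actually busy rolling back, watch USED_UREC counting down (a sketch):

select s.inst_id, s.sid, s.serial#, t.used_urec
  from gv$session s, gv$transaction t
 where t.inst_id = s.inst_id
   and t.addr    = s.taddr
   and s.status  = 'KILLED';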

Does anyone know if pmon comes along, notices that you have a session marked for kill, kills it and initiates recovery for the session???

So, what’s a DBA to do? How do I actually kill the session? Step forward little-known command:

alter system disconnect session 'sid,serial#,@inst_id' POST_TRANSACTION | IMMEDIATE

And to generate it:
select 'alter system disconnect session ''' || sid || ', ' || serial# || ', @' || inst_id || ''';' from gv$session;

Again, note the undocumented @inst_id option for RAC.

What this command does is effectively kill the operating system process – right now, or after the next commit/rollback if you include the POST_TRANSACTION option. It is the same as issuing an (ora)kill command. The IMMEDIATE here means “kill it now!”

Bang! and the session is gone.

Don’t kill the wrong session… you may well take your database down.

Primary Keys and their supporting indexes

Sometimes things just happen which make you realise that stuff you thought all experienced DBAs knew about isn’t common knowledge. I suppose it’s a side effect of working with the same evolving product for over 25 years at dozens of clients, when the colleague you are talking to has a mere decade of experience at fewer companies (and therefore less diversity of challenge).

Today I noticed that a release was creating a table, then an index, and then adding a Primary Key based upon the index. Pretty normal, but the indexes were non-unique. Interesting. Is that valid? (yes) Is it sensible? (under some circumstances, also yes). Is it wrong but nobody noticed until it got to me? Probably.

However, there are more variables at play here than first meet the eye. The Primary Key is being enforced by a pre-existing non-unique index, rather than creating its own index “dynamically” (which would be unique under most circumstances). So we have a few questions to answer:

  1. Is the index being used optimal?
  2. How do I know it’s using a pre-created index and the constraint did not create its own index?
  3. What happens if I drop the PK constraint?


Scenario 1: create table, create constraint

You get a nice neat fast index enforcing the constraint. If you drop the constraint, the index will also get removed. This may or may not be what you want. You may need to put a new index back on the table…

Scenario 2: create table, create unique index, create constraint

You get a nice neat fast index enforcing the constraint. If you drop the constraint, the index does not get removed. This may or may not be what you want. The index which was left behind might cause a problem if you miss the fact it’s still around…

Scenario 3: create table, create non-unique index (same as constraint columns), create constraint

The index enforcing the constraint is slightly less efficient enforcing the constraint as it has to optimize to an index range scan, not a unique scan. However, if you drop the constraint you still have a non-unique index. This may be a very useful ability for bulk data loads with questionable data, although the use-cases are limited and tend to be Warehouse-centric.

If you have a deferrable constraint, you must have a non-unique index to enforce the constraint as there may temporarily be duplicate keys in the table part-way through the transaction.
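
For example (a sketch; table name hypothetical), a deferrable PK is necessarily policed by a non-unique index:

create table neil_def (col_id number, col_txt varchar2(10 char));
alter table neil_def add constraint neil_def_pk primary key (col_id)
  deferrable initially immediate;
-- the policing index is NONUNIQUE, as duplicates may legitimately
-- exist mid-transaction while the constraint is deferred
select index_name,uniqueness from user_indexes where table_name = 'NEIL_DEF';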

Scenario 4: create table, create non-unique composite index, create constraint which uses leading columns of that index but not all of the columns

The index may be somewhat less efficient at enforcing the constraint, having fewer rows per block and possibly more levels, but this inefficiency may help overall system efficiency by having fewer indexes to maintain – maintaining 2 indexes instead of 1 is a 100% overhead in index maintenance. The composite index cannot be a unique index.


Some worked examples showing the above scenarios (run on Oracle 12.1.0.2.0 in a PDB):

NEIL @ ORCL01 > -- Scenario 1
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) ;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_PK              UNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index is gone...
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
no rows selected
NEIL @ ORCL01 > -- Scenario 2
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create unique index neil_i on neil(col_id);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) ;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               UNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               UNIQUE
NEIL @ ORCL01 > -- Scenario 3
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create index neil_i on neil(col_id);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) using index neil_i;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               NONUNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               NONUNIQUE
NEIL @ ORCL01 > -- Scenario 4
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create index neil_i_composite on neil(col_id,col_txt);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) using index neil_i_composite;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I_COMPOSITE     NONUNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I_COMPOSITE     NONUNIQUE

So how do I tell if, when I drop my constraint, I will also drop the index? This does not appear to be exposed in any DBA_ view of which I am aware, so I have had to rely upon this post from Jonathan Lewis to help me decipher the information. You need to see if certain bits are set in the column sys.ind$.property. Here’s my worked example to show the information we seek:

NEIL @ ORCL01 > drop table neil_cons_create;
Table dropped.
NEIL @ ORCL01 > create table neil_cons_create (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > alter table neil_cons_create add constraint neil_cons_create_pk primary key (col_id);
Table altered.
NEIL @ ORCL01 > drop table neil_pre_create;
Table dropped.
NEIL @ ORCL01 > create table neil_pre_create (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create unique index neil_pre_create_i on neil_pre_create(col_id);
Index created.
NEIL @ ORCL01 > alter table neil_pre_create add constraint neil_pre_create_pk primary key (col_id);
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name like 'NEIL%';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_CONS_CREATE_PK  UNIQUE
NEIL_PRE_CREATE_I    UNIQUE

select do.owner,do.object_name,do.object_id,ind.property, 'Keep Index' action
  from sys.ind$ ind
       ,dba_objects do
where do.object_name like 'NEIL%'
  and do.object_id = ind.obj#
  and bitand(ind.property,4097) = 4097
union all
select do.owner,do.object_name,do.object_id,ind.property, 'Drop Index' action
  from sys.ind$ ind
      ,dba_objects do
where do.object_name like 'NEIL%'
  and do.object_id = ind.obj#
  and bitand(ind.property,4097) <> 4097
order by 1,2,3
/
OWNER        OBJECT_NAME         OBJECT_ID  PROPERTY   ACTION
------------ ------------------- ---------- ---------- --------------------
NEIL         NEIL_CONS_CREATE_PK      93814       4097 Keep Index
NEIL         NEIL_PRE_CREATE_I        93816          1 Drop Index