Fundamental Security Part Eight – Unified Audit

January 2024 – Oracle 19C+

If you are on Oracle 12.1 or later, and you have not explicitly disabled it, you are already using Unified Audit

We previously discussed Data Encryption at Rest. Now let’s talk about Unified Audit, and why you should be using it.


When you create an Oracle database, there is a very small amount of Unified Auditing pre-enabled, and no “traditional” auditing. But why audit? Well, do you know what’s happening in the database? Who is connecting? Is anyone accessing data that they should not? Changing database objects? Creating backdoors? What is lurking beneath the surface?

Traditional Oracle Auditing:
It’s deprecated in 21C and desupported in 23C. Stop using it. It’s difficult to gather all of the data from several diverse locations in the Database and on the O/S – and much worse if you have multiple RAC clusters! Convert to Unified Audit. See MOS: 2909718.1 for a converter.

Unified Auditing

For some information about enabling Unified Audit in Pure mode and other bits, check this blog post

Unified Auditing uses Policies to decide what to audit. You can enable and disable these if you have the AUDIT_ADMIN role. Typically there are 3 enabled by default in 23C (the exact set varies by release and installed options).

SQL> select * from audit_unified_enabled_policies;

POLICY_NAME                    ENABLED_OPT ENTITY_NAME ENTITY SUC FAI
------------------------------ ----------- ----------- ------ --- ---
ORA_LOGON_FAILURES             BY USER     ALL USERS   USER   NO  YES
ORA_SECURECONFIG               BY USER     ALL USERS   USER   YES YES
ORA$DICTIONARY_SENS_COL_ACCESS BY USER     ALL USERS   USER   YES YES <- 23C
ORA_DV_DEFAULT_PROTECTION      BY USER     ALL USERS   USER   YES YES <- if DV
ORA_DV_SCHEMA_CHANGES          BY USER     ALL USERS   USER   YES YES <- if DV

ORA_LOGON_FAILURES – detects failed logins
ORA_SECURECONFIG – audits high-level DBA type actions, such as ALTER SYSTEM and CREATE USER
ORA$DICTIONARY_SENS_COL_ACCESS – audits access to columns containing password hashes and sensitive data
ORA_DV_DEFAULT_PROTECTION – Oracle Data Vault Only
ORA_DV_SCHEMA_CHANGES – Oracle Data Vault Only

There are a further 10 built-in policies (in 23.3)

SELECT DISTINCT policy_name FROM audit_unified_policies;

POLICY_NAME
ORA$DICTIONARY_SENS_COL_ACCESS
ORA_ACCOUNT_MGMT
ORA_ALL_TOPLEVEL_ACTIONS
ORA_CIS_RECOMMENDATIONS
ORA_DATABASE_PARAMETER
etc...

Their precise function can be determined by querying the view AUDIT_UNIFIED_POLICIES. The two most notable are ORA_CIS_RECOMMENDATIONS and ORA_STIG_RECOMMENDATIONS. Enable these two policies, plus a policy to audit database LOGONS and LOGOFFS (in 23.3, just enable the built-in ORA_LOGON_LOGOFF), and you will be reasonably covered for basic audit as recommended by the Center for Internet Security (CIS) and the Security Technical Implementation Guide (STIG) – the baseline compliance for the US Department of Defense (sic)
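
Enabling the built-in policies is a one-liner each (assuming you have the AUDIT_ADMIN role):

AUDIT POLICY ora_cis_recommendations;
AUDIT POLICY ora_stig_recommendations;
AUDIT POLICY ora_logon_logoff;  -- 23.3 onwards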

SELECT policy_name,audit_option,nvl(object_name,'N/A') object_name,audit_option_type
FROM audit_unified_policies
WHERE policy_name = 'ORA_CIS_RECOMMENDATIONS'
ORDER BY 1,2;

POLICY_NAME             AUDIT_OPTION          OBJECT_NAM AUDIT_OPTION_TYPE
----------------------- --------------------- ---------- -----------------
ORA_CIS_RECOMMENDATIONS ALTER DATABASE LINK   NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER FUNCTION        NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PACKAGE         NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PACKAGE BODY    NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PROCEDURE       NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PROFILE         NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER ROLE            NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER SYNONYM         NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER SYSTEM          NONE       SYSTEM PRIVILEGE
ORA_CIS_RECOMMENDATIONS ALTER SYSTEM          NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER TRIGGER         NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER USER            NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE DATABASE LINK  NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE FUNCTION       NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PACKAGE        NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PACKAGE BODY   NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PROCEDURE      NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PROFILE        NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE ROLE           NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE SYNONYM        NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE TRIGGER        NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE USER           NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP DATABASE LINK    NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP FUNCTION         NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PACKAGE          NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PACKAGE BODY     NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PROCEDURE        NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PROFILE          NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP ROLE             NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP SYNONYM          NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP TRIGGER          NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP USER             NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS GRANT                 NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS REVOKE                NONE       STANDARD ACTION
ORA_CIS_RECOMMENDATIONS SELECT ANY DICTIONARY NONE       SYSTEM PRIVILEGE

Creating your own policy is straightforward. Here are two I recommend creating:

neil_logons: this will capture every connection to the database. As well as identifying all connections (you really want to know who is in the database!), it can also be very helpful when troubleshooting connectivity issues.

neil_any_tab_privs: is anyone using an ANY privilege to access your data? They should really be accessing data via explicitly granted privileges. Unless this is a DBA doing some ad-hoc performance tuning or data validation/reconciliation (ideally with an assigned ticket!), you shouldn’t see any records for this policy.

CREATE AUDIT POLICY neil_logons
ACTIONS
LOGON, LOGOFF
CONTAINER=CURRENT;

CREATE AUDIT POLICY neil_any_tab_privs
PRIVILEGES
select any table, read any table, insert any table, delete any table, update any table
CONTAINER=CURRENT;

-- and enable them
audit policy neil_logons;
audit policy neil_any_tab_privs;

Housekeeping

Now you’re gathering audit data. Depending upon the nature of the activity in your database, you could have a LOT of it, so you really need to ensure you are managing it correctly – starting with housekeeping. First of all, it’s worth putting your audit data into its own tablespace. You also need a steer on how much of it to keep within the database for easy access, and to ensure it’s easy to tidy up. You also need to consider the overspill area – if the audit cannot be written to the DB (e.g. tablespace full, database in startup, read-only standby), it gets written instead to O/S files in “$ORACLE_BASE/audit/[instance-name]/[PDB-GUID]”, which need loading into the DB so they can be tidied up too:

Set the partition frequency to daily - each day gets a new partition. This doesn't speed up deletes, but it does make the cleanup tidier. The partition column is EVENT_TIMESTAMP (i.e. don't use EVENT_TIMESTAMP_UTC for your queries):

begin
DBMS_AUDIT_MGMT.ALTER_PARTITION_INTERVAL
 (INTERVAL_NUMBER   => 1,
  INTERVAL_FREQUENCY => 'DAY');
end;
/
Move the creation of new partitions to a dedicated tablespace:

begin
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION
 (AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  AUDIT_TRAIL_LOCATION => 'audit_tablespace');
end;
/
Create a DBMS_SCHEDULER job to tidy up:

BEGIN
dbms_scheduler.create_job(q'[AUDIT_HOUSEKEEPING]',
job_type=>'PLSQL_BLOCK', job_action=>
q'[
BEGIN
-- you need to set retention before calling cleanup
dbms_audit_mgmt.set_last_archive_timestamp
   (audit_trail_type       => dbms_audit_mgmt.audit_trail_unified
   ,last_archive_time     => trunc(systimestamp - INTERVAL '3' MONTH));

-- and cleanup
 dbms_audit_mgmt.clean_audit_trail
  (audit_trail_type       => dbms_audit_mgmt.audit_trail_unified
  ,use_last_arch_timestamp => true);

  -- and while we are at it, load overspill files into the table
  DBMS_AUDIT_MGMT.LOAD_UNIFIED_AUDIT_FILES;

END;
]'
,number_of_arguments=>0
,start_date=>trunc(systimestamp + interval '1' day)
,repeat_interval=> q'[FREQ = DAILY; INTERVAL = 1]'
,end_date=>NULL
,job_class=>q'[SCHED$_LOG_ON_ERRORS_CLASS]'
,enabled=>FALSE
,auto_drop=>FALSE
,comments=> q'[Daily clean-up Unified Audit older than 3 months]'
);
COMMIT;
dbms_scheduler.enable(q'[AUDIT_HOUSEKEEPING]');
END;
/

We’re auditing and we are keeping things tidy. Remember to treat each PDB as a separate database for auditing purposes, and audit the CDB too!
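
By way of a sketch of the CDB side (run in CDB$ROOT as a user with AUDIT_ADMIN): a policy created with CONTAINER=ALL is a common policy that exists in every container, whereas the CONTAINER=CURRENT policies above are local to the PDB they were created in:

-- run in CDB$ROOT: a common policy spanning all containers
CREATE AUDIT POLICY common_logons
ACTIONS
LOGON, LOGOFF
CONTAINER=ALL;

AUDIT POLICY common_logons;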

The main thing left to do is to actually monitor the audit for anything happening that you don’t expect, such as anyone using SELECT ANY TABLE when they should not be, or modifying objects when there’s no approved change. How you do that is very much site-specific, and up to you when querying UNIFIED_AUDIT_TRAIL.
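
As a starting point, here’s a minimal sketch of the sort of check you might run, using the neil_any_tab_privs policy created above (adjust the policy name and time window to suit):

SELECT event_timestamp, dbusername, action_name,
       object_schema, object_name, sql_text
  FROM unified_audit_trail
 WHERE unified_audit_policies LIKE '%NEIL_ANY_TAB_PRIVS%'
   AND event_timestamp > systimestamp - INTERVAL '7' DAY
 ORDER BY event_timestamp;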

Addendum

The thing is, audit isn’t just for the auditors. It can be great for troubleshooting or just trying to work out what is going on in a system.

User can’t connect? Are they in the audit (ACTION_NAME=’LOGON’, maybe with a failed attempt)? If they are then they are getting through, past the listener, to the correct database and you have a RETURN_CODE telling you what the problem is (1005 – they didn’t enter a password, 1017 – wrong password, etc).
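
A minimal sketch of that check, assuming the neil_logons policy from earlier is enabled:

SELECT event_timestamp, dbusername, userhost, return_code
  FROM unified_audit_trail
 WHERE action_name = 'LOGON'
   AND return_code != 0
 ORDER BY event_timestamp DESC;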

Want to know exactly what’s going on in a 3rd party or ‘black box’ app? Audit everything (not in Prod!), run it, and go have a look. You can see every SQL statement, every procedure call, every object instantiated or removed, just like this fella
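
A sketch of such a capture-everything policy (black_box_all and app_user are illustrative names – and once more: not in Prod!):

CREATE AUDIT POLICY black_box_all
ACTIONS ALL
CONTAINER=CURRENT;

AUDIT POLICY black_box_all BY app_user;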



Well, that’s the end of this mini-series on security – I hope you have implemented EVERYTHING on this list. And remember, this is just the starting point! This is the very minimum you need to be doing. Consider your data and what should and should not be easily available. Think about holding the audit apart from the database (e.g. by loading it into Splunk) to prevent DBAs from changing or removing it. Consider exactly what you need to audit AND how to query it to spot anomalies and alert on them in real time. And remember to protect Development every bit as much as Production – hackers love getting into a less secure dev DB to work out the DB structures and minimise the time they need to spend in Prod when they finally get there… Good Luck!

Oracle Optimizer System Statistics


Oh, System Statistics! Should we gather them? Should we not? What do they mean? What are they doing to the optimizer?

First you need to be armed with a piece of information. When Oracle optimizes your SQL, it produces a COST to compare each SQL execution plan. Exactly what is this COST? It’s not seconds. It’s not quite the amount of I/O required. It’s not an arbitrary figure either. Oracle Optimizer Cost is the cost of getting your data in terms of SINGLE BLOCK READS. Remember this for later.

Let’s have a look at the system stats defaults:

SELECT sname,pname,pval1 FROM SYS.AUX_STATS$ ORDER BY 1,2;
SNAME                PNAME                               PVAL1
-------------------- ------------------------------ ----------
SYSSTATS_INFO        DSTART
SYSSTATS_INFO        DSTOP
SYSSTATS_INFO        FLAGS                                   0
SYSSTATS_INFO        STATUS
SYSSTATS_MAIN        CPUSPEED
SYSSTATS_MAIN        CPUSPEEDNW                           2991 (this will vary)
SYSSTATS_MAIN        IOSEEKTIM                              10
SYSSTATS_MAIN        IOTFRSPEED                           4096
SYSSTATS_MAIN        MAXTHR
SYSSTATS_MAIN        MBRC
SYSSTATS_MAIN        MREADTIM
SYSSTATS_MAIN        SLAVETHR
SYSSTATS_MAIN        SREADTIM

What we are looking for here are the 3 metrics MBRC, MREADTIM and SREADTIM. As we can see, they are not set. When they are not set, we need to calculate them, or know where to find the default values. Once we have those 3 metrics, we can calculate the RATIO used by the optimizer to convert MULTIBLOCK READS into a cost metric of SINGLE BLOCK READS.

The CPU speed is used to calculate the COST of the query in terms of CPU required. This is a small percentage of the query cost, but it may help decide which plan is chosen. This too is converted from CPU cost into a cost metric of SINGLE BLOCK READS.

Let’s get the values for MBRC, MREADTIM and SREADTIM.

MBRC is easy. If it is not explicitly set, Oracle uses the init.ora parameter “db_file_multiblock_read_count”. However, by default (and also my recommendation) that parameter should not be set either, meaning Oracle will use the hidden parameter “_db_file_optimizer_read_count” to cost your queries. This defaults to 8. [note: this is not the value used at execution time. Oracle attempts to do 1MB reads, and uses the value in “_db_file_exec_read_count” to control the multiblock reads at execution time. For an 8K block size, this is set to 128.]
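
A minimal sketch to check those hidden parameters for yourself (X$ views, so you will need a SYS connection – the same join is used in the larger script further down):

SELECT a.ksppinm name, b.ksppstvl value
  FROM x$ksppi a, x$ksppsv b
 WHERE a.indx = b.indx
   AND a.ksppinm IN ('_db_file_optimizer_read_count','_db_file_exec_read_count');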

SREADTIM and MREADTIM are calculations based upon information we now have:
SREADTIM = IOSEEKTIM + db_block_size        / IOTFRSPEED = 10+(8192 /4096) = 12
MREADTIM = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED = 10+(8192*8/4096) = 26

Right! Now we have more information and can calculate a ratio: the multi-block cost-per-block. This will allow us to take the number of blocks in (for example) a table scan [the DBA_TAB_STATISTICS.BLOCKS statistic] and convert it to the cost metric of SINGLE BLOCK READS, meaning we can compare (for example) a FULL TABLE SCAN’s I/O directly with the I/O required for an Index Range Scan and Table Lookup.

multi-block cost-per-block = 1/MBRC * MREADTIM/SREADTIM = 1/8 * 26/12 = 0.270833
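
You can sanity-check that arithmetic straight from SQL (assuming an 8K block size and the defaults above):

SELECT (1/8) * (10 + 8192*8/4096) / (10 + 8192/4096) AS multiblock_cost_per_block
  FROM dual;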

If we pull some data from a 10053 trace, where I have a table scan of a table containing 1,000,000 blocks, we should see the 1,000,000 blocks being converted to 270,833 “single block read blocks” for COST purposes.

[10053] SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for COST_CHECK[COST_CHECK]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: COST_CHECK  Alias: COST_CHECK
    Card: Original: 1000000.000000  Rounded: 1000000  Computed: 1000000.000000  Non Adjusted:1000000.000000
  Scan IO  Cost (Disk) =   270835.000000
  Scan CPU Cost (Disk) =   7411440000.000001
  Total Scan IO  Cost  =   270835.000000 (scan (Disk))
                       =   270835.000000
  Total Scan CPU  Cost =   7411440000.000001 (scan (Disk))
                       =   7411440000.000001
  Access Path: TableScan
    Cost:  271041.492812  Resp: 271041.492812  Degree: 0
      Cost_io: 270835.000000  Cost_cpu: 7411440000
      Resp_io: 270835.000000  Resp_cpu: 7411440000
  Best:: AccessPath: TableScan
         Cost: 271041.492812  Degree: 1  Resp: 271041.492812  Card: 1000000.000000  Bytes: 0.000000

Well we were close! There’s a fudge factor in play here, but 270,835 is pretty much 270,833 🙂

We can also work out the CPU element of the COST.
Scan IO  Cost (Disk) =   270835.000000
Cost: 271041.492812
so CPU cost must be 271041.492812 – 270835.000000 = 206.492812

Scan CPU Cost (Disk)=7411440000.000001 so doing the maths to convert to units of “single block read time”…

CPUSPEEDNW = 2,991 (MHz, which we convert to 2,991,000 KHz)

= 7,411,440,000 / 2,991,000 / SREADTIM
= 2,477.913741223671013039 / 12
= 206.49281176863925108659311267135 cost of CPU in units of SREADTIM

So now you can see how your system statistics can fundamentally change the optimizer by changing the ratio of multiblock-reads to single block reads.

Want to play with this yourself and see what happens? Of course you do…

Here’s an Excel spreadsheet where you can plug in your numbers and see what happens to the ratio between single and multi block reads, showing how system stats influence the optimizer. By plugging in different numbers, you will see how complex the interactions are – so what are the ideal numbers? Use the DEFAULTS in almost all cases** [WARNING – don’t go changing your system stats based on this statement. You might experience a significant number of plan changes, some of which may be very, very bad!]:


This is my version of a script by Franck Pachot, and also based on work by Chris Antognini, which you can run against your system:

select pname,pval1,calculated,formula from sys.aux_stats$ where sname='SYSSTATS_MAIN'
model
reference sga on (
select 'Database Buffers' name,sum(bytes) value from v$sgastat where name in ('shared_io_pool','buffer_cache')
) dimension by (name) measures(value)
reference parameter on (
select name,decode(type,3,to_number(value)) value from v$parameter where name='db_file_multiblock_read_count' and ismodified!='FALSE'
union all
select name,decode(type,3,to_number(value)) value from v$parameter where name='sessions'
union all
select name,decode(type,3,to_number(value)) value from v$parameter where name='db_block_size'
union all
SELECT a.ksppinm name, to_number(b.ksppstvl) value FROM x$ksppi a, x$ksppsv b WHERE a.indx=b.indx AND ksppinm like '_db_file_optimizer_read_count'
)
dimension by (name) measures(value)
partition by (sname) dimension by (pname) measures (pval1,pval2,cast(null as number) as calculated,cast(null as varchar2(120)) as formula) rules(
calculated['MBRC']=coalesce(pval1['MBRC'],parameter.value['db_file_multiblock_read_count'],parameter.value['_db_file_optimizer_read_count'],8),
calculated['MREADTIM']=coalesce(pval1['MREADTIM'],pval1['IOSEEKTIM'] + (parameter.value['db_block_size'] * calculated['MBRC'] ) / pval1['IOTFRSPEED']),
calculated['SREADTIM']=coalesce(pval1['SREADTIM'],pval1['IOSEEKTIM'] + parameter.value['db_block_size'] / pval1['IOTFRSPEED']),
calculated['_multi block Cost per block']=round(1/calculated['MBRC']*calculated['MREADTIM']/calculated['SREADTIM'],4),
calculated['_single block Cost per block']=1,
formula['MBRC']=case when pval1['MBRC'] is not null then 'MBRC' when parameter.value['db_file_multiblock_read_count'] is not null then 'db_file_multiblock_read_count' when parameter.value['_db_file_optimizer_read_count'] is not null then '_db_file_optimizer_read_count (db_file_multiblock_read_count not set, which is good!)' else '= not sure so used 8' end,
formula['MREADTIM']=case when pval1['MREADTIM'] is null then '= IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED' end||' = '||pval1['IOSEEKTIM']||'+('||parameter.value['db_block_size']||'*'||calculated['MBRC']||'/'||pval1['IOTFRSPEED']||')',
formula['SREADTIM']=case when pval1['SREADTIM'] is null then '= IOSEEKTIM + db_block_size / IOTFRSPEED' end||' = '||pval1['IOSEEKTIM']||'+('||parameter.value['db_block_size']||'/'||pval1['IOTFRSPEED']||')',
formula['_multi block Cost per block']='= 1/MBRC * MREADTIM/SREADTIM = 1/'||calculated['MBRC']||' * '||calculated['MREADTIM']||'/'||calculated['SREADTIM'],
calculated['_maximum mbrc']=sga.value['Database Buffers']/(parameter.value['db_block_size']*parameter.value['sessions']),
formula['_maximum mbrc']='= buffer cache blocks/sessions (small cache limiter) = ' || sga.value['Database Buffers']/parameter.value['db_block_size']||'/'||parameter.value['sessions'],
formula['_single block Cost per block']='relative to the multi block cost per block. Always 1!',
formula['CPUSPEED']='overrides CPUSPEEDNW when set',
formula['CPUSPEEDNW']='CPU speed Mhz - non workload',
formula['IOSEEKTIM']='IO seek time in ms',
formula['IOTFRSPEED']='IO transfer speed in KB/s',
formula['MAXTHR']='Maximum IO system throughput',
formula['SLAVETHR']='average parallel slave IO throughput'
) order by 1;

** there is a case for gathering ‘Exadata’ system stats. Increasing the IOTFRSPEED to 200,000 and changing the MBRC to (probably) 128 or 64 will *really* change the ratios, forcing a lot of full table scans down onto the storage cells, instead of using mediocre indexes. This should be considered (and thoroughly tested) if you have a DW on a dedicated Exadata. For a little more on Exadata stats, check this blog post
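
If, after thorough testing, you do decide to go down that route, the gather itself is a one-liner (heed the warning above!):

EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');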

CDB v PDB

System Statistics are a CDB-only set of statistics. If you think about it, it doesn’t make sense to have different system stats for different PDBs, as they are supposed to represent the hardware characteristics of your system, which will not vary across PDBs.


It is generally not recommended that you gather system statistics (finally correcting bad historic advice from Oracle – thanks for fixing this, Nigel!). Leave them at their defaults!