Oracle Statistics Gathering Timeout

January 2024 – Oracle 12C, 19C, 21C, 23C+

Gathering object statistics in Oracle is important. The optimizer needs metadata about each object, such as the number of rows and the number of distinct values in a column, to help it decide the optimal way to access your data. This takes effort, and takes place automatically in scheduled windows, both overnight and at the weekend. These windows have a limited duration, and if there is a lot of data to analyse to get the requisite metadata, the job may run out of time.

Photo of a large table by David Vives on Unsplash

Large tables, with billions of rows, tend to be partitioned. Oracle introduced INCREMENTAL statistics gathering to minimise the time it needs to spend gathering statistics, by ignoring partitions which have not changed.

If you have a large partitioned table, it is frequently a good idea to switch on INCREMENTAL statistics. However, the actual behaviours are not always obvious when doing this, and the following message formed part of an online help request: “the manual gather(which is incremental stats) on those partition table runs longer as it tries to gather on all the partitions which are not even modified. How should we tackle this scenario“.

To switch on INCREMENTAL stats gathering:

EXEC dbms_stats.set_table_prefs('<schema>','<table>','INCREMENTAL','TRUE')

After the first stats gather, the gather job should only gather stats for partitions which have changed, which minimises the gather time. Global statistics are calculated from the statistics associated with each partition (not subpartition), in conjunction with stored SYNOPSES which are used to calculate the global number of distinct column values.
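To confirm the preference has taken effect, and to see which partitions were actually analysed by the last gather, you can query the standard stats views. A sketch, using hypothetical APP_OWNER/BIG_TABLE names:

```
-- Confirm INCREMENTAL is on (schema/table names are examples)
SELECT dbms_stats.get_prefs('INCREMENTAL','APP_OWNER','BIG_TABLE') AS incremental
FROM   dual;

-- See when each partition was last analysed
SELECT partition_name, num_rows, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'APP_OWNER'
AND    table_name = 'BIG_TABLE'
AND    partition_name IS NOT NULL
ORDER  BY partition_position;
```

After an incremental gather, only the changed partitions should show a fresh LAST_ANALYZED timestamp.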

Check the stats gathering regularly

Stats gathering fails silently. You need to check it yourself, regularly! To do this, query DBA_AUTOTASK_JOB_HISTORY where the JOB_STATUS is “STOPPED”. If you see the JOB_INFO message “Stop job called because associated window was closed”, you should investigate, as you ran out of time on the stats gathering. Usually the autotask will catch up at the weekend, when the default window is much longer than the 4-hour weekday window, but you need to understand whether old stats are acceptable to your database (some will be, some may not be). Importantly, the weekend window may fail too, and then you’re not getting new stats at all.
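A sketch of that check (the CLIENT_NAME for the stats autotask is 'auto optimizer stats collection'):

```
-- Find auto stats gathers which were stopped before completion
SELECT job_start_time, job_duration, job_status, job_info
FROM   dba_autotask_job_history
WHERE  client_name = 'auto optimizer stats collection'
AND    job_status  = 'STOPPED'
ORDER  BY job_start_time DESC;
```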

Two strategies for speeding up the stats gather are to gather stats in parallel, or to gather stats concurrently. Both of these will increase the CPU required by the database to gather the stats, so this needs to be balanced against the workload during the stats gather windows to ensure you are not negatively impacting the application.

Check out the options:

exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','AUTOMATIC')
(gathers several tables at the same time using multiple jobs)

exec DBMS_STATS.SET_TABLE_PREFS('<schema>', '<table>', 'DEGREE', n)
(where n is the degree of parallel required, or DBMS_STATS.AUTO_DEGREE)


WARNING: If you are doing ANY PARALLEL processing, you really should use Resource Manager to control it.

Unexpected Gotchas

Here are a few gotchas you should know about if using INCREMENTAL stats gathering:

Problem
Partitions which are not labelled as “stale” might actually be stale. With INCREMENTAL enabled, the modification of a single row in a partition will cause the stats to be re-gathered for that partition. You cannot rely on the ALL_TAB_STATISTICS.STALE_STATS column, which will still show “NO” under these circumstances.

Solution
Tell the incremental stats gather to use the table’s “stale percent” (default 10%, modifiable with the table preference ‘STALE_PERCENT’) before it treats a partition as stale, rather than *any* change:

exec dbms_stats.set_table_prefs
('<schema>','<table>','INCREMENTAL_STALENESS','USE_STALE_PERCENT')


Problem
If the stats on a partition are locked, and a single row in that partition is modified, INCREMENTAL is effectively (and silently) disabled, as Oracle can’t trust the stats for a global aggregation, and you revert to a normal “full table” stats gather.

Solution
Tell the stats gather to treat the stats on a locked partition as usable (regardless of the amount of DML), as if it were not locked (and combine this with the previous solution):

exec dbms_stats.set_table_prefs
('<schema>','<table>','INCREMENTAL_STALENESS','USE_STALE_PERCENT,USE_LOCKED_STATS');


Problem
If a SQL statement uses a previously unused predicate (a column in the WHERE clause) and METHOD_OPT is at its default (FOR ALL COLUMNS SIZE AUTO), Oracle may create a new histogram automatically.

Solution
Check which columns are candidates for a histogram (recorded in SYS.COL_USAGE$) and explicitly control your histograms. This does mean that you don’t get new histograms automatically, which might not be your intention, but you will only get the histograms that you ask for.

exec dbms_stats.set_table_prefs('<schema>','<table>','METHOD_OPT',
  'FOR ALL COLUMNS SIZE 1
   FOR COLUMNS SIZE 256  colA,colQ
   FOR COLUMNS SIZE 2000 colB,colZ');
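To see which columns have recorded predicate usage before you fix METHOD_OPT, DBMS_STATS can report the contents of SYS.COL_USAGE$ for you. A sketch, again with hypothetical names:

```
-- Report recorded column usage (equality, range, LIKE predicates etc.)
SET LONG 100000
SELECT dbms_stats.report_col_usage('APP_OWNER','BIG_TABLE') FROM dual;
```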


Problem
There are jobs gathering stats, as well as the auto task, and a job is passing in parameters which cause the issue (rather than relying on table or global “prefs” for consistency).

Solution
Consider ignoring any command-line parameters for stats gather jobs. In this way, every stats gather will use only the preferences, and will be consistent across each gather.

exec dbms_stats.set_global_prefs('PREFERENCE_OVERRIDES_PARAMETER','TRUE');
or
exec dbms_stats.set_table_prefs('<schema>','<table>','PREFERENCE_OVERRIDES_PARAMETER','TRUE');


To validate what is going on, you can always (carefully!) trace the stats gather, either at a table level or globally, using the UNDOCUMENTED preference “trace”. Be aware that you may get a lot of trace file output, and a potential performance hit on the stats gather job too.

exec dbms_stats.set_global_prefs('trace',65532);

Finally, when you switch on INCREMENTAL statistics, Oracle needs to store additional SYNOPSES data to allow it to calculate the number of distinct values in a column more easily. This can take a lot of space in SYSAUX, although this has been significantly improved from Oracle 12.2 onwards, with the synopses tables occupying around 3-5% of their previous large footprint. For more information about SYNOPSES, Oracle Support maintains a note with a useful query [MOS: 1953961.1], and for more information about INCREMENTAL stats in general, the Oracle Optimizer PM Nigel Bayliss has some excellent blog posts from 2020 here.

Fundamental Security Part Eight – Unified Audit

January 2024 – Oracle 19C+

If you are on Oracle 12.1 onwards, and you have not explicitly disabled it, you are using Unified Audit.

We previously discussed Data Encryption At Rest. Now let’s talk about Unified Audit, and why you should be using it.


When you create an Oracle database, there is a very small amount of Unified Auditing pre-enabled, and no “traditional” auditing. But why audit? Well, do you know what’s happening in the database? Who is connecting? Is anyone accessing data that they should not be? Changing database objects? Creating backdoors? What is lurking beneath the surface?

Traditional Oracle Auditing:
It’s deprecated in 21C and desupported in 23C. Stop using it. It’s difficult to gather all of the data from several diverse locations in the database and on the O/S – and much worse if you have multiple RAC clusters! Convert to Unified Audit. See MOS: 2909718.1 for a converter.

Unified Auditing

For some information about enabling Unified Audit in Pure mode and other bits, check this blog post

Unified Auditing uses Policies to decide what to audit. You can enable and disable these if you have the appropriate AUDIT_ADMIN privileges. Typically there are three enabled by default in 23C (the exact enablement varies by release and installed options).

SQL> select * from audit_unified_enabled_policies;

POLICY_NAME ENABLED_OPT ENTITY_NAME ENTITY SUC FAI
ORA_LOGON_FAILURES BY USER ALL USERS USER NO YES
ORA_SECURECONFIG BY USER ALL USERS USER YES YES
ORA$DICTIONARY_SENS_COL_ACCESS BY USER ALL USERS USER YES YES <-23C
ORA_DV_DEFAULT_PROTECTION BY USER ALL USERS USER YES YES <- if DV
ORA_DV_SCHEMA_CHANGES BY USER ALL USERS USER YES YES <- if DV

ORA_LOGON_FAILURES – detects failed logins
ORA_SECURECONFIG – audits high-level DBA type actions, such as ALTER SYSTEM and CREATE USER
ORA$DICTIONARY_SENS_COL_ACCESS – audits access to columns containing password hashes and sensitive data
ORA_DV_DEFAULT_PROTECTION – Oracle Data Vault Only
ORA_DV_SCHEMA_CHANGES – Oracle Data Vault Only

There are a further 10 built-in policies (in 23.3)

SELECT DISTINCT policy_name FROM audit_unified_policies;

POLICY_NAME
ORA$DICTIONARY_SENS_COL_ACCESS
ORA_ACCOUNT_MGMT
ORA_ALL_TOPLEVEL_ACTIONS
ORA_CIS_RECOMMENDATIONS
ORA_DATABASE_PARAMETER
etc...

Their precise function can be determined by querying the view AUDIT_UNIFIED_POLICIES. The two most notable are ORA_CIS_RECOMMENDATIONS and ORA_STIG_RECOMMENDATIONS. Enable these two policies, and create a policy to audit database LOGONS and LOGOFFS (in 23.3, enable ORA_LOGON_LOGOFF), and you will be reasonably covered for basic audit as recommended by the Centre for Internet Security (CIS) and the Security Technical Implementation Guide (STIG) – the baseline compliance for the US Department of Defense.

SELECT policy_name,audit_option,nvl(object_name,'N/A') object_name,audit_option_type
FROM audit_unified_policies
WHERE policy_name = 'ORA_CIS_RECOMMENDATIONS'
ORDER BY 1,2;

POLICY_NAME AUDIT_OPTION OBJECT_NAM AUDIT_OPTION_TYPE
ORA_CIS_RECOMMENDATIONS ALTER DATABASE LINK NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER FUNCTION NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PACKAGE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PACKAGE BODY NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PROCEDURE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PROFILE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER ROLE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER SYNONYM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER SYSTEM NONE SYSTEM PRIVILEGE
ORA_CIS_RECOMMENDATIONS ALTER SYSTEM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER TRIGGER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER USER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE DATABASE LINK NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE FUNCTION NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PACKAGE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PACKAGE BODY NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PROCEDURE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PROFILE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE ROLE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE SYNONYM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE TRIGGER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE USER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP DATABASE LINK NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP FUNCTION NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PACKAGE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PACKAGE BODY NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PROCEDURE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PROFILE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP ROLE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP SYNONYM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP TRIGGER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP USER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS GRANT NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS REVOKE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS SELECT ANY DICTIONARY NONE SYSTEM PRIVILEGE
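Enabling the built-in policies is a one-liner each (run in every PDB you want covered, with AUDIT_ADMIN privileges):

```
AUDIT POLICY ORA_CIS_RECOMMENDATIONS;
AUDIT POLICY ORA_STIG_RECOMMENDATIONS;
-- 23.3 onwards, for logon/logoff coverage:
AUDIT POLICY ORA_LOGON_LOGOFF;
```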

Creating your own policy is straightforward. Here are two I recommend creating:

neil_logons: this will capture any connection to the database. As well as identifying all connections (you really want to know who is in the database!), it can also be very helpful when troubleshooting connectivity issues.

neil_any_tab_privs: is anyone using the ANY privilege to access your data? Surely they should be accessing data via explicitly granted privilege. Unless this is a DBA doing some ad-hoc performance tuning or data validation/reconciliation (ideally with an assigned ticket!), you shouldn’t see any records for this policy.

CREATE AUDIT POLICY neil_logons
ACTIONS
LOGON, LOGOFF
CONTAINER=CURRENT;

CREATE AUDIT POLICY neil_any_tab_privs
PRIVILEGES
select any table, read any table, insert any table, delete any table, update any table
CONTAINER=CURRENT;

-- and enable them
audit policy neil_logons;
audit policy neil_any_tab_privs;

Housekeeping

Now you’re gathering audit data. Depending upon the nature of the activity in your database, you could have a LOT of it. You really need to ensure you are managing it correctly – starting with housekeeping. First of all, it’s worth putting your audit data into its own tablespace. You also need a steer on how much of it to keep within the database for easy access, and to ensure it’s easy to tidy up. You also need to consider the overspill area – if Oracle cannot write to the DB (e.g. tablespace full, database in startup, read-only standby), it cannot write the audit records. In this case the audit gets written to O/S files in “$ORACLE_BASE/audit/[instance-name]/[PDB-GUID]”, which need loading into the DB so they can be tidied up too:

Set the partition frequency to daily - each day gets a new partition. This doesn't speed up deletes, but it does make the cleanup tidier. The partition column is EVENT_TIMESTAMP (i.e. don't use EVENT_TIMESTAMP_UTC for your queries):

begin
DBMS_AUDIT_MGMT.ALTER_PARTITION_INTERVAL
 (INTERVAL_NUMBER   => 1,
  INTERVAL_FREQUENCY => 'DAY');
end;
/
Move the creation of new partitions to a dedicated tablespace:

begin
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION
 (AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  AUDIT_TRAIL_LOCATION => 'audit_tablespace');
end;
/
Create a DBMS_SCHEDULER job to tidy up:

BEGIN
dbms_scheduler.create_job(q'[AUDIT_HOUSEKEEPING]',
job_type=>'PLSQL_BLOCK', job_action=>
q'[
BEGIN
-- you need to set retention before calling cleanup
dbms_audit_mgmt.set_last_archive_timestamp
   (audit_trail_type       => dbms_audit_mgmt.audit_trail_unified
   ,last_archive_time     => trunc(systimestamp - INTERVAL '3' MONTH));

-- and cleanup
 dbms_audit_mgmt.clean_audit_trail
  (audit_trail_type       => dbms_audit_mgmt.audit_trail_unified
  ,use_last_arch_timestamp => true);

  -- and while we are at it, load overspill files into the table
  DBMS_AUDIT_MGMT.LOAD_UNIFIED_AUDIT_FILES;

END;
]'
,number_of_arguments=>0
,start_date=>trunc(systimestamp + interval '1' day)
,repeat_interval=> q'[FREQ = DAILY; INTERVAL = 1]'
,end_date=>NULL
,job_class=>q'[SCHED$_LOG_ON_ERRORS_CLASS]'
,enabled=>FALSE
,auto_drop=>FALSE
,comments=> q'[Daily clean-up Unified Audit older than 3 months]'
);
COMMIT;
dbms_scheduler.enable(q'[AUDIT_HOUSEKEEPING]');
END;
/

We’re auditing and we are keeping things tidy. Remember to treat each PDB as a separate database for auditing purposes, and audit the CDB too!

The main thing left to do is to actually monitor the audit, to see if anything is happening that you don’t expect, such as anyone using SELECT ANY TABLE when they should not be, or modifying objects when there’s no approved change. How you do that is very much site-specific, and up to you when querying UNIFIED_AUDIT_TRAIL.
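For example, a starting point for spotting ANY-privilege access might be a query like this against the neil_any_tab_privs policy created earlier (adjust the policy name and time window to suit your site):

```
-- Who has used an ANY privilege in the last day?
SELECT event_timestamp, dbusername, action_name, object_schema, object_name
FROM   unified_audit_trail
WHERE  unified_audit_policies LIKE '%NEIL_ANY_TAB_PRIVS%'
AND    event_timestamp > systimestamp - INTERVAL '1' DAY
ORDER  BY event_timestamp;
```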

Addendum

The thing is, audit isn’t just for the auditors. It can be great for troubleshooting or just trying to work out what is going on in a system.

User can’t connect? Are they in the audit (ACTION_NAME=’LOGON’, maybe with a failed attempt)? If they are then they are getting through, past the listener, to the correct database and you have a RETURN_CODE telling you what the problem is (1005 – they didn’t enter a password, 1017 – wrong password, etc).
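A sketch of that troubleshooting query:

```
-- Recent failed logons, with the Oracle error code explaining why
SELECT event_timestamp, dbusername, return_code, client_program_name
FROM   unified_audit_trail
WHERE  action_name = 'LOGON'
AND    return_code <> 0
ORDER  BY event_timestamp DESC;
```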

Want to know exactly what’s going on in a 3rd party or ‘black box’ app? Audit everything (not in Prod!), run it, and go have a look. You can see every SQL statement, every procedure call, every object instantiated or removed, just like this fella.



Well, that’s the end of this mini-series on security – I hope you have implemented EVERYTHING on this list. And remember, this is just the starting point! This is the very minimum you need to be doing. Consider your data and what should and should not be easily available. Think about holding the audit apart from the database (e.g. by loading it into Splunk) to prevent DBAs from changing or removing it. Consider exactly what you need to audit AND how to query it to spot anomalies and alert on them in real time. And remember to protect Development every bit as much as Production – hackers love getting into a less secure dev DB to work out the DB structures and minimise the time they need to spend in Prod when they finally get there… Good luck!

Fundamental Security Part Seven– Data Encryption At Rest

October 2023 – Oracle 19C+

Photo by Sajad Nori on Unsplash

What is Data Encryption At Rest and why should we use it?

We have looked at Data Encryption in Transit, encrypting your network traffic. Everyone should be doing this. But what about Encrypting Data At Rest? Data that is stored permanently (on your hard drives).


The simplest way to do this, from a database/DBA perspective, is to use an encrypted storage device. Most SAN/NAS devices come with encryption built-in, and encryption of the storage presented to servers is abstracted away to the SAN. The CPU overhead is handled by the SAN and you don’t need to worry about it. However, if you are using Exadata or any Oracle Cloud Service, this is not an option. You will need to implement Oracle Transparent Data Encryption. This is mandatory with the Oracle Cloud. Some cloud providers provide a Storage-Level encryption option (which is frequently an added cost and is usually not the default!)

Why Do This?

There are 3 reasons to implement Encryption At Rest

1. In case the hardware gets stolen. This may not just be directly from your data centre, but following the decommissioning or break/replace of the hardware too.
2. Bypassing database controls to read or amend the data in your database directly from the files.
3. Because the auditors insist upon it (see points 1 and 2)


Storage-Level Encryption [SLE] v Oracle Transparent Data Encryption [TDE]

Neither Storage-Level Encryption [SLE] nor TDE means your data is safe. The processes running against your data need to see it unencrypted. If the processes can do this, so can suitably privileged Operating System administrators.

No matter what security you have put within your databases to prevent unauthorised access to your data, if your data is unencrypted (or encrypted using SLE where it is presented to the server “unencrypted”), you can run a “dd” (in Linux, other O/S have similar commands) and view the data directly. Don’t think for a moment that because it is in an Oracle internal format that it is difficult to read.

> create table read_me (c1 varchar2(1000)) tablespace users;
Table created.

> insert into read_me values ('I CAN SEE YOU!');
1 row created.

> commit;
Commit complete.

> select segment_name,block_id,blocks from dba_extents where segment_name = 'READ_ME';

SEGMENT_NAME                     BLOCK_ID     BLOCKS
------------------------------ ---------- ----------
READ_ME                               136          8

$ dd if=./users01.dbf bs=8192 skip=136 count=8 | od -xa --endian=big

*
0157740    0000    0000    0000    0000    0000    2c01    010e    4920
        nul nul nul nul nul nul nul nul nul nul   , soh soh  so   I  sp
0157760    4341    4e20    5345    4520    594f    5521    0106    7004
          C   A   N  sp   S   E   E  sp   Y   O   U   ! soh ack   p eot
0160000    06a2    0000    8f00    c003    6f04    4200    0000    0204
        ack   " nul nul  si nul   @ etx   o eot   B nul nul nul stx eot
0160020    73e2    0000    0100    0000    4426    0100    6f04    4200
          s   b nul nul soh nul nul nul   D   & soh nul   o eot   B nul
0160040    0080    0000    0200    3200    8800    c003    0000    0000
        nul nul nul nul stx nul   2 nul  bs nul   @ etx nul nul nul nul
0160060    0000    0000    0000    0000    0000    0000    0000    0000
        nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
*

Encrypting your data on the storage layer DOES NOT PREVENT this sort of access or potential attack.

Oracle Transparent Data Encryption [TDE]

First of all, Oracle Transparent Data Encryption is a COST OPTION. It is not part of the standard Oracle license.

TDE encrypts the actual Oracle data files. Running a “dd” against an encrypted tablespace reveals no usable information. However, you now need to handle all of the keys to be able to open your databases. Key management is the most important aspect of using TDE, ensuring they are available and are not compromised. If you lose the keys, your data is unrecoverable. Remember to backup the keys at the same time (but not with!) as the database as you will need the relevant key should you need to recover the database.
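Creating a TDE-encrypted tablespace is straightforward once the keystore is configured and open. A sketch (the tablespace name and datafile path are hypothetical):

```
-- Requires the TDE keystore to be configured and open, and the TDE cost option
CREATE TABLESPACE secure_data
  DATAFILE '/u02/oradata/ORCL/secure_data01.dbf' SIZE 1G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```

Any segment created in this tablespace is then encrypted on disk with no application changes.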

Important note: Most companies who implement TDE do it at the tablespace level. This means the data on disk is encrypted and the CPU overhead is generally low. However, data in memory is NOT encrypted. In Linux, as memory is effectively treated as a file by the O/S, it is possible to read whatever is in your buffer cache using the “dd” method. If you are very skilled (hackers are!), this is also possible from the dom0 layer of the hypervisor, which is accessible by the staff of the cloud provider. Only data encrypted at a column level remains encrypted within the buffer cache. The overhead for accessing column-level encrypted data is high.

It is debatable whether data in the server memory is “at rest”. If you have extremely valuable data, such as credit card information, this should be encrypted by the application (for example by adhering to PCI DSS standards).

For a basic implementation of TDE, I would recommend looking at Tim’s article here (rather than me repeat his excellent website) or one from an Oracle PM here.

When I have time, I’ll do a follow-on article about using a HSM or OKV to manage your keys.


Now I would like to talk about Fundamental Security Part Eight – Unified Audit

Fundamental Security Part Six– Network Encryption

August 2023 – Oracle 19C

We have considered observability: knowing where you may have a security problem. We now need to look at removing some of the main problems associated with Oracle databases. Let’s start with network security.

By default, all data travelling over the network with Oracle is unencrypted. This means that anyone with something like Wireshark can inspect packets and read your data as it travels between the client/application and the database. This is not secure. You are going to want to encrypt this data.

There are two options when it comes to encryption: implement Transport Layer Security (TLS) using certificates, or use Oracle Native Network Encryption and Integrity (note: these used to be part of the Advanced Security Option but are now free with all supported Oracle versions from Oracle 10 onwards).


Transport Layer Security (TLS)

This involves generating and signing a certificate, putting it into an Oracle Wallet on the database server, and including the same certificate on each Oracle client or application server needing to connect to the database. You then need to do the same on each client, adding the client certificate to the database server wallet. This can provide full standard TLS encryption, but at the cost of certificate management. Certificates expire, and many companies require certificates to be signed by a Certificate Authority, which should impose a maximum lifespan of 1 year on the certificate. When the certificates expire, they need to be replaced. It is possible to link directly to a CA to assist with this, depending upon your setup, but that is beyond the scope of this blog post. Maybe later…
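As a rough sketch of the server-side wallet work using orapki (the paths, DN and password here are hypothetical, and a production certificate should be CA-signed rather than self-signed):

```
# Create an auto-login wallet on the database server
orapki wallet create -wallet /u01/app/oracle/wallet -pwd "WalletPasswd123" -auto_login

# Add a self-signed certificate (for testing only)
orapki wallet add -wallet /u01/app/oracle/wallet -pwd "WalletPasswd123" \
  -dn "CN=db01.example.com" -keysize 2048 -self_signed -validity 365

# Export the certificate so it can be imported as a trusted cert on each client
orapki wallet export -wallet /u01/app/oracle/wallet -pwd "WalletPasswd123" \
  -dn "CN=db01.example.com" -cert /tmp/db01.crt
```

The wallet location then needs referencing in sqlnet.ora, and a TCPS endpoint adding to the listener.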

It’s a good idea to use certificate management software to perform this function, otherwise you probably have a never-ending task “like painting the Forth Bridge“.

WARNING! One of the commonest causes of system outage in the IT industry is an expired certificate.

For an example of implementing TLS, please check out this blog post by Tim Hall.

Note that TLS will require you to make both server-side and client-side changes (to every client which connects to the database).

To check if your session is running with encrypted TCPS (rather than TCP), you can issue the following command:

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcps

As the following Twitter conversation shows, there are alternative ways to configure TLS. You could just generate your own signed certificate on each box, exchange it, and hope nobody compromises it. You could just use a single signed and trusted root certificate from your CA. It can be simpler, but there are inevitably trade-offs involved.


Oracle Native Network Encryption and Integrity (ONNEI or ANO)

The first thing you should look to do, if you are not encrypting network traffic, is to implement Oracle Native Network Encryption and Integrity (formerly the Advanced Networking Option – ANO), largely because it’s so simple and easy to implement, and it will also show where you have older clients which need to be upgraded.

So what does the name mean? Encryption and Integrity. When you encrypt data you need good encryption; you also need to ensure nobody has changed the data in transit – integrity (or checksums). The industry-standard algorithms at the moment are AES256 for encryption and SHA384 for crypto-checksumming.

Configuring ONNEI

One of the advantages of using the native encryption is that you can enable it server-side only. You may not need to make any changes at all to the clients, if the client or (JDBC) drivers are sufficiently up to date. Set it up in REQUESTED mode to start with, so any non-compliant clients will still connect, unencrypted.

Edit the sqlnet.ora file in the TNS listener home on the database server(s) and add:

SQLNET.ENCRYPTION_SERVER = REQUESTED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUESTED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA384)

NOTE: the sqlnet.ora file is re-read dynamically every time there is a new database connection. Changing the file will immediately cause new connections to encrypt

Monitor connections to ensure they are encrypting:

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcp

Ah! We are not using TLS, so we aren’t encrypting as TCPS. The encryption is Oracle Native: the packets are not fully encrypted, only the contents (your data!) are. We therefore need another way to check that the session is encrypted.

Looking at GV$SESSION_CONNECT_INFO, we can see two lines which would not normally be present, showing AES256 encryption and SHA384 checksumming. The SQL below restricts the output to your session. Remove the WHERE clause and you can see the encryption for every session. (You can sort-of use this view to see if the connection is using TCPS, as the line “TCP/IP NT Protocol Adapter for Linux:…..” will not exist.)

Note that we are using REQUESTED and not REQUIRED at this stage, so some connections may have only 3 lines of output, and some may have 5. Also, if you have not specified the level of encryption required, this is reflected in the output too (e.g. in 19C, the default encryption is AES256, but checksumming defaults to the weak SHA1 algorithm).

ONNEI

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcp

SELECT network_service_banner FROM gv$session_connect_info 
      WHERE sid IN (SELECT DISTINCT sid FROM v$mystat) ORDER BY 1;

NETWORK_SERVICE_BANNER
----------------------------------------------------------------------------
AES256 Encryption service adapter for Linux: Version 19.0.0.0.0 - Production
Crypto-checksumming service for Linux: Version 19.0.0.0.0 - Production
Encryption service for Linux: Version 19.0.0.0.0 - Production
SHA384 Crypto-checksumming service adapter for Linux: Version 19.0.0.0.0 - P
TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production

TLS

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcps

SELECT network_service_banner FROM gv$session_connect_info
WHERE sid IN (SELECT DISTINCT sid FROM v$mystat) ORDER BY 1;

NETWORK_SERVICE_BANNER
----------------------
Crypto-checksumming service for Linux: Version 19.0.0.0.0 - Production
Encryption service for Linux: Version 19.0.0.0.0 - Production

(if you are using both ONNEI and TLS, you may also get the SHA384 line)

Once you are happy that all connections are encrypting successfully, change from REQUESTED to REQUIRED. This will reject any connection attempts which cannot encrypt.
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA384)

All done. One or two changes to sqlnet.ora and you’re enforcing encryption over the network.

You can also confirm the protocol (tcp v tcps) used if you are auditing successful logons (you should be!)

SELECT dbusername,authentication_type FROM unified_audit_trail;

dbu  authentication_type
---  -------------------
TIM  (TYPE=(DATABASE));(CLIENT ADDRESS=((ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.194)(PORT=63849))));
NEIL (TYPE=(DATABASE));(CLIENT ADDRESS=((ADDRESS=(PROTOCOL=tcps)(HOST=192.168.56.193)(PORT=44317))));

Performance Impact

All encryption comes with a CPU cost. You will burn more CPU on the client and the server encrypting and decrypting traffic. You also add a few milliseconds of latency at connect time for the handshake (although for most database systems using stable connection pools, this should not be noticeable).

Extra CPU is usually in the 1%-5% range for Oracle DB network encryption, although you do need to test, as it will vary across applications. There is minimal difference between using TLS and ONNEI. Although the TLSv1.3 handshake has been performance-optimised over TLSv1.2, I have not re-tested for any performance difference at this point, and if you are using connection pooling there will be no noticeable difference.

Conclusion – what should I use?

Both. One of the important things to remember about ONNEI is that it doesn’t use TLS, so it’s not using an “industry standard” protocol. Your security team may insist upon having TLSv1.2 or 1.3. However, it is encrypting/checksumming to the same standard, so there is very little to choose between the two methods.

If you use pre-exchanged TLS certificates associated only with each server, you do have the advantage that you cannot fall foul of a man-in-the-middle attack. This is possible, although difficult, with ONNEI. Simple eavesdropping will not be possible.

However, you don’t need to choose to get started. You can set the sqlnet.ora parameter:

IGNORE_ANO_ENCRYPTION_FOR_TCPS=TRUE

This allows ONNEI to be used if the connection is not TLS encrypted, but ignores the ONNEI config for already-encrypted connections.

So, set up ONNEI, monitor it for old client connections, then consider whether you need to explicitly encrypt using TLS for some or all of your connections, with the overhead that entails.

Next, let’s talk about Data Encryption At Rest.

Fundamental Security Part Five – Observability

July 2023 – Oracle 19C

Now you know what permissions have been granted in your system and to whom, as shown in the previous post. Maybe you have remediated them down to least privilege. Now, how do you ensure the system stays like that? How do you know if you have succeeded in stripping the permissions and users back successfully?

We are going to need to monitor the system and see if anything changes.

Options

You are going to need a consistent and repeatable report and/or dashboard to show you the current state of your estate. There are a number of options, each with pros and cons. Let’s talk about DBSAT and Oracle Data Safe, and briefly about Splunk and doing it yourself.

DBSAT

A good starting point is to use DBSAT. This is a free, semi-supported tool supplied by Oracle which you can run against each database [note: for multitenant, you need to do this against the CDB and every PDB individually] to get a report showing where you have complied, or failed to comply, with CIS or STIG rules. The home page is here: https://www.oracle.com/uk/database/technologies/security/dbsat.html [MOS: 2138254.1]

DBSAT can be used in a number of different ways. The simple way is to generate the output from each database and then convert it into a HTML document which you can analyse. It will report pass and failure states, plus indicate where you need to perform more analysis on items as there is no clear “correct” way to implement some elements of security. For example, DBMS_SQL is regarded as a potentially insecure package as badly coded input can allow SQL Injection attacks. As a result, this will be flagged for analysis rather than be an outright failure.

If you have a database called UTF8, with a PDB called UTF8PDB1, you will need to run the collect twice: once for the CDB and once for each PDB. I am using "/ as sysdba", but you may want a dedicated account if you are doing this regularly. I am using "-n" so no password is required for the zip. Given the extremely sensitive nature of the output, which would be very useful to hackers, you may want to ensure the output is password protected.

For the CDB (called UTF8):
$ export ORACLE_SID=UTF8
$ unset ORACLE_PDB_SID
$ ./dbsat collect -n "/ as sysdba" UTF8
This produces a file "UTF8.json"

$ ./dbsat report -a -n UTF8
This produces output based upon the "UTF8.json" file

For the PDB (called UTF8PDB1):
$ export ORACLE_SID=UTF8
$ export ORACLE_PDB_SID=UTF8PDB1
$ ./dbsat collect -n "/ as sysdba" UTF8PDB1
$ ./dbsat report -a -n UTF8PDB1

Typical output

$ ls -1 UTF8PDB1*
UTF8PDB1.json
UTF8PDB1_report.html
UTF8PDB1_report.json
UTF8PDB1_report.txt
UTF8PDB1_report.xlsx

CDB Output Header (HTML version) – note the Container identifying this as a CDB.

PDB Output header (txt version). Note the Container showing the PDB name and ID.

We can see that there’s a high risk on the User Accounts. Looking at that section of the report tells us the problem.

We can also see a number of checks with the status of Advisory and Evaluate, which need to be manually read, understood and (maybe) resolved. Changing the Advisory below without due diligence may cause your application to stop functioning, as it may be using an older client. However, it indicates that you need to upgrade the client and secure the database further:

There are two DBSAT companion utilities which are OK, but could be more useful:

dbsat_diff, which shows the difference between two report files. This should be useful for monitoring change, but as it also reports differences in run times and ages, it falls down a bit. Here there’s no fundamental change to CRYPT.TDE, but it’s still reported.

$ ./dbsat_diff UTF8PDB1-run1_report.json UTF8PDB1-run2_report.json

[this bit is useful showing that the SCOTT user has been fixed]
USER.DEFPWD: Users with Default Passwords
< Status: High Risk
< Summary:
<     Found 1 unlocked user account with default password.
< Details:
<     Users with default password: SCOTT
---
> Status: Pass
> Summary:
>     No unlocked user accounts are using default password.


[this is less useful - as nothing has changed except the run date...]
CRYPT.TDE: Transparent Data Encryption
< Status: Evaluate
< Summary:
<     Found 2 encrypted tablespaces. Found 7 unencrypted tablespaces. Found 1
<         encrypted column. Examined 1 initialization parameter.
< Details:
<     Unencrypted tablespaces: DATA, GONE, INDEXES, SYSAUX, SYSTEM, TEMP,
<         UNDOTBS1
<     Encrypted tablespaces: ENC_TS (AES256), USERS (AES256)
<
<     Encrypted columns: Column C2 of NEIL.T_ENC (AES 192 bits key)
<
<     ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY. Recommended value is ALWAYS for
<         databases running on-premises.
<     It has been 471 days since master encryption key was last rotated.
---
> Status: Evaluate
> Summary:
>     Found 2 encrypted tablespaces. Found 7 unencrypted tablespaces. Found 1
>         encrypted column. Examined 1 initialization parameter.
> Details:
>     Unencrypted tablespaces: DATA, GONE, INDEXES, SYSAUX, SYSTEM, TEMP,
>         UNDOTBS1
>     Encrypted tablespaces: ENC_TS (AES256), USERS (AES256)
>
>     Encrypted columns: Column C2 of NEIL.T_ENC (AES 192 bits key)
>
>     ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY. Recommended value is ALWAYS for
>         databases running on-premises.
>     It has been 472 days since master encryption key was last rotated.

dbsat_extract, which is helpful for looking for specific sections and pulling out only the specified output.

Extracting 2 sections. Adding -v would give the Details as well as the summary.

$ ./dbsat_extract -i CRYPT.TDE -i USER.DEFPWD UTF8PDB1-2_report.json
=== UTF8PDB1-2_report.json: UTF8 UTF8PDB1 (PDB:3) Wed May 10 2023 19:20:49 UTC+01:00

CRYPT.TDE: Transparent Data Encryption
| Status: Evaluate
| Summary:
|     Found 2 encrypted tablespaces. Found 7 unencrypted tablespaces. Found 1
|         encrypted column. Examined 1 initialization parameter.

USER.DEFPWD: Users with Default Passwords
| Status: Pass
| Summary:
|     No unlocked user accounts are using default password.

DATA SAFE

Oracle offers a (relatively cheap) cloud product called Data Safe. It is basically the same as DBSAT but has an automated dashboard which will allow you to see exactly how you are performing against CIS/STIG benchmarks with minimal internal effort. It doesn’t fix anything, but it is an easier implementation route. Note that it is a cloud product, so if you intend to use it to monitor on-premises databases you will need to provide a route from Oracle Cloud Infrastructure to your database to allow it to be monitored.

SPLUNK

Another alternative would be to use something like Splunk or Grafana, but feed it the JSON output from the DBSAT collection, allowing you to have your own dashboard/graphs showing risks and advisories without having to live with the extended output from DBSAT.

DIY

If you are a bit old-school and like creating this sort of thing yourself, it’s just a matter of understanding the SQL needed to collect the information in whatever format you like. The SQL run by DBSAT is held within a file called “sat_collector.sql”, which is a very good place to start. There is also SQL within the CIS Benchmarks, which can help. There are advantages to this route, as it will also allow application- and company-specific information to be gathered (e.g. if you are extracting audit records regularly, ensure that the audit extraction account is actually connecting and extracting them!). However, you will need to maintain the code yourself, as security standards change and databases are upgraded.
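As a minimal DIY sketch, here are two checks of the kind sat_collector.sql performs, written directly against the data dictionary. These are illustrative starting points, not a replacement for the full DBSAT rule set:

```
-- Sketch: accounts still using a default password, with their status
SELECT d.username, u.account_status
  FROM dba_users_with_defpwd d
  JOIN dba_users u ON u.username = d.username;

-- Sketch: profiles with no password complexity enforced
SELECT profile, limit
  FROM dba_profiles
 WHERE resource_name = 'PASSWORD_VERIFY_FUNCTION'
   AND limit = 'NULL';
```

Feed the output into whatever reporting format you maintain, and re-run it on a schedule.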

OK, so much for monitoring and identifying issues; what about fixing a few of them? Let’s start with: Fundamental Security Part Six – Network Encryption

Other Considerations

However, I suspect your estate contains more than just Oracle. There is a reasonable chance that you will be monitoring other database systems, such as Postgres, SQL Server, MySQL, MongoDB and a whole host more, and you need to consider the operating system for most database implementations too. You need to decide what the appropriate monitoring solution is for such a configuration, as the solutions above are Oracle-centric.

The best way to monitor Oracle overall is Oracle Enterprise Manager. It has far more rules and knowledge about Oracle than any other monitoring solution, but it does not present a security dashboard. There is a reasonable chance that you will want a “single pane of glass” to present to management detailing the overall security configuration across your entire estate. As such, a tailored solution using Splunk or Grafana might be appropriate. You may also want to consider 3rd-party solutions such as DataDog or dedicated security software like Tenable Nessus. All 3rd-party software presents a challenge: ingesting disparate data formats and coalescing them into a single dashboard. It can be difficult, but knowing when configuration changes and vulnerabilities appear is critical to helping secure your infrastructure.

Fundamental Security Part Four – Permissions Checking

July 2023 – Oracle 19C

So in the previous post we talked about identifying unused and historic accounts. Old accounts should be removed, but what are the capabilities of the existing users who are accessing the systems?

A surprisingly large number of applications request that the schema owner, and maybe the primary application connection, has DBA-level rights. This is always excessive and should be avoided.

Most DBAs don’t need DBA rights most of the time. It makes a lot of sense to have a company-specific “DBA” role with minimal permissions. Most day-to-day checking tasks are read-only. It is sensible to have a fairly low-privilege account for day-to-day access and an admin account (preferably accessed via raising a ticket in response to an incident or change request) for when a high-privilege change is needed.

Your system permissions should be allocated on the principle of least privilege: only grant those privileges and resources explicitly and legitimately required to perform the function.
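A low-privilege day-to-day role could be sketched like this. The role name is illustrative, and the grants are a deliberately minimal starting point – extend them to match what your own daily checks actually need:

```
-- Sketch: a minimal read-mostly role for day-to-day DBA checks.
-- Role name and grant list are examples, not a standard.
CREATE ROLE daily_dba_role;
GRANT CREATE SESSION        TO daily_dba_role;
GRANT SELECT ANY DICTIONARY TO daily_dba_role;  -- read DBA_* and V$ views
GRANT daily_dba_role TO neil;
```

Note that SELECT ANY DICTIONARY is itself reasonably powerful (it exposes metadata), so restrict the role to your DBA team.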

Who Has DBA?

SELECT * FROM dba_role_privs 
WHERE granted_role = 'DBA'
ORDER BY grantee;

GRANTEE                        GRANTED_ROLE       ADM DEL DEF COM INH
------------------------------ ------------------ --- --- --- --- ---
APP_SCHEMA                     DBA                NO  NO  YES NO  NO
ERIK                           DBA                NO  NO  YES NO  NO
MARTIN                         DBA                NO  NO  YES NO  NO
NEIL                           DBA                NO  NO  YES NO  NO
TIM                            DBA                NO  NO  YES NO  NO
SYS                            DBA                YES NO  YES YES YES
SYSTEM                         DBA                NO  NO  YES YES YES

Identify who really needs DBA privs. NEIL the DBA does, but does ERIK the Developer need such high-level privs? This is unlikely, even in a dedicated Development database.

However, you do need to be aware of other privileges. A CIS-level audit will pick up on DBA privs, but will not necessarily spot who has been granted the IMP_FULL_DATABASE role (or, more powerfully, DATAPUMP_IMP_FULL_DATABASE). These roles contain very high-level privileges such as ALTER DATABASE, DELETE ANY TABLE, and BECOME USER, which should be highly restricted.

SELECT * FROM dba_role_privs 
WHERE granted_role = 'IMP_FULL_DATABASE'
ORDER BY grantee;

GRANTEE                        GRANTED_ROLE       ADM DEL DEF COM INH
------------------------------ ------------------ --- --- --- --- ---
DATAPUMP_IMP_FULL_DATABASE     IMP_FULL_DATABASE  NO  NO  YES YES YES
DBA                            IMP_FULL_DATABASE  NO  NO  YES YES YES
SCOTT                          IMP_FULL_DATABASE  NO  NO  YES NO  NO
SYS                            IMP_FULL_DATABASE  YES NO  YES YES YES

PUBLIC Grants

The above roles are fairly well known. A lesser-known risk is the number of packages with EXECUTE granted to PUBLIC. Anyone with access to the database can use these, and there are over 2,500 such grants in Oracle 19.13. The CIS security standard helps to identify some of the higher-risk grants, to packages such as UTL_TCP (which allows access to [corrupt] the TCP stream), UTL_HTTP (which could send information to external websites), DBMS_SQL (which, if used incorrectly, allows SQL Injection attacks), and DBMS_RANDOM (because it’s not very random and shouldn’t be used for any serious work!)

Sample of potentially insecure packages granted to public:

Network Security
DBMS_LDAP 
UTL_INADDR 
UTL_TCP 
UTL_MAIL 
UTL_SMTP 
UTL_DBWS 
UTL_ORAMTS 
UTL_HTTP 
HTTPURITYPE 

File Security
DBMS_ADVISOR
DBMS_LOB
UTL_FILE

Encryption
DBMS_CRYPTO
DBMS_OBFUSCATION_TOOLKIT
DBMS_RANDOM

Java
DBMS_JAVA
DBMS_JAVA_TEST

Scheduler
DBMS_SCHEDULER
DBMS_JOB

SQL Injection Helpers
DBMS_SQL
DBMS_XMLGEN
DBMS_XMLQUERY
DBMS_XMLSTORE
DBMS_XMLSAVE
DBMS_REDACT

(please review the CIS Standards for a list of the packages in your database release)

These grants can be seen in DBA_TAB_PRIVS with a privilege of EXECUTE (the view isn’t only for tables).
It is good practice to revoke all of these packages from PUBLIC:
[REVOKE EXECUTE ON <pkg> FROM PUBLIC; ]

WARNING: If you revoke these privileges from PUBLIC, you may find that your application stops working. You need to identify if any of the above packages are used and grant them explicitly to only the accounts which need them. The most common grant where you would need to do this would be for DBMS_LOB, but any of the above packages may be used by your apps or users. You need to discover which are being used (running checks against the application source code and view DBA_SOURCE, as well as monitoring the contents of GV$SQL and DBA_HIST_SQLTEXT are good starting points for this.)
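Before revoking, a usage check along these lines can help. DBMS_LOB is used as the example package; substitute each package you intend to revoke (and remember GV$SQL only holds recent statements, so check DBA_HIST_SQLTEXT too if you licence the Diagnostics Pack):

```
-- Sketch: find stored code referencing a package before revoking it from PUBLIC
SELECT owner, name, type, line, text
  FROM dba_source
 WHERE UPPER(text) LIKE '%DBMS_LOB%'
   AND owner NOT IN (SELECT username FROM dba_users WHERE oracle_maintained = 'Y');

-- Sketch: recently parsed SQL referencing the package, and who parsed it
SELECT sql_id, parsing_schema_name, sql_text
  FROM gv$sql
 WHERE UPPER(sql_text) LIKE '%DBMS_LOB%';
```

Any schema which shows up legitimately gets a direct grant; everything else loses access when you revoke from PUBLIC.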

Dangerous Grants

There are a number of very powerful and sensitive tables and procedures within Oracle where we need to monitor the grants carefully. Access to these is not granted by default, and we need to ensure this does not happen by accident!

Powerful Procedures
DBMS_BACKUP_RESTORE
DBMS_FILE_TRANSFER
DBMS_SYS_SQL
DBMS_REPCAT_SQL_UTL
INITJVMAUX
DBMS_AQADM_SYS
DBMS_STREAMS_RPC
DBMS_PRVTAQIM
LTADM
DBMS_IJOB
DBMS_PDB_EXEC_SQL

Sensitive Tables
CDB_LOCAL_ADMINAUTH$
DEFAULT_PWD$
ENC$
HISTGRM$
HIST_HEAD$
LINK$
PDB_SYNC$
SCHEDULER$_CREDENTIAL
USER$
USER_HISTORY$
XS$VERIFIERS

You may be wondering what is so sensitive about these tables and procedures. The DBMS_SYS_SQL package could allow a user to run code as a different user without entering valid credentials. Table HISTGRM$ contains samples of (or maybe all) data from table columns. Access to these objects is a notable security risk which needs to be taken seriously.

Here’s a useful bit of SQL which will look for access to these objects

SELECT owner, table_name, grantee, privilege, type FROM dba_tab_privs
WHERE table_name IN ('DBMS_LDAP', 'UTL_INADDR', 'UTL_TCP', 'UTL_MAIL', 'UTL_SMTP',
'UTL_DBWS', 'UTL_ORAMTS', 'UTL_HTTP', 'HTTPURITYPE', 'DBMS_ADVISOR', 'DBMS_LOB',
'UTL_FILE', 'DBMS_CRYPTO', 'DBMS_OBFUSCATION_TOOLKIT', 'DBMS_RANDOM', 'DBMS_JAVA',
'DBMS_JAVA_TEST', 'DBMS_SCHEDULER', 'DBMS_JOB', 'DBMS_SQL', 'DBMS_XMLGEN',
'DBMS_XMLQUERY', 'DBMS_XMLSTORE', 'DBMS_XMLSAVE', 'DBMS_REDACT',
'CDB_LOCAL_ADMINAUTH$', 'DEFAULT_PWD$', 'ENC$', 'HISTGRM$', 'HIST_HEAD$', 'LINK$',
'PDB_SYNC$', 'SCHEDULER$_CREDENTIAL', 'USER$', 'USER_HISTORY$', 'XS$VERIFIERS',
'DBMS_BACKUP_RESTORE', 'DBMS_FILE_TRANSFER', 'DBMS_SYS_SQL', 'DBMS_REPCAT_SQL_UTL',
'INITJVMAUX', 'DBMS_AQADM_SYS', 'DBMS_STREAMS_RPC', 'DBMS_PRVTAQIM', 'LTADM',
'DBMS_IJOB', 'DBMS_PDB_EXEC_SQL')
ORDER BY owner, table_name;


OWN TABLE_NAME                GRANTEE              PRIVILEGE  TYPE
SYS DBMS_ADVISOR              PUBLIC               EXECUTE    PACKAGE
SYS DBMS_BACKUP_RESTORE       SYSBACKUP            EXECUTE    PACKAGE
SYS DBMS_FILE_TRANSFER        EXECUTE_CATALOG_ROLE EXECUTE    PACKAGE
SYS DBMS_IJOB                 IMP_FULL_DATABASE    EXECUTE    PACKAGE
SYS DBMS_JAVA                 PUBLIC               EXECUTE    PACKAGE
SYS DBMS_JOB                  PUBLIC               EXECUTE    PACKAGE
SYS DBMS_LDAP                 PUBLIC               EXECUTE    PACKAGE
SYS DBMS_LOB                  WMSYS                EXECUTE    PACKAGE
SYS DBMS_LOB                  PUBLIC               EXECUTE    PACKAGE
SYS DBMS_LOB                  ORDPLUGINS           EXECUTE    PACKAGE
SYS DBMS_LOB                  ORDSYS               EXECUTE    PACKAGE
SYS DBMS_OBFUSCATION_TOOLKIT  PUBLIC               EXECUTE    PACKAGE
SYS DBMS_PDB_EXEC_SQL         XDB                  EXECUTE    PROCEDURE
SYS DBMS_PDB_EXEC_SQL         AUDSYS               EXECUTE    PROCEDURE
SYS DBMS_RANDOM               ORDSYS               EXECUTE    PACKAGE
SYS DBMS_RANDOM               PUBLIC               EXECUTE    PACKAGE
SYS DBMS_REDACT               IMP_FULL_DATABASE    EXECUTE    PACKAGE
SYS DBMS_REDACT               EXECUTE_CATALOG_ROLE EXECUTE    PACKAGE
SYS DBMS_SCHEDULER            PUBLIC               EXECUTE    PACKAGE
SYS DBMS_SCHEDULER            MDSYS                EXECUTE    PACKAGE
SYS DBMS_SCHEDULER            AUDSYS               EXECUTE    PACKAGE
SYS DBMS_SQL                  AUDSYS               EXECUTE    PACKAGE
SYS DBMS_SQL                  PUBLIC               EXECUTE    PACKAGE
SYS DBMS_SQL                  DVSYS                EXECUTE    PACKAGE
SYS DBMS_SQL                  ORDSYS               EXECUTE    PACKAGE
SYS DBMS_STREAMS_RPC          EXECUTE_CATALOG_ROLE EXECUTE    PACKAGE
SYS DBMS_XMLGEN               PUBLIC               EXECUTE    PACKAGE
SYS DBMS_XMLQUERY             PUBLIC               EXECUTE    PACKAGE
SYS HISTGRM$                  NEIL                 SELECT     TABLE
SYS HTTPURITYPE               PUBLIC               EXECUTE    TYPE
SYS UTL_FILE                  WMSYS                EXECUTE    PACKAGE
SYS UTL_FILE                  PUBLIC               EXECUTE    PACKAGE
SYS UTL_FILE                  ORDPLUGINS           EXECUTE    PACKAGE
SYS UTL_FILE                  ORDSYS               EXECUTE    PACKAGE
SYS UTL_HTTP                  ORDPLUGINS           EXECUTE    PACKAGE
SYS UTL_HTTP                  PUBLIC               EXECUTE    PACKAGE
SYS UTL_INADDR                PUBLIC               EXECUTE    PACKAGE
SYS UTL_INADDR                DVSYS                EXECUTE    PACKAGE
SYS UTL_SMTP                  PUBLIC               EXECUTE    PACKAGE
SYS UTL_TCP                   PUBLIC               EXECUTE    PACKAGE

High Level Permissions

Another source of risk is the various high-level permissions allowing access to objects outside of your own schema. The obvious ones are the “ANY” privileges, such as SELECT ANY TABLE, READ ANY TABLE, EXECUTE ANY PROCEDURE, and GRANT ANY PRIVILEGE.

You need to identify which users have these privileges and consider if a focussed role granting the necessary levels of permissions is a viable solution.

SELECT dsp.grantee
      ,dsp.privilege
      ,dsp.admin_option
      ,decode(dr.role,null,'USER','ROLE') user_or_role
      ,LISTAGG(DISTINCT drp.grantee,', ' ON OVERFLOW TRUNCATE) WITHIN GROUP (ORDER BY drp.grantee) GRANTED_TO
  FROM dba_sys_privs dsp LEFT OUTER JOIN dba_roles dr ON (dsp.grantee=dr.role)
                         LEFT OUTER JOIN dba_role_privs drp ON (dsp.grantee=drp.granted_role)
 WHERE dsp.privilege LIKE '%ANY%'
    -- exclude Oracle users
   AND dsp.grantee NOT IN (SELECT du.username FROM dba_users du WHERE oracle_maintained = 'Y')
 GROUP BY dsp.grantee,dsp.privilege,dsp.admin_option,decode(dr.role,null,'USER','ROLE')
 ORDER BY user_or_role,dsp.grantee,dsp.privilege
/

GRANTEE                    PRIVILEGE            ADM USER GRANTED_TO
-------------------------- -------------------- --- ---- ------------------------------
AQ_ADMINISTRATOR_ROLE      DEQUEUE ANY QUEUE    YES ROLE GSMADMIN_ROLE, GSMCATUSER, MDS
AQ_ADMINISTRATOR_ROLE      ENQUEUE ANY QUEUE    YES ROLE GSMADMIN_ROLE, GSMCATUSER, MDS
AQ_ADMINISTRATOR_ROLE      MANAGE ANY QUEUE     YES ROLE GSMADMIN_ROLE, GSMCATUSER, MDS
AUDIT_ADMIN                AUDIT ANY            NO  ROLE SYS
...
DATAPUMP_IMP_FULL_DATABASE GRANT ANY ROLE       NO  ROLE DBA, GSMADMIN_INTERNAL, SYS
DATAPUMP_IMP_FULL_DATABASE SELECT ANY TABLE     NO  ROLE DBA, GSMADMIN_INTERNAL, SYS
DBA                        ALTER ANY ASSEMBLY   NO  ROLE APP_SCHEMA, DODGY_DBA, NEIL,
...
SCHEDULER_ADMIN            EXECUTE ANY CLASS    YES ROLE DBA, SYS
SCHEDULER_ADMIN            EXECUTE ANY PROGRAM  YES ROLE DBA, SYS
APP_USER                   DELETE ANY TABLE     NO  USER 
APP_USER                   INSERT ANY TABLE     NO  USER 
APP_USER                   SELECT ANY TABLE     NO  USER 
APP_USER                   UPDATE ANY TABLE     NO  USER 
NEIL2                      SELECT ANY TABLE     NO  USER 

Please note that this is just the ‘%ANY%’ privileges. You need to be aware of users with other very high-level privileges such as ALTER DATABASE, ALTER SYSTEM, AUDIT SYSTEM, ALTER USER, DROP USER and DROP PUBLIC SYNONYM. Hint: a good starting point is to remove the dsp.privilege LIKE '%ANY%' predicate and look at what permissions SYS has granted to it. Privileges are added and removed with every Oracle release and you need to understand what they are doing.
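A direct check for those non-“ANY” privileges could be sketched as follows. The privilege list here is illustrative, not exhaustive – adjust it per release:

```
-- Sketch: who holds other high-level system privileges (excluding Oracle-maintained users)
SELECT grantee, privilege, admin_option
  FROM dba_sys_privs
 WHERE privilege IN ('ALTER DATABASE', 'ALTER SYSTEM', 'AUDIT SYSTEM',
                     'ALTER USER', 'DROP USER', 'DROP PUBLIC SYNONYM')
   AND grantee NOT IN (SELECT username FROM dba_users WHERE oracle_maintained = 'Y')
 ORDER BY privilege, grantee;
```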

Understand who has access to what, and if that access is justified. If not, determine how to reduce the access safely, without compromising your system availability whilst tightening your security.

Next we will talk about Observability. Things change. You need to be watching, regularly, to catch the change and ensure it’s valid.

Fundamental Security Part Three – Unused and Historic Accounts

June 2023 – Oracle 19C

So in the previous post we talked about implementing password complexity. Complex passwords are fine, but what about when things change? For example, someone leaves the company to become a professional skydiver. Do you remove their database accounts? Are you even informed by HR that someone with database access has left?

If you have implemented user profiles appropriately, the account – left unused – will lock eventually, which is a good start. Pretty much every organization I investigate has old accounts hanging around in every database. You need to identify these accounts and remove them.

Fortunately, in recent versions of Oracle (12C+), account access is recorded in the LAST_LOGIN column of the DBA_USERS view.

Logins with SYSDBA/SYSBACKUP/etc privs do not update DBA_USERS (well, the underlying table SYS.USER$), as they must be able to connect to a database which is not open. They should update V$PWFILE_USERS.LAST_LOGIN instead, but as of my last test on 19.15, they do not.

If we run a query against DBA_USERS, it can be instructive to see what is going on. Reviewing the ACCOUNT_STATUS, CREATED date and LAST_LOGIN, we should be able to make some preliminary decisions about which accounts should continue to exist, and which accounts should be deleted.

DBA_USERS

SELECT username
      ,oracle_maintained
      ,account_status status
      ,created
      ,nvl(last_login,'never') last_login
  FROM dba_users ORDER BY 2, 1;

USERNAME   O STATUS  CREATED    LAST_LOGIN  my comments
APP_SCHEMA N OPEN    2019-04-18 2022-01-01  <- a schema owner
APP_USER   N OPEN    2019-04-18 2023-06-15  <- the main application user
BKP_USR    N OPEN    2019-04-18             <- no last login. WHY?
MARTIN     N OPEN    2020-11-16 2021-04-20  <- He retired
NEIL       N OPEN    2021-11-15 2023-06-15  <- me!
NIALL      N OPEN    2019-10-02 2020-12-12  <- off skydiving?
SCOTT      N LOCKED  2019-11-15 never       <- should this test schema exist?
TIM        N OPEN    2019-11-17 never       <- old and unused! Delete!
.
AUDSYS     Y LOCKED  2019-04-17 never
CTXSYS     Y LOCKED  2019-04-17 never
.
SYSRAC     Y LOCKED  2019-04-17 never
SYSTEM     Y OPEN    2019-04-17 2021-11-16  <- Do we use this? Should it be locked?
WMSYS      Y LOCKED  2019-04-17 never
XDB        Y LOCKED  2019-04-17 never
XS$NULL    Y LOCKED  2019-04-17 never

Take note of the ORACLE_MAINTAINED flag. You should not delete any accounts with this set to "Y", except under the supervision of Oracle Support, as this may cause system instability or even break the database.
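For the non-Oracle-maintained accounts, a sketch like the following can generate candidate lock statements for review. The 365-day threshold is illustrative – set it to your own standard, and always review the generated list (schema owners legitimately never log in!) before running anything:

```
-- Sketch: generate ACCOUNT LOCK statements for open, non-Oracle-maintained
-- accounts with no recorded login in over a year. Review before executing!
SELECT 'ALTER USER "' || username || '" ACCOUNT LOCK;' AS lock_cmd
  FROM dba_users
 WHERE oracle_maintained = 'N'
   AND account_status = 'OPEN'
   AND (last_login IS NULL
        OR last_login < SYSTIMESTAMP - INTERVAL '365' DAY);
```

Locking first (rather than dropping) gives you a safety period in which anything that breaks can be quickly restored.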

Let’s look at V$PWFILE_USERS too.

$ sqlplus bkp_usr/oracle as sysbackup
Connected.
SQL > show user
USER is "SYSBACKUP"


SELECT username
      ,sysdba
      ,sysoper
      ,sysasm
      ,sysbackup
      ,sysdg
      ,syskm
      ,account_status status
      ,last_login
--    ,lock_date
--    ,expiry_date
--    ,authentication_type
--    ,common
      ,con_id
FROM  V$PWFILE_USERS
order by 1
/

USERNAME   SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM STATUS  LAST_LOGIN         CON_ID
BKP_USR    FALSE FALSE FALSE TRUE  FALSE FALSE OPEN                            3           
SYS        TRUE  TRUE  FALSE FALSE FALSE FALSE OPEN                            0                                           

We can see from the above that BKP_USR is not registering logins as a SYS-privileged account (SYSBACKUP), although it is used daily… We can see that it does indeed have SYSBACKUP privs, but there’s no entry in LAST_LOGIN in either view.

We need to periodically and manually review accounts in the database for anomalies and confirm that they are all correct. Create a regular report for this – there’s a good chance Security would like a copy.


Of course, there is an alternative to managing user populations in every database individually. From 18C, Oracle introduced Centrally Managed Users (CMU). This allows a database to directly authenticate accounts against Microsoft AD, without using an intermediary service such as OID or Oracle Enterprise User Security. I would recommend using MOS Document: 2462012.1 as a starting point, and initially integrating administrative users before opening up to the wider user population. Maybe not for schemas, though?

Next, we will check who has what permissions, and if there are any security surprises in there: Fundamental Security Part Four – Permissions Checking

Fundamental Security Part Two – Password Complexity and Defaults

June 2023 – Oracle 19C

So in the previous post we saw the creation of a PROFILE to add controls on user passwords, but we did not specify password complexity. By default, this is set to NULL in the profile, meaning no password complexity is enforced. This is a bad practice and must be changed.

PROFILE          RESOURCE_NAME             LIMIT
---------------- ------------------------- ---------
DEFAULT          PASSWORD_VERIFY_FUNCTION  NULL

Oracle provides a number of built-in password verification functions to set minimum length, complexity and the amount of character change required when entering a new password. Their details are as follows (Note that in 19C, VERIFY_FUNCTION_11G  and ORA12C_VERIFY_FUNCTION  are deprecated but still available)

Function Name                   Length  Upper  Lower  Numeric  Special  Differs-By
ORA12C_STIG_VERIFY_FUNCTION       15      1      1       1        1         8
ORA12C_STRONG_VERIFY_FUNCTION      9      2      2       2        2         4
ORA12C_VERIFY_FUNCTION             8      1      1       1        0         3
VERIFY_FUNCTION_11G               10      1      1       0        0         3

Of the 4 built-in options, the ORA12C_STIG_VERIFY_FUNCTION is the “best” due to the level of complexity, but mainly the length. Passwords need to be long to make them difficult to decrypt or crack. Having a guaranteed mix of upper case, lower case, numeric and special characters means that there are a lot of possible options for each character position (over 70).

It is possible to write your own password complexity function, and Oracle provides the code for the above options in clear text in the file $ORACLE_HOME/rdbms/admin/catpvf.sql


Simple example of creating your own custom password verification function based upon the built-in Oracle functions ora_complexity_check and ora_string_distance:


CREATE OR REPLACE FUNCTION custom_password_verify (
    username     VARCHAR2
  , password     VARCHAR2
  , old_password VARCHAR2
) RETURN BOOLEAN IS
    differ  INTEGER;
BEGIN
    IF NOT ora_complexity_check(
                               password
                             , chars => 15
                             , uppercase => 1
                             , lowercase => 1
                             , digit => 1
                             , special => 1
           ) THEN
        RETURN ( false );
    END IF;
    -- Check if the password differs from the previous password by n characters
    IF old_password IS NOT NULL THEN
        differ := ora_string_distance(old_password, password);
        IF differ < 8 THEN
            raise_application_error(-20000, 'password is too similar to previous password');
        END IF;

    END IF;
RETURN ( true );
END;
/

ALTER PROFILE default LIMIT PASSWORD_VERIFY_FUNCTION custom_password_verify;

NOTE: The function ORA_STRING_DISTANCE computes the Levenshtein distance between 2 words, which is the minimum number of single character edits required to change one word into the other.

It can be a good idea to match the complexity required within the Oracle database passwords with the complexity required with your corporate AD or LDAP password validation.

More on using AD or LDAP to authenticate within Oracle in another blog post.

WARNING

The PASSWORD_VERIFY_FUNCTION is a very powerful function which needs to be protected. Given that passwords are passed to it in clear text, and that it is possible to subvert the function to perform illicit actions (such as keeping a backdoor permanently unlocked or recording everyone’s passwords), care should be taken to review and monitor the function for changes and unexpected code.

Default Passwords

There are many standard Oracle accounts in existence, some of which are intrinsic to the running of the system, such as SYS and SYSRAC. Some are automatically installed in the database, such as OUTLN or ORDDATA, and some are simply well known, such as SCOTT or HR. These accounts have default passwords, which are commonly known to every hacker. Oracle provides a view which will display any accounts with a default password: DBA_USERS_WITH_DEFPWD

Installing a vanilla Oracle 19.3 database and running a select yields the following results:


SQL > SELECT * FROM dba_users_with_defpwd;

USERNAME                       PRODUCT
------------------------------ ------------------------------
SYS
SYSTEM
CTXSYS

SQL > conn CTXSYS/CTXSYS
ERROR:
ORA-28000: The account is locked.

As we can see, the system believes SYS and SYSTEM have the default passwords, and CTXSYS definitely does have the default password! Fortunately, it is also locked by default, but it would be a good idea to change the password immediately.

However, SYS and SYSTEM force you to change the password on installation. There’s a minor bug which means the SHA1 hash used to identify default passwords is not updated at installation time. It is a good idea to re-change the SYS and SYSTEM passwords after install to update this, and demonstrate to any audit or compliance officer running the above SELECT that the password is, indeed, not set to the default.
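In practice that re-change is just two ALTER USER statements followed by a re-check. The password values below are placeholders – use your own strong, vaulted passwords:

```
-- Sketch: re-change the SYS and SYSTEM passwords (placeholder values shown),
-- which also refreshes the hash used by DBA_USERS_WITH_DEFPWD
ALTER USER system IDENTIFIED BY "a-New-Strong-Passw0rd!";
ALTER USER sys    IDENTIFIED BY "another-New-Strong-Passw0rd!";

-- SYS and SYSTEM should no longer appear
SELECT * FROM dba_users_with_defpwd;
```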

For more details about the underlying mechanisms which allow this view to work, please check out Pete Finnegan’s blog post here

Now, let’s look at old users…

Fundamental Security Part One – User Profiles

June 2023 – Oracle 19C

Stage 1 with our Fundamental Oracle Security… are you enforcing password rules with your database users and administrators? Do you have a standard to which you are adhering? Are the users forced to change their passwords regularly? Can they change it back again, maybe to something simple? What happens to the account when someone leaves the organisation?

One of the fundamental issues with Oracle is that it provides a lot of tools to enforce user compliance with standards, but a lot of them are turned off or have default settings which don’t comply with best practice. A good place to start is the Centre for Internet Security (CIS) standards, against which many organisations are audited. They’re not perfect, but they’re a good way to ensure a reasonable level of compliance and security.

Oracle provides USER PROFILES to help to manage user accounts. There is a DEFAULT profile, to which all users are allocated in the absence of other controls. However, the defaults are fairly loose and do not provide a great deal of protection when measured against the CIS standards.


DBA_PROFILES

Let’s have a look at the defaults.

SELECT
    profile
  , resource_name
  , limit
FROM
    dba_profiles
WHERE
    resource_type = 'PASSWORD' and PROFILE = 'DEFAULT'
ORDER BY
    profile
  , resource_type
  , resource_name;


PROFILE  RESOURCE_NAME             LIMIT     CIS Recommendation
-------- ------------------------- --------- -----------------------------------
DEFAULT  FAILED_LOGIN_ATTEMPTS     10        FAIL <=   5
DEFAULT  INACTIVE_ACCOUNT_TIME     UNLIMITED FAIL <= 120 days(then lock)
DEFAULT  PASSWORD_GRACE_TIME       7         FAIL <=   5 days
DEFAULT  PASSWORD_LIFE_TIME        180       FAIL <=  90 days(enforced chg)
DEFAULT  PASSWORD_LOCK_TIME        1         PASS >=   1 day (duration locked)
DEFAULT  PASSWORD_REUSE_MAX        UNLIMITED FAIL >=  20     (pwd hist num)
DEFAULT  PASSWORD_REUSE_TIME       UNLIMITED FAIL >= 365 days(pwd hist days)
DEFAULT  PASSWORD_ROLLOVER_TIME    -1
DEFAULT  PASSWORD_VERIFY_FUNCTION  NULL      FAIL >= Password Complexity

As we can see, the DEFAULT profile assigned to users is far from meeting the CIS benchmark. What do these options mean?

Resource Name – Description

FAILED_LOGIN_ATTEMPTS – How many times can I get my password wrong before the account locks?
PASSWORD_LOCK_TIME – How long should the account remain locked following FAILED_LOGIN_ATTEMPTS?
INACTIVE_ACCOUNT_TIME – How long should I wait before locking an unused account automatically?
PASSWORD_GRACE_TIME – When a password expires, for how long should I give the user a warning message and still let them into the database?
PASSWORD_LIFE_TIME – How long before the password needs to be changed?
PASSWORD_REUSE_MAX – How many historic passwords will I remember to prevent password reuse?
PASSWORD_REUSE_TIME – How long will I remember passwords to prevent password reuse?
PASSWORD_ROLLOVER_TIME – New feature from 19.12 – allow an account to remember and accept the previous password, so I don’t need to change every client at exactly the same time. The account has 2 valid passwords for a while! This is extremely useful for application users, where we may have hundreds of client application servers connecting to the database.
PASSWORD_VERIFY_FUNCTION – How complex should the password be? The default is NULL, meaning no password complexity is enforced – a password could be as weak as “X”.
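To see which profile each of your users currently has, and therefore which limits apply to them, you can query DBA_USERS; the ORACLE_MAINTAINED column (available from 12c onwards) filters out Oracle’s own accounts:

```sql
-- List each non-Oracle-maintained user with their assigned profile
SELECT username
     , profile
     , account_status
FROM   dba_users
WHERE  oracle_maintained = 'N'
ORDER  BY profile, username;
```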

How do I enforce better user control?

We need to create one or more profiles which comply with better standards than the defaults supplied. We will probably want more than one profile, as we will generally have more than one type of user – typical examples would be end users directly connecting, application accounts and database administrator accounts.

CREATE PROFILE cis_user_account LIMIT
       FAILED_LOGIN_ATTEMPTS     5
       INACTIVE_ACCOUNT_TIME     60
       PASSWORD_GRACE_TIME       5
       PASSWORD_LIFE_TIME        90
       PASSWORD_LOCK_TIME        1
       PASSWORD_REUSE_MAX        20
       PASSWORD_REUSE_TIME       365
       PASSWORD_ROLLOVER_TIME    0
       PASSWORD_VERIFY_FUNCTION  NULL;

ALTER USER neil_reporting   PROFILE cis_user_account ;
ALTER USER erik_reporting   PROFILE cis_user_account ;
ALTER USER martin_reporting PROFILE cis_user_account ;


CREATE PROFILE cis_app_account LIMIT
       FAILED_LOGIN_ATTEMPTS     5
       INACTIVE_ACCOUNT_TIME     120
       PASSWORD_GRACE_TIME       5
       PASSWORD_LIFE_TIME        365
       PASSWORD_LOCK_TIME        1
       PASSWORD_REUSE_MAX        20
       PASSWORD_REUSE_TIME       365
       PASSWORD_ROLLOVER_TIME    2
       PASSWORD_VERIFY_FUNCTION  NULL;

ALTER USER hr_app_user PROFILE cis_app_account ;

You may note that the cis_app_account profile has a longer PASSWORD_LIFE_TIME (365). This is not good practice, but it may be impractical to change the application account credentials on every application server every few months in the way an end user can. An exemption to the standard might be needed until it is possible to use automated tools such as Ansible and Liquibase to deploy new passwords to the application servers and database. The cis_app_account profile also has a PASSWORD_ROLLOVER_TIME of 2 days to allow a more controlled way to change the application password without stopping all of the application servers at the same time.
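As a sketch of how the gradual rollover works with the cis_app_account profile above (the password shown is a placeholder): change the password, roll the new credentials out to each application server while both passwords are still accepted, then optionally end the rollover early once every server has been updated.

```sql
-- Change the application password; with PASSWORD_ROLLOVER_TIME 2, both the
-- old and the new password are accepted for up to 2 days
ALTER USER hr_app_user IDENTIFIED BY "placeholder_New_Passw0rd#1";

-- Once all application servers hold the new password, end the rollover early
ALTER USER hr_app_user EXPIRE PASSWORD ROLLOVER PERIOD;
```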

WARNING

Changing the PROFILE of a user may have unexpected consequences. A user who has not connected to the database for longer than the INACTIVE_ACCOUNT_TIME may suddenly lock and need to be unlocked. A user who has not changed their password for longer than PASSWORD_LIFE_TIME may likewise suddenly lock. However, this does not happen immediately. There is a background process which periodically checks users against their PROFILE settings and takes action, so the account may lock several hours after you have changed the profile. If that is an application user, you may cause a system outage just when you do not expect it.
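Before moving users to a stricter profile, it is worth checking who would immediately fall foul of the new limits. A minimal sketch, assuming the 60-day INACTIVE_ACCOUNT_TIME from the cis_user_account profile above:

```sql
-- Accounts which have not logged in for 60 days (or never) and would be
-- candidates for automatic locking under the stricter profile
SELECT username
     , account_status
     , last_login
FROM   dba_users
WHERE  oracle_maintained = 'N'
AND    ( last_login < SYSDATE - 60 OR last_login IS NULL );
```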

WARNING

Users who connect with system privileges, such as SYSDBA or SYSBACKUP, have the ability to connect to the database even when the database is not available. As such, they do not update the LAST_LOGIN column in DBA_USERS. This means they are susceptible to being locked out by INACTIVE_ACCOUNT_TIME, as the background process uses DBA_USERS.LAST_LOGIN to determine if the account should be locked. At the time of writing (19.15), these accounts do not update the V$PWFILE_USERS.LAST_LOGIN field either. The SYS account does not lock, so you can always access the database via SYS.
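You can see which accounts are held in the password file, and what last-login information (if any) has been recorded for them, from V$PWFILE_USERS – these columns are available from 12.2 onwards, and per the caveat above LAST_LOGIN may well be NULL:

```sql
-- Privileged accounts held in the password file
SELECT username
     , sysdba
     , sysbackup
     , account_status
     , last_login
FROM   v$pwfile_users;
```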

Password Complexity

But what about password complexity? We don’t want users to be able to use “X” as their password. We should all know that long passwords containing a combination of elements – upper case, lower case, numeric and special characters – are the hardest to crack or brute-force. We have left the PASSWORD_VERIFY_FUNCTION as NULL in the profiles above! That is not compliant with any security standard, so what should we do here? This will take a bit more to explain…
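As a taster before the next post: Oracle ships ready-made verify functions (created by the supplied utlpwdmg.sql script) which can be assigned in place of NULL, for example:

```sql
-- Enforce Oracle's supplied 12c password complexity rules on the profile
ALTER PROFILE cis_user_account LIMIT
      PASSWORD_VERIFY_FUNCTION ora12c_verify_function;
```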

Next, check out: Fundamental Security Part Two – Password Complexity and Defaults

Fundamental Oracle Security

June 2023 – Oracle 19C

Image from https://unsplash.com/@kellysikkema

Frequently I turn up at companies to resolve performance issues and migrate and upgrade systems. However, I have spent a fair bit of time recently working on security, from audit to helping with ransomware issues. The profile of security has increased significantly in recent years and I find that many companies are not even doing the basics. This is a short series of blogs showing the steps that everyone should be taking when commissioning new databases before a drop of code has been deployed. It can also be retrospectively added to existing databases, but that will take longer as there are many opportunities to cause an outage when implementing new security features.

DO NOT THINK BAD THINGS WILL NOT HAPPEN TO YOU!

Given enough time, they will. Make it harder for the hackers and data thieves, especially those within your organization.

The hard part of security is getting everyone to play by the rules. This is especially true of DBAs, who can generally bypass any implemented security – and frequently do, in the name of making their job easier. It’s true that minimising the level of security makes your job easier, but it makes the job of hackers easier too. Hackers aren’t all sat in dark rooms on foreign shores saying “Use SQL to corrupt their databases…”. Many data breaches are internal to your organization. Do you have sufficient monitoring and auditing to catch anyone extracting (large quantities of) your data, and the structures in place to prevent them from externalising it from your organization?

What standards are you looking to enforce? There are quite a few out there. You initially don’t want to be too extreme – switching from loose to very locked down in one step is quite dramatic and will alienate users and administrators alike. A good starting point is to take steps toward compliance with the Center for Internet Security (CIS) Benchmarks. You could then look at the STIG requirements (from the US Department of Defense Security Technical Implementation Guide) for further inspiration, or PCI-DSS standards used by – but not limited to – the payment card industry. For many organisations it will be difficult to comply with CIS standards, never mind anything more stringent – but getting close to compliance will make your systems harder to hack and will keep your data more secure. You also need to consider if you are overburdening the teams with excessive rules and processes to access your systems. The more layers and complexity you introduce, the more people will find workarounds to make their lives easier. Find the right balance.


Implementing Security

Oracle provides a lot of security features as part of the basic license, and a few cost options to enhance security and auditing too. This blog series will largely concentrate on what you get bundled with Enterprise Edition. However, you need to implement the features! There’s no point having locks if you don’t use them!


There are many steps to take when securing a database. Click on each title to view the relevant blog post:

Fundamental Security Part One – User Profiles

Fundamental Security Part Two – Password Complexity and Defaults

Fundamental Security Part Three – Unused and Historic Accounts

Fundamental Security Part Four – Permissions Checking

Fundamental Security Part Five – Observability

Fundamental Security Part Six – Network Encryption

Fundamental Security Part Seven – Data Encryption At Rest

Fundamental Security Part Eight – Unified Audit