Fundamental Security Part Eight – Unified Audit

January 2024 – Oracle 19C+

If you are on Oracle 12.1 onwards, and you have not explicitly disabled it, you are using Unified Audit

We previously discussed Data Encryption At Rest. Now let's talk about Unified Audit, and why you should be using it


When you create an Oracle database, there is a very small amount of Unified Auditing pre-enabled, and no “traditional” auditing. But why audit? Well, do you know what’s happening in the database? Who is connecting? If anyone is accessing data that they should not? Changing Database Objects? Creating backdoors? What is lurking beneath the surface?

Traditional Oracle Auditing:
It's deprecated in 21C and desupported in 23C. Stop using it. It's difficult to gather all of the data from several diverse locations in the database and on the O/S – and much worse if you have multiple RAC clusters! Convert to Unified Audit. See MOS: 2909718.1 for a converter

Unified Auditing

For some information about enabling Unified Audit in Pure mode and other bits, check this blog post

Unified Auditing uses Policies to decide what to audit. You can enable and disable these if you have appropriate AUDIT_ADMIN privileges. Typically three are enabled by default in 23C (the exact set varies by release and installed options).

SQL> select * from audit_unified_enabled_policies;

POLICY_NAME                    ENABLED_OPT ENTITY_NAME ENTITY SUC FAI
ORA_LOGON_FAILURES             BY USER     ALL USERS   USER   NO  YES
ORA_SECURECONFIG               BY USER     ALL USERS   USER   YES YES
ORA$DICTIONARY_SENS_COL_ACCESS BY USER     ALL USERS   USER   YES YES <- 23C
ORA_DV_DEFAULT_PROTECTION      BY USER     ALL USERS   USER   YES YES <- if DV
ORA_DV_SCHEMA_CHANGES          BY USER     ALL USERS   USER   YES YES <- if DV

ORA_LOGON_FAILURES – detects failed logins
ORA_SECURECONFIG – audits high-level DBA type actions, such as ALTER SYSTEM and CREATE USER
ORA$DICTIONARY_SENS_COL_ACCESS – audits access to columns containing password hashes and sensitive data
ORA_DV_DEFAULT_PROTECTION – Oracle Data Vault Only
ORA_DV_SCHEMA_CHANGES – Oracle Data Vault Only
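
Enabling or disabling any of these policies (given the AUDIT_ADMIN role) is a single statement. A minimal sketch:

-- enable a built-in policy, auditing both successful and failed actions
AUDIT POLICY ora_secureconfig;

-- audit failed attempts only (how ORA_LOGON_FAILURES is enabled by default)
AUDIT POLICY ora_logon_failures WHENEVER NOT SUCCESSFUL;

-- disable a policy again
NOAUDIT POLICY ora_logon_failures;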

There are a further 10 built-in policies (in 23.3)

SELECT DISTINCT policy_name FROM audit_unified_policies;

POLICY_NAME
ORA$DICTIONARY_SENS_COL_ACCESS
ORA_ACCOUNT_MGMT
ORA_ALL_TOPLEVEL_ACTIONS
ORA_CIS_RECOMMENDATIONS
ORA_DATABASE_PARAMETER
etc...

Their precise function can be determined by querying the view AUDIT_UNIFIED_POLICIES. The two most notable are ORA_CIS_RECOMMENDATIONS and ORA_STIG_RECOMMENDATIONS. Enable these two policies, plus a policy to audit database logons and logoffs (in 23.3, enable ORA_LOGON_LOGOFF), and you will be reasonably covered for basic audit as recommended by the Center for Internet Security (CIS) and the Security Technical Implementation Guide (STIG) – the baseline compliance standard for the US Department of Defense (sic)
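
Putting that baseline in place is then just three statements (a sketch – ORA_LOGON_LOGOFF only exists from 23c, so on earlier releases create your own logon policy as shown further down):

AUDIT POLICY ora_cis_recommendations;
AUDIT POLICY ora_stig_recommendations;
-- 23c onwards only:
AUDIT POLICY ora_logon_logoff;

To see exactly what ORA_CIS_RECOMMENDATIONS will audit: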

SELECT policy_name,audit_option,nvl(object_name,'N/A') object_name,audit_option_type
FROM audit_unified_policies
WHERE policy_name = 'ORA_CIS_RECOMMENDATIONS'
ORDER BY 1,2;

POLICY_NAME AUDIT_OPTION OBJECT_NAM AUDIT_OPTION_TYPE
ORA_CIS_RECOMMENDATIONS ALTER DATABASE LINK NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER FUNCTION NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PACKAGE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PACKAGE BODY NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PROCEDURE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER PROFILE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER ROLE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER SYNONYM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER SYSTEM NONE SYSTEM PRIVILEGE
ORA_CIS_RECOMMENDATIONS ALTER SYSTEM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER TRIGGER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS ALTER USER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE DATABASE LINK NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE FUNCTION NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PACKAGE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PACKAGE BODY NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PROCEDURE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE PROFILE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE ROLE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE SYNONYM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE TRIGGER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS CREATE USER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP DATABASE LINK NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP FUNCTION NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PACKAGE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PACKAGE BODY NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PROCEDURE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP PROFILE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP ROLE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP SYNONYM NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP TRIGGER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS DROP USER NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS GRANT NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS REVOKE NONE STANDARD ACTION
ORA_CIS_RECOMMENDATIONS SELECT ANY DICTIONARY NONE SYSTEM PRIVILEGE

Creating your own policy is straightforward. Here are two I recommend creating:

neil_logons: this will capture any connection to the database. As well as identifying all connections (you really want to know who is in the database!), it can also be very helpful when troubleshooting connectivity issues.

neil_any_tab_privs: is anyone using an ANY privilege to access your data? Surely they should be accessing data via explicitly granted privileges. Unless this is a DBA doing some ad-hoc performance tuning or data validation/reconciliation (ideally with an assigned ticket!), you shouldn't see any records for this policy.

CREATE AUDIT POLICY neil_logons
ACTIONS
LOGON, LOGOFF
CONTAINER=CURRENT;

CREATE AUDIT POLICY neil_any_tab_privs
PRIVILEGES
select any table, read any table, insert any table, delete any table, update any table
CONTAINER=CURRENT;

-- and enable them
audit policy neil_logons;
audit policy neil_any_tab_privs;
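
Once enabled, it's worth a quick sanity check that the policies are active and records are arriving. A minimal check (the UNIFIED_AUDIT_POLICIES column can list several comma-separated policies per record, hence the LIKE):

SELECT * FROM audit_unified_enabled_policies
WHERE  policy_name LIKE 'NEIL%';

SELECT event_timestamp, dbusername, action_name, return_code
FROM   unified_audit_trail
WHERE  unified_audit_policies LIKE '%NEIL_LOGONS%'
ORDER BY event_timestamp DESC
FETCH FIRST 10 ROWS ONLY;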

Housekeeping

Now you're gathering audit data. Depending upon the nature of the activity in your database, you could have a LOT of it. You really need to ensure you are managing it correctly – starting with housekeeping. First of all, it's worth putting your audit data into its own tablespace. You also need to decide how much of it to keep within the database for easy access, and ensure it's easy to tidy up. You also need to consider the overspill area – if the audit cannot be written to the DB (e.g. tablespace full, database in startup, read-only standby), it gets written to O/S files in "$ORACLE_BASE/audit/[instance-name]/[PDB-GUID]", which need loading into the DB so they can be tidied up too:

Set the partition frequency to daily - each day gets a new partition. This doesn't speed up deletes, but it does make the cleanup tidier. The partition column is EVENT_TIMESTAMP (i.e. use EVENT_TIMESTAMP rather than EVENT_TIMESTAMP_UTC in your queries):

begin
DBMS_AUDIT_MGMT.ALTER_PARTITION_INTERVAL
 (INTERVAL_NUMBER   => 1,
  INTERVAL_FREQUENCY => 'DAY');
end;
/
Move the creation of new partitions to a dedicated tablespace:

begin
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION
 (AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  AUDIT_TRAIL_LOCATION_VALUE => 'audit_tablespace');
end;
/
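You can confirm both settings have taken effect by querying the audit management configuration (a quick check; the exact parameter names vary slightly by release):

SELECT parameter_name, parameter_value
FROM   dba_audit_mgmt_config_params
WHERE  audit_trail = 'UNIFIED AUDIT TRAIL';
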
Create a DBMS_SCHEDULER job to tidy up:

BEGIN
dbms_scheduler.create_job(q'[AUDIT_HOUSEKEEPING]',
job_type=>'PLSQL_BLOCK', job_action=>
q'[
BEGIN
-- you need to set retention before calling cleanup
dbms_audit_mgmt.set_last_archive_timestamp
   (audit_trail_type       => dbms_audit_mgmt.audit_trail_unified
   ,last_archive_time     => trunc(systimestamp - INTERVAL '3' MONTH));

-- and cleanup
 dbms_audit_mgmt.clean_audit_trail
  (audit_trail_type       => dbms_audit_mgmt.audit_trail_unified
  ,use_last_arch_timestamp => true);

  -- and while we are at it, load overspill files into the table
  DBMS_AUDIT_MGMT.LOAD_UNIFIED_AUDIT_FILES;

END;
]'
,number_of_arguments=>0
,start_date=>trunc(systimestamp + interval '1' day)
,repeat_interval=> q'[FREQ = DAILY; INTERVAL = 1]'
,end_date=>NULL
,job_class=>q'[SCHED$_LOG_ON_ERRORS_CLASS]'
,enabled=>FALSE
,auto_drop=>FALSE
,comments=> q'[Daily clean-up Unified Audit older than 3 months]'
);
COMMIT;
dbms_scheduler.enable(q'[AUDIT_HOUSEKEEPING]');
END;
/
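
Rather than waiting a day, you can run the job immediately and check the outcome (assuming the job name used above):

EXEC dbms_scheduler.run_job('AUDIT_HOUSEKEEPING');

SELECT job_name, status, actual_start_date
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'AUDIT_HOUSEKEEPING'
ORDER BY actual_start_date DESC;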

We’re auditing and we are keeping things tidy. Remember to treat each PDB as a separate database for auditing purposes, and audit the CDB too!

The main thing left to do is to actually monitor the audit to see if anything is happening that you don't expect, such as anyone using SELECT ANY TABLE when they should not be, or modifying objects when there's no approved change. How you do that is very much site-specific, and comes down to how you query UNIFIED_AUDIT_TRAIL.
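
As a starting point, a minimal daily check against the neil_any_tab_privs policy created earlier might look like this (a sketch – tune the filters to your site):

SELECT event_timestamp, dbusername, action_name, object_schema, object_name
FROM   unified_audit_trail
WHERE  unified_audit_policies LIKE '%NEIL_ANY_TAB_PRIVS%'
AND    event_timestamp > systimestamp - INTERVAL '1' DAY
ORDER BY event_timestamp;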

Addendum

The thing is, audit isn’t just for the auditors. It can be great for troubleshooting or just trying to work out what is going on in a system.

User can't connect? Are they in the audit (ACTION_NAME='LOGON', maybe with a failed attempt)? If they are, then they are getting through, past the listener, to the correct database, and you have a RETURN_CODE telling you what the problem is (1005 – they didn't enter a password, 1017 – wrong password, etc).
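
A minimal troubleshooting query (APP_USER is a hypothetical account name):

SELECT event_timestamp, dbusername, userhost, return_code
FROM   unified_audit_trail
WHERE  action_name = 'LOGON'
AND    dbusername  = 'APP_USER'   -- hypothetical user being investigated
ORDER BY event_timestamp DESC;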

Want to know exactly what's going on in a 3rd party or 'black box' app? Audit everything (not in Prod!), run it, and go have a look. You can see every SQL statement, every procedure call, every object created or removed, just like this fella
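
A sketch of such a catch-all policy (again: not in Prod, and expect a torrent of records – APP_USER is a hypothetical account):

CREATE AUDIT POLICY neil_audit_everything
ACTIONS ALL
CONTAINER=CURRENT;

AUDIT POLICY neil_audit_everything BY app_user;

-- and when you are done:
NOAUDIT POLICY neil_audit_everything BY app_user;
DROP AUDIT POLICY neil_audit_everything;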



Well, that's the end of this mini-series on security – I hope you have implemented EVERYTHING on this list. And remember, this is just the starting point! This is the very minimum you need to be doing. Consider your data and what should and should not be easily available. Think about holding the audit apart from the database (e.g. by loading it into Splunk) to prevent DBAs from changing or removing it. Consider exactly what you need to audit AND how to query it to spot anomalies and alert on them in real time. And remember to protect Development every bit as much as Production – hackers love getting into a less secure dev DB to work out the DB structures and minimise the time they need to spend in Prod when they finally get there… Good Luck!

Fundamental Security Part Seven – Data Encryption At Rest

October 2023 – Oracle 19C+


What is Data Encryption At Rest and why should we use it?

We have looked at Data Encryption in Transit, encrypting your network traffic. Everyone should be doing this. But what about encrypting data at rest – the data that is stored permanently on your hard drives?


The simplest way to do this, from a database/DBA perspective, is to use an encrypted storage device. Most SAN/NAS devices come with encryption built in, and encryption of the storage presented to servers is abstracted away to the SAN. The CPU overhead is handled by the SAN and you don't need to worry about it. However, if you are using Exadata or any Oracle Cloud service, this is not an option: you will need to implement Oracle Transparent Data Encryption, which is mandatory in Oracle Cloud. Some cloud providers offer a storage-level encryption option (which is frequently an added cost and is usually not the default!)

Why Do This?

There are 3 reasons to implement Encryption At Rest

1. In case the hardware gets stolen. This may not just be directly from your data centre, but during the decommissioning or break/replace of the hardware too.
2. To stop anyone bypassing database controls and reading or amending the data in your database directly from the files.
3. Because the auditors insist upon it (see points 1 and 2)


Storage-Level Encryption [SLE] v Oracle Transparent Data Encryption [TDE]

Neither Storage-Level Encryption [SLE] nor TDE means your data is safe. The processes running against your data need to see it unencrypted. If the processes can do this, so can suitably privileged Operating System administrators.

No matter what security you have put within your databases to prevent unauthorised access to your data, if your data is unencrypted (or encrypted using SLE where it is presented to the server “unencrypted”), you can run a “dd” (in Linux, other O/S have similar commands) and view the data directly. Don’t think for a moment that because it is in an Oracle internal format that it is difficult to read.

> create table read_me (c1 varchar2(1000)) tablespace users;
Table created.

> insert into read_me values ('I CAN SEE YOU!');
1 row created.

> commit;
Commit complete.

> select segment_name,block_id,blocks from dba_extents where segment_name = 'READ_ME';

SEGMENT_NAME                     BLOCK_ID     BLOCKS
------------------------------ ---------- ----------
READ_ME                               136          8

$ dd if=./users01.dbf bs=8192 skip=136 count=8 | od -xa --endian=big

*
0157740    0000    0000    0000    0000    0000    2c01    010e    4920
        nul nul nul nul nul nul nul nul nul nul   , soh soh  so   I  sp
0157760    4341    4e20    5345    4520    594f    5521    0106    7004
          C   A   N  sp   S   E   E  sp   Y   O   U   ! soh ack   p eot
0160000    06a2    0000    8f00    c003    6f04    4200    0000    0204
        ack   " nul nul  si nul   @ etx   o eot   B nul nul nul stx eot
0160020    73e2    0000    0100    0000    4426    0100    6f04    4200
          s   b nul nul soh nul nul nul   D   & soh nul   o eot   B nul
0160040    0080    0000    0200    3200    8800    c003    0000    0000
        nul nul nul nul stx nul   2 nul  bs nul   @ etx nul nul nul nul
0160060    0000    0000    0000    0000    0000    0000    0000    0000
        nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
*

Encrypting your data on the storage layer DOES NOT PREVENT this sort of access or potential attack.

Oracle Transparent Data Encryption [TDE]

First of all, Oracle Transparent Data Encryption is a COST OPTION, licensed as part of the Advanced Security Option. It is not part of the standard Oracle license.

TDE encrypts the actual Oracle data files. Running a "dd" against an encrypted tablespace reveals no usable information. However, you now need to handle all of the keys to be able to open your databases. Key management is the most important aspect of using TDE: ensuring the keys are available and are not compromised. If you lose the keys, your data is unrecoverable. Remember to back up the keys at the same time as (but not with!) the database, as you will need the relevant key should you need to recover the database.
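
For a flavour of what is involved once the keystore/wallet is configured and open (a sketch only – the password and file path are placeholders, and the full wallet setup is covered in the articles linked below):

-- create (or rotate) the master encryption key
ADMINISTER KEY MANAGEMENT SET KEY
IDENTIFIED BY "MyKeystorePassword1" WITH BACKUP;

-- create a TDE-encrypted tablespace
CREATE TABLESPACE secure_data
DATAFILE '/u01/oradata/DB1/secure_data01.dbf' SIZE 100M
ENCRYPTION USING 'AES256'
DEFAULT STORAGE (ENCRYPT);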

Important note: Most companies who implement TDE do it at the tablespace level. This means the data on disk is encrypted and the CPU overhead is generally low. However, data in memory is NOT encrypted. In Linux, as memory is effectively treated as a file by the O/S, it is possible to read whatever is in your buffer cache using the “dd” method. If you are very skilled (hackers are!), this is also possible from the dom0 layer of the hypervisor, which is accessible by the staff of the cloud provider. Only data encrypted at a column level remains encrypted within the buffer cache. The overhead for accessing column-level encrypted data is high.

It is debatable whether data in the server memory is “at rest”. If you have extremely valuable data, such as credit card information, this should be encrypted by the application (for example by adhering to PCI DSS standards).

For a basic implementation of TDE, I would recommend looking at Tim’s article here (rather than me repeat his excellent website) or one from an Oracle PM here.

When I have time, I’ll do a follow-on article about using a HSM or OKV to manage your keys.


Now I would like to talk about Fundamental Security Part Eight – Unified Audit

Fundamental Security Part Six – Network Encryption

August 2023 – Oracle 19C

We have considered observability: knowing where you may have a security problem. We now need to look at removing some of the main problems associated with Oracle databases. Let's start with Network Security.

By default, all data travelling over the network with Oracle is unencrypted. This means that anyone with something like Wireshark can inspect packets and read your data as it travels between the client/application and the database. This is not secure. You are going to want to encrypt this data.

There are 2 options when it comes to encryption: implement Transport Layer Security (TLS) using certificates, or use Oracle Native Network Encryption and Integrity (note: these used to be part of the Advanced Security Option but are now free with all supported Oracle versions from Oracle 10 onwards)


Transport Layer Security (TLS)

This involves generating and signing a certificate, putting it into an Oracle Wallet on the database server, and including the same certificate on each Oracle client or application server needing to connect to the database. You then need to do the same in reverse on each client, adding the client certificate to the database server wallet. This can provide full standard TLS encryption, but at a cost of certificate management. Certificates must expire, and many companies require certificates to be signed by a Certificate Authority, which should impose a maximum lifespan of 1 year on the certificate. When the certificates expire, they need to be replaced. It is possible to link directly to a CA to assist with this, depending upon your setup, but that is beyond the scope of this blog post. Maybe later…

It’s a good idea to use certificate management software to perform this function, otherwise you probably have a never-ending task “like painting the Forth Bridge“.

WARNING! One of the commonest causes of system outage in the IT industry is an expired certificate

For an example of implementing TLS, please check out this blog post by Tim Hall

Note that TLS will require you to make both server-side and client-side changes (to every client which connects to the database).

To check if your session is running with encrypted TCPS (rather than TCP), you can issue the following command:

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcps

There are alternative ways to configure TLS (this came up in a Twitter conversation). You could just generate your own self-signed certificate on each box, exchange it, and hope nobody compromises it. You could just use a single signed and trusted root cert from your CA. It can be simpler, but there are inevitably trade-offs involved.


Oracle Native Network Encryption and Integrity (ONNEI or ANO)

The first thing you should look to do, if you are not already encrypting network traffic, is to implement Oracle Native Network Encryption and Integrity (formerly the Advanced Networking Option – ANO), largely because it is so simple and easy to implement, and it will also show up where you have older clients which need to be upgraded.

So what does the name mean? Encryption and Integrity. When you encrypt data you need good encryption, but you also need to ensure nobody has changed the data in transit – integrity (or checksumming). The industry-standard algorithms at the moment are AES256 for encryption and SHA384 for crypto-checksumming.

Configuring ONNEI

One of the advantages of using the native encryption is that you can enable it server-side only. You may not need to make any changes at all to the clients, provided the client or (JDBC) drivers are sufficiently up to date. Set it up in REQUESTED mode to start with, so any non-compliant clients will still connect unencrypted.

Edit the sqlnet.ora file in the TNS listener home on the database server(s) and add:

SQLNET.ENCRYPTION_SERVER = REQUESTED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUESTED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA384)

NOTE: the sqlnet.ora file is re-read dynamically every time there is a new database connection. Changing the file will immediately cause new connections to encrypt

Monitor connections to ensure they are encrypting:

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcp

Ah! We are not using TLS, so we are not connecting over TCPS. The encryption is Oracle Native: the packets are not fully encrypted – only the contents (your data!) are encrypted. We therefore need another method to confirm the encryption is in place.

Looking at GV$SESSION_CONNECT_INFO, we can see 2 lines which would not normally be present, showing AES256 and SHA384 encryption. The SQL below restricts the output to your session. Remove the WHERE clause and you can see the encryption for every session. (You can sort-of use this view to see if the connection is using TCPS as the line “TCP/IP NT Protocol Adapter for Linux:…..” will not exist.)

Note that we are using REQUESTED and not REQUIRED at this stage, so some connections may have only 3 lines of output and some may have 5. Also, if you have not specified the level of encryption required, this is reflected in the output too (e.g. in 19C, the default encryption algorithm is AES256, but checksumming defaults to the weak SHA1 algorithm)

ONNEI

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcp

SELECT network_service_banner FROM gv$session_connect_info 
      WHERE sid IN (SELECT DISTINCT sid FROM v$mystat) ORDER BY 1;

NETWORK_SERVICE_BANNER
----------------------------------------------------------------------------
AES256 Encryption service adapter for Linux: Version 19.0.0.0.0 - Production
Crypto-checksumming service for Linux: Version 19.0.0.0.0 - Production
Encryption service for Linux: Version 19.0.0.0.0 - Production
SHA384 Crypto-checksumming service adapter for Linux: Version 19.0.0.0.0 - P
TCP/IP NT Protocol Adapter for Linux: Version 19.0.0.0.0 - Production

TLS

SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') as network_protocol FROM dual;

NETWORK_PROTOCOL
----------------
tcps

SELECT network_service_banner FROM gv$session_connect_info
WHERE sid IN (SELECT DISTINCT sid FROM v$mystat) ORDER BY 1;

NETWORK_SERVICE_BANNER
----------------------
Crypto-checksumming service for Linux: Version 19.0.0.0.0 - Production
Encryption service for Linux: Version 19.0.0.0.0 - Production

(if you are using both ONNEI and TLS, you may also get the SHA384 line)
Once you are happy that all connections are encrypting successfully, change from REQUESTED to REQUIRED. This will reject any connection attempts which cannot encrypt.

SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA384)

All done. One or two changes to sqlnet.ora and you're enforcing encryption over the network.

You can also confirm the protocol (tcp v tcps) used if you are auditing successful logons (you should be!)

SELECT dbusername,authentication_type FROM unified_audit_trail;

dbu  authentication_type
---  -------------------
TIM  (TYPE=(DATABASE));(CLIENT ADDRESS=((ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.194)(PORT=63849))));
NEIL (TYPE=(DATABASE));(CLIENT ADDRESS=((ADDRESS=(PROTOCOL=tcps)(HOST=192.168.56.193)(PORT=44317))));

Performance Impact

All encryption comes with a CPU cost. You will burn more CPU on the client and server encrypting and decrypting traffic. You also add a few milliseconds of latency at connect time for the handshake (although for most database systems using stable connection pools, this should not be noticeable)

Extra CPU is usually in the 1%-5% range for Oracle DB network encryption, although you do need to test as it will vary across applications. There is minimal difference between using TLS and ONNEI. Although the TLSv1.3 handshake has been performance-optimised over TLSv1.2, I have not re-tested for any performance difference at this point, and if you are using connection pooling there will be no noticeable difference.

Conclusion – what should I use?

Both. One of the important things to remember about ONNEI is that it doesn’t use TLS, so it’s not using an “industry standard” protocol. Your security team may insist upon having TLSv1.2 or 1.3. However, it is encrypting/checksumming to the same standard so there is very little to choose between the 2 methods.

If you use pre-exchanged TLS certificates associated only with each server, you do have the advantage that you cannot fall foul of a man-in-the-middle attack. This is possible, although difficult, with ONNEI. Simple eavesdropping will not be possible.

However, you don’t need to choose to get started. You can set the sqlnet.ora parameter:

SQLNET.IGNORE_ANO_ENCRYPTION_FOR_TCPS = TRUE

This allows ONNEI to be used if the connection is not TLS-encrypted, but ignores the ONNEI configuration for connections already encrypted with TLS.
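
So a server-side sqlnet.ora catering for both might look like this (a sketch – merge with your existing settings):

SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA384)
SQLNET.IGNORE_ANO_ENCRYPTION_FOR_TCPS = TRUE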

So, set up ONNEI, monitor it for old client connections, then consider whether you need to explicitly encrypt using TLS for some or all of your connections, with the overhead that entails.

Next, let's talk about Data Encryption At Rest

Fundamental Security Part Five – Observability

July 2023 – Oracle 19C

Now you know what permissions have been granted in your system and to whom, as shown in the previous post. Maybe you have remediated them down to least privilege. Now, how do you ensure the system stays like that? How do you know if you have succeeded in stripping the permissions and users back successfully?

We are going to need to monitor the system and see if anything changes.

Options

You are going to need a consistent and repeatable report and/or dashboard to show you the current state of your estate. There are a number of options, each with pros and cons. Let's talk about DBSAT and Oracle Data Safe, and, briefly, Splunk and doing it yourself.

DBSAT

A good starting point is to use DBSAT. This is a free, semi-supported tool supplied by Oracle which you can run against each database [note: for multitenant, you need to do this against the CDB and every PDB individually] to get a report showing where you comply, or fail to comply, with CIS or STIG rules. The home page is here: https://www.oracle.com/uk/database/technologies/security/dbsat.html [MOS: 2138254.1]

DBSAT can be used in a number of different ways. The simple way is to generate the output from each database and then convert it into a HTML document which you can analyse. It will report pass and failure states, plus indicate where you need to perform more analysis on items as there is no clear “correct” way to implement some elements of security. For example, DBMS_SQL is regarded as a potentially insecure package as badly coded input can allow SQL Injection attacks. As a result, this will be flagged for analysis rather than be an outright failure.

If you have a database called UTF8, with a PDB called UTF8PDB1, you will need to run the collect twice: once for the CDB and once for each PDB. I am using "/ as sysdba", but you may want to have a dedicated account if you are doing this regularly. I am using "-n" so no password is required for the zip. Given the extremely sensitive nature of the output, which would be very useful to hackers, you may want to ensure the output is password-protected.

For the CDB (called UTF8):
$ export ORACLE_SID=UTF8
$ unset ORACLE_PDB_SID
$ ./dbsat collect -n "/ as sysdba" UTF8
this produces a file "UTF8.json"

$ ./dbsat report -a -n UTF8
this produces output based upon the "UTF8.json" file

For the PDB (called UTF8PDB1):
$ export ORACLE_SID=UTF8
$ export ORACLE_PDB_SID=UTF8PDB1
$ ./dbsat collect -n "/ as sysdba" UTF8PDB1
$ ./dbsat report -a -n UTF8PDB1
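
If you have more than a handful of PDBs, a small wrapper script saves typing (a sketch – assumes OS authentication, dbsat in the current directory, and a hypothetical pdb_list.txt containing one PDB name per line):

#!/bin/bash
# Collect and report for the CDB first
export ORACLE_SID=UTF8
unset ORACLE_PDB_SID
./dbsat collect -n "/ as sysdba" UTF8
./dbsat report -a -n UTF8

# Then each PDB listed in pdb_list.txt (hypothetical file)
while read -r pdb; do
  export ORACLE_PDB_SID="$pdb"
  ./dbsat collect -n "/ as sysdba" "$pdb"
  ./dbsat report -a -n "$pdb"
done < pdb_list.txt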

Typical output

$ ls -1 UTF8PDB1*
UTF8PDB1.json
UTF8PDB1_report.html
UTF8PDB1_report.json
UTF8PDB1_report.txt
UTF8PDB1_report.xlsx

CDB Output Header (HTML version) – note the Container identifying this as a CDB.

PDB Output header (txt version). Note the Container showing the PDB name and ID.

We can see that there’s a high risk on the User Accounts. Looking at that section of the report tells us the problem

We can also see a number with the status of Advisory and Evaluate, which need to be manually read, understood and (maybe) resolved. Changing the Advisory below without due diligence may cause your application to stop functioning, as it may be using an older client. However, it indicates you need to upgrade the client and secure the database further:

There are two DBSAT companion utilities which are OK, but could be more useful:

dbsat_diff, which shows the difference between 2 report files. This should be useful for monitoring change, but as it also shows differences in run times and ages it kind-of falls down a bit. Here there's no fundamental change to CRYPT.TDE, but it's still reported.

$ ./dbsat_diff UTF8PDB1-run1_report.json UTF8PDB1-run2_report.json

[this bit is useful showing that the SCOTT user has been fixed]
USER.DEFPWD: Users with Default Passwords
< Status: High Risk
< Summary:
<     Found 1 unlocked user account with default password.
< Details:
<     Users with default password: SCOTT
---
> Status: Pass
> Summary:
>     No unlocked user accounts are using default password.


[this is less useful - as nothing has changed except the run date...]
CRYPT.TDE: Transparent Data Encryption
< Status: Evaluate
< Summary:
<     Found 2 encrypted tablespaces. Found 7 unencrypted tablespaces. Found 1
<         encrypted column. Examined 1 initialization parameter.
< Details:
<     Unencrypted tablespaces: DATA, GONE, INDEXES, SYSAUX, SYSTEM, TEMP,
<         UNDOTBS1
<     Encrypted tablespaces: ENC_TS (AES256), USERS (AES256)
<
<     Encrypted columns: Column C2 of NEIL.T_ENC (AES 192 bits key)
<
<     ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY. Recommended value is ALWAYS for
<         databases running on-premises.
<     It has been 471 days since master encryption key was last rotated.
---
> Status: Evaluate
> Summary:
>     Found 2 encrypted tablespaces. Found 7 unencrypted tablespaces. Found 1
>         encrypted column. Examined 1 initialization parameter.
> Details:
>     Unencrypted tablespaces: DATA, GONE, INDEXES, SYSAUX, SYSTEM, TEMP,
>         UNDOTBS1
>     Encrypted tablespaces: ENC_TS (AES256), USERS (AES256)
>
>     Encrypted columns: Column C2 of NEIL.T_ENC (AES 192 bits key)
>
>     ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY. Recommended value is ALWAYS for
>         databases running on-premises.
>     It has been 472 days since master encryption key was last rotated.

dbsat_extract, which is helpful when looking for specific sections, pulling out only the specified output.

Extracting 2 sections. Adding -v would give the Details as well as the summary.

$ ./dbsat_extract -i CRYPT.TDE -i USER.DEFPWD UTF8PDB1-2_report.json
=== UTF8PDB1-2_report.json: UTF8 UTF8PDB1 (PDB:3) Wed May 10 2023 19:20:49 UTC+01:00

CRYPT.TDE: Transparent Data Encryption
| Status: Evaluate
| Summary:
|     Found 2 encrypted tablespaces. Found 7 unencrypted tablespaces. Found 1
|         encrypted column. Examined 1 initialization parameter.

USER.DEFPWD: Users with Default Passwords
| Status: Pass
| Summary:
|     No unlocked user accounts are using default password.

DATA SAFE

Oracle offers a (relatively cheap) cloud product called Data Safe. It is basically the same as DBSAT but has an automated dashboard which allows you to see exactly how you are performing against CIS/STIG benchmarks with minimal internal effort. It doesn't fix anything, but it is an easier implementation route. Note that it is a cloud product, so if you intend to use it to monitor on-premises databases you will need to provide a route from Oracle Cloud Infrastructure to your database to allow it to be monitored

SPLUNK

Another alternative would be to use something like Splunk or Grafana, feeding it the JSON output from the DBSAT collection, to give you your own dashboards and graphs showing risks and advisories without having to live with the extended output from DBSAT.

DIY

If you are a bit old-school and very much like creating this sort of thing yourself, it's just a matter of understanding the SQL needed to collect the information in whatever format you like. The SQL run by DBSAT is held within a file called "sat_collector.sql", which is a very good place to start. There is also SQL within the CIS Benchmarks which can help. There are advantages to this route, as it will also allow application- and company-specific information to be gathered (e.g. if you are extracting audit records regularly, ensure that the audit extraction account is actually connecting and extracting them!). However, you will need to maintain the code yourself as security standards change and databases are upgraded. An example DIY check follows below.
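
For instance, one of the simplest DIY checks mirrors DBSAT's default-password finding, using the standard DBA_USERS_WITH_DEFPWD view:

SELECT d.username
FROM   dba_users_with_defpwd d
JOIN   dba_users u ON u.username = d.username
WHERE  u.account_status = 'OPEN';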

OK, so much for monitoring and identifying issues – what about fixing a few of them? Let's start with: Fundamental Security Part Six – Network Encryption

Other Considerations

However, I suspect your system is monitoring more than just Oracle. There is a reasonable chance that you will be monitoring other database systems, such as Postgres, SQL Server, MySQL, MongoDB and a whole host more. You need to consider the operating system for most database implementations too. You need to decide what the appropriate monitoring solution is for such a configuration, as the solutions above are Oracle-centric.

Now, the best way to monitor Oracle overall is using Oracle Enterprise Manager. It has far more rules and knowledge about Oracle than any other monitoring solution, but it does not present a security dashboard. There is a reasonable chance that you will want a "single pane of glass" solution to present to management detailing the overall security configuration across your entire estate. As such, a tailored solution using Splunk or Grafana might be appropriate. You may also want to consider 3rd party solutions such as DataDog or dedicated security software like Tenable Nessus.

All 3rd party software will present a challenge, having to deal with the ingestion of disparate data formats and coalescing them into a single dashboard. It can be challenging, but knowing when configuration changes and vulnerabilities appear is critical to helping secure your infrastructure.