Adding a DEFAULT column in 12C
01/12/2014 Leave a comment
I was at a talk recently where Jason Arneil gave an update about adding columns with DEFAULT values to tables in Oracle 12C. In 11G, the fast metadata-only addition of a DEFAULT column required the column to be NOT NULL; in 12C that restriction has been lifted, and Oracle now cleverly intercepts the null value and replaces it with the DEFAULT from the meta-data, without storing it in the table. To repeat the 11G experiment I ran recently:
SQL> alter table ncha.tab1 add (filler_default char(1000) default 'EXPAND' not null);

Table altered.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       1504          0        2017

In both releases we then issue:

SQL> alter table ncha.tab1 modify (filler_default null);

Table altered.

IN 11G:

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       3394          0        2017

BUT IN 12C:

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       1504          0        2017
So, as we can see, making the column NULLABLE in 12C didn’t cause Oracle to go through and update every row in the way it must in 11G. It’s still a chained-row update accident waiting to happen, but it’s a more flexible accident 🙂
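If you want to see where the “free” default actually lives, the data dictionary gives a clue. This is a sketch, assuming the NCHA.TAB1 table from above and DBA_TAB_COLS access; the hidden-column behaviour is internal and worth verifying on your own release:

```sql
-- The DEFAULT is visible as dictionary metadata on the column definition
select column_name, data_default, hidden_column
from   dba_tab_cols
where  owner = 'NCHA'
and    table_name = 'TAB1';
```

On 12C you may also spot a system-generated hidden column alongside FILLER_DEFAULT; that appears to be how Oracle tells “never populated, use the metadata default” apart from a genuinely stored value.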
However, I think it’s worth pointing out that you only get “free data storage” when you add the column. When inserting a record, simply having a column with a DEFAULT value means the DEFAULT gets physically stored with the record if the column is not specified. The meta-data effect applies ONLY to subsequently added columns with DEFAULT values.
SQL> create table ncha.tab1 (pk number, c2 timestamp, filler char(1000), filler2 char(1000) DEFAULT 'FILLER2' NOT NULL) pctfree 1;
Table created.
SQL> alter table ncha.tab1 add constraint tab1_pk primary key (pk);
Table altered.
Insert 10,000 rows into the table, but without specifying FILLER2, so it picks up the DEFAULT:
SQL> insert into ncha.tab1 (pk, c2, filler) select rownum id, sysdate, 'A' from dual connect by level <= 10000;
commit;
Commit complete.
Gather some stats and have a look after loading the table. Check for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);
PL/SQL procedure successfully completed.
SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
from user_tables where table_name = 'TAB1';
TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       3394          0        2017
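The chained-row check mentioned above isn’t shown in this listing; it follows the same pattern used at the end of the post. A sketch, assuming the CHAINED_ROWS target table has already been created with the standard utlchain.sql script:

```sql
-- Target table for ANALYZE ... LIST CHAINED ROWS, if not already present:
-- @?/rdbms/admin/utlchain.sql

analyze table ncha.tab1 list chained rows into chained_rows;

select count(*) chained_rows from chained_rows;
```

With ~2K rows freshly loaded into 8K blocks, we’d expect this count to come back as zero at this point.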
For a bit of fun, I thought I would see just how weird the stats might look if I played around with adding defaults:
SQL> drop table ncha.tab1;

Table dropped.

SQL> create table ncha.tab1 (pk number) pctfree 1;

Table created.

SQL> alter table ncha.tab1 add constraint tab1_pk primary key (pk);

Table altered.

Insert 10,000 rows into the table:

SQL> insert into ncha.tab1 (pk)
     select rownum id from dual connect by level <= 10000;

commit;

Commit complete.

Gather some stats and have a look after loading the table. Check for chained rows at the same time.

SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
  2  from user_tables
  3  where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000         20          0           4

Now let's add a lot of defaults:

SQL> alter table ncha.tab1 add (filler_1 char(2000) default 'F1' not null,
                                filler_2 char(2000) default 'F2' null,
                                filler_3 char(2000) default 'F3',
                                filler_4 char(2000) default 'how big?' null );

Table altered.

Gather some stats and have a look after adding the columns. Check for chained rows at the same time.

SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
  2  from user_tables
  3  where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000         20          0        8008
10,000 rows with an AVG_ROW_LEN of 8008, all in 20 blocks. Magic!
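If the “magic” seems too good, you can cross-check the stats against the segment itself. A sketch, assuming an 8K block size; USER_SEGMENTS reports space actually allocated, and the arithmetic shows the gap:

```sql
-- What the stats claim the data would need:
-- 10,000 rows * 8,008 bytes ~= 76MB, yet the table sits in 20 blocks.

select segment_name, blocks, bytes
from   user_segments
where  segment_name = 'TAB1';
```

We’d expect the blocks reported here to agree with USER_TABLES.BLOCKS (plus a little segment overhead), not with num_rows * avg_row_len.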
Just to finish off, let’s update each DEFAULT column so the table expands…
SQL> select filler_1, filler_2, filler_3, filler_4, count(*)
     from ncha.tab1
     group by filler_1, filler_2, filler_3, filler_4;

FILLER_1   FILLER_2   FILLER_3   FILLER_4     COUNT(*)
---------- ---------- ---------- ---------- ----------
F1         F2         F3         how big?        10000

So it's all there. The metadata is intercepting the nulls and converting them to the default on the fly, rather than storing them in the blocks. So what happens if we actually UPDATE the table?

SQL> update ncha.tab1 set filler_1 = 'EXPAND', filler_2 = 'EXPAND',
     filler_3 = 'EXPAND', filler_4 = 'THIS BIG!';

10000 rows updated.

SQL> select filler_1, filler_2, filler_3, filler_4, count(*)
     from ncha.tab1
     group by filler_1, filler_2, filler_3, filler_4;

FILLER_1   FILLER_2   FILLER_3   FILLER_4     COUNT(*)
---------- ---------- ---------- ---------- ----------
EXPAND     EXPAND     EXPAND     THIS BIG!       10000

Gather some stats and have a look after the update, checking for chained rows at the same time.

SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000      19277          0        8010

SQL> analyze table tab1 list chained rows into chained_rows;

Table analyzed.

SQL> select count(*) CHAINED_ROWS from chained_rows;

CHAINED_ROWS
------------
       10000
Yep. That’s bigger.
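Once the update has really expanded the rows, the chaining doesn’t go away by itself. A sketch of the usual clean-up, hedged: a move rewrites the table into fresh blocks (which fixes migrated rows but leaves indexes unusable), though with an avg_row_len of 8010 in an 8K block these rows may be genuinely chained — too big for one block — in which case a move won’t help and a different column or block-size design would be needed:

```sql
-- Rewrite the table into new blocks (fixes migrated rows, not rows
-- that are simply too large to fit in a single block)
alter table ncha.tab1 move;

-- The move leaves indexes UNUSABLE; rebuild them
alter index ncha.tab1_pk rebuild;

-- Re-gather stats and re-run the chained-row check to confirm
exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);
```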