SAN Migration and moving files with ASM

Here’s a quick post to help with LUN, SAN or File migration when using Oracle ASM.

Please note that this is just an example method, and is not a definitive guide.
You need to ensure your approach is appropriate for your environment.

Did you know that you can add and drop disks on a diskgroup in a single command?
This saves Oracle from performing two REBALANCE operations, which speeds things up considerably and uses far fewer resources.

sqlplus / as sysasm

ALTER DISKGROUP data
  ADD  DISK 'ORCL:disk101','ORCL:disk102'
  DROP DISK 'disk042'     ,'disk043'
  REBALANCE POWER n;

Diskgroup altered.
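
Before adding disks, it’s worth confirming the new LUNs are visible to ASM and not already in use. A minimal check from the ASM instance (new, unused disks should show a HEADER_STATUS of CANDIDATE or PROVISIONED):

select path, name, header_status, os_mb
  from v$asm_disk
 order by path;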

You can monitor the REBAL operation:

select * from v$asm_operation;

GROUP_NUMBER OPERA PASS      STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE CON_ID
------------ ----- --------- ---- ----- ------ ----- -------- -------- ----------- ---------- ------
           6 REBAL REBALANCE  RUN     1      1    68    20397     3030           8                 0
           6 REBAL COMPAT    WAIT     1      1     0        0        0           0                 0
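
If the rebalance is going too slowly, you can raise the power on the fly; this is standard syntax, but pick a value your I/O subsystem can cope with:

alter diskgroup data rebalance power 8;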

If you are just migrating from one SAN to another, without needing to move files between diskgroups, this is a really easy way to achieve it.

From Oracle 12.1 onwards, if you are moving datafiles from one diskgroup to another, that’s straightforward too:

export ORACLE_SID=ORCL
sqlplus / as sysdba

alter database move datafile '+DATA_DG/ORCL/DATAFILE/system.101.902468275' to '+NEW_DG';
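
If you have many datafiles to move, you can generate the statements rather than typing them. A quick sketch, assuming everything to be moved lives under +DATA_DG:

select 'alter database move datafile ''' || name || ''' to ''+NEW_DG'';'
  from v$datafile
 where name like '+DATA_DG/%';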

Moving logfiles involves a drop and re-create, but it is still a fully online operation.
If you are using Data Guard, take care not to drop logfiles before they have been archived and applied to the standby:

alter database  add logfile member '+NEW_DG' to group 1;
alter database drop logfile member '+DATA_DG/ORCL/ONLINELOG/group_1.102.902049284';
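
Before dropping a member, check that its group is no longer CURRENT or ACTIVE and has been archived; a log switch or two may be needed first:

select group#, thread#, status, archived from v$log;
-- if the group you want to touch is still CURRENT/ACTIVE:
alter system switch logfile;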

Moving tempfiles can be tricky, as the default temporary tablespace gets used very quickly after startup.

You need to add a new tempfile and then get all processes to stop using the old tempfile before you can drop it (check gv$sort_usage; see the sketch below). Worst case, this may involve a SHUTDOWN followed by a STARTUP RESTRICT to stop processes connecting. Alternatively, create a new default TEMP tablespace and convince processes to use that one instead.

alter tablespace temp add  tempfile '+NEW_DG';
alter tablespace temp drop tempfile '+DATA_DG/ORCL/TEMPFILE/temp.204.992482742';
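
One way to see which sessions are still holding temp segments is to join gv$sort_usage to gv$session on the session address; a minimal sketch:

select s.inst_id, s.sid, s.serial#, s.username, u.tablespace, u.blocks
  from gv$sort_usage u
  join gv$session s
    on s.saddr = u.session_addr
   and s.inst_id = u.inst_id;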

Moving controlfiles requires a stop and start of the database; there are no online options here. I tend to do the following:

srvctl stop database -d ORCL

sqlplus / as sysdba
startup nomount
show parameter control_files


rman target /
restore controlfile to '+NEW_DG' from '+DATA_DG/ORCL/CONTROLFILE/current.291.939274434';

sqlplus / as sysdba
alter system set control_files='+NEW_DG/ORCL/CONTROLFILE/current.992.346677889'
                              ,'+FRA_DG/ORCL/CONTROLFILE/current.???.?????????'
                               scope=spfile sid='*';
shutdown

srvctl start database -d ORCL
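
Once the database is back up, a quick sanity check that it is using the relocated controlfiles:

sqlplus / as sysdba
select name from v$controlfile;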

Moving your spfile within ASM is a two-stage process, which will automatically update your database configuration in Grid Infrastructure:

sqlplus / as sysdba
create  pfile='/tmp/ORCL.ora' from spfile;
create spfile='+NEW_DG'       from pfile='/tmp/ORCL.ora';
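
You can confirm the registered location afterwards (the output format varies a little by version):

srvctl config database -d ORCL | grep -i spfile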

Moving a password file in ASM is straightforward too:
NOTE: you cannot move OMF-named files directly; you must copy files via their alias.

as grid:
asmcmd
cp +DATA_DG/ORCL/orapworcl +NEW_DG/ORCL/orapworcl
exit
as oracle:
srvctl modify database -d ORCL -pwfile +NEW_DG/ORCL/orapworcl
srvctl config database -d ORCL | grep Password
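
From 12.1 onwards you can also confirm the location from inside the database (check this view exists on your release):

select * from v$passwordfile_info;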

If you are using Block Change Tracking, you may need to move that file too:

alter database disable block change tracking;
alter database enable block change tracking using file '+NEW_DG';
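
And check where the new tracking file landed:

select status, filename from v$block_change_tracking;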

Finally, if you’re dropping diskgroups, don’t forget to modify the diskgroup dependencies in Grid Infrastructure so your DB isn’t dependent upon groups you have now removed:

srvctl config database -d ORCL | grep Group
srvctl modify database -d ORCL -diskgroup 'NEW_DG,FRA_DG,other_DG'

I hope you found this quick guide useful.

Adding a DEFAULT column in 12C

I was at a talk recently, and there was an update by Jason Arneil about adding columns to tables with DEFAULT values in Oracle 12C. The NOT NULL restriction has been lifted and now Oracle cleverly intercepts the null value and replaces it with the DEFAULT meta-data without storing it in the table. To repeat the 11G experiment I ran recently:

SQL> alter table ncha.tab1 add (filler_default char(1000) default 'EXPAND' not null);
Table altered.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len 
      from user_tables where table_name = 'TAB1';
TABLE_NAME NUM_ROWS       BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       1504          0        2017


In both releases we then issue:
SQL> alter table ncha.tab1 modify (filler_default null);
Table altered.


IN 11G
SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
      from user_tables where table_name = 'TAB1';

TABLE_NAME NUM_ROWS       BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       3394          0        2017

BUT IN 12C
SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
      from user_tables where table_name = 'TAB1';
TABLE_NAME NUM_ROWS       BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       1504          0        2017

So, as we can see, making the column NULLABLE in 12C didn’t cause it to go through and update every row in the way it must in 11G. It’s still a chained-row update accident waiting to happen, but it’s a more flexible accident 🙂

However, I think it’s worth pointing out that you only get “free data storage” when you add the column. When inserting a record, simply having a column with a DEFAULT value means that the DEFAULT gets physically stored with the record if it is not specified. The meta-data effect is ONLY for subsequently added columns with DEFAULT values.

SQL> create table ncha.tab1 (pk number, c2 timestamp, filler char(1000), filler2 char(1000) DEFAULT 'FILLER2' NOT NULL) pctfree 1;
Table created.

SQL> alter table ncha.tab1 add constraint tab1_pk primary key (pk);
Table altered.

Insert 10,000 rows into the table, but not into FILLER2 with the DEFAULT
SQL> insert into ncha.tab1 (pk, c2, filler) select rownum id, sysdate, 'A' from dual connect by level <= 10000;
commit;
Commit complete.

Gather some stats and have a look after loading the table. Check for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);
PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000       3394          0        2017

For a bit of fun, I thought I would see just how weird the stats might look if I played around with adding defaults.

SQL> drop table ncha.tab1;
Table dropped.

SQL> create table ncha.tab1 (pk number) pctfree 1;
Table created.

SQL> alter table ncha.tab1 add constraint tab1_pk primary key (pk);
Table altered.

Insert 10,000 rows into the table

SQL> insert into ncha.tab1 (pk) select rownum id from dual connect by level <= 10000;
commit;
Commit complete.

Gather some stats and have a look after loading the table. Check for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
  2    from user_tables
  3   where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000         20          0           4

Now let’s add a lot of defaults:
SQL> alter table ncha.tab1 add (filler_1 char(2000) default 'F1' not null, filler_2 char(2000) default 'F2' null, filler_3 char(2000) default 'F3', filler_4 char(2000) default 'how big?' null );
Table altered.

Gather some stats and have a look after adding the column. Check for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
  2    from user_tables
  3   where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000         20          0        8008

10,000 rows with an AVG_ROW_LEN of 8008, all in 20 blocks. Magic!

Just to finish off, let’s update each DEFAULT column so the table expands…

SQL> select filler_1, filler_2, filler_3, filler_4,count(*) from ncha.tab1 group by filler_1,filler_2,filler_3,filler_4;

FILLER_1   FILLER_2   FILLER_3   FILLER_4     COUNT(*)
---------- ---------- ---------- ---------- ----------
F1         F2         F3         how big?        10000

So it's all there. The metadata is intercepting the nulls and converting them to the default on the fly, rather than storing them in the blocks.
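
If you’re curious where that metadata lives, the stored default for added columns sits in an internal data dictionary table, sys.ecol$ (a peek as SYS only; internal and unsupported, and the column names may vary by version):

SQL> select tabobj#, colnum, binarydefval
       from sys.ecol$
      where tabobj# = (select object_id from dba_objects
                        where owner = 'NCHA' and object_name = 'TAB1'
                          and object_type = 'TABLE');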
So what happens if we actually UPDATE the table?

SQL> update ncha.tab1 set filler_1 = 'EXPAND', filler_2 = 'EXPAND', filler_3='EXPAND', filler_4='THIS BIG!';
10000 rows updated.

SQL> select filler_1, filler_2, filler_3, filler_4,count(*) from ncha.tab1 group by filler_1,filler_2,filler_3,filler_4;

FILLER_1   FILLER_2   FILLER_3   FILLER_4     COUNT(*)
---------- ---------- ---------- ---------- ----------
EXPAND     EXPAND     EXPAND     THIS BIG!       10000

Gather some stats and have a look after the update, checking for chained rows at the same time.
SQL> exec dbms_stats.gather_table_stats('NCHA','TAB1',null,100);

PL/SQL procedure successfully completed.

SQL> select table_name,num_rows,blocks,avg_space,avg_row_len
     from user_tables where table_name = 'TAB1';

TABLE_NAME   NUM_ROWS     BLOCKS  AVG_SPACE AVG_ROW_LEN
---------- ---------- ---------- ---------- -----------
TAB1            10000      19277          0        8010
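
The CHAINED_ROWS table used below is the standard one; if it doesn’t already exist, create it first with the script shipped with the database:

SQL> @?/rdbms/admin/utlchain.sql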

SQL> analyze table ncha.tab1 list chained rows into chained_rows;

Table analyzed.

SQL> select count(*) CHAINED_ROWS from chained_rows;

CHAINED_ROWS
------------
       10000

Yep. That’s bigger.