SAN Migration and moving files with ASM

Here’s a quick post to help with LUN, SAN or File migration when using Oracle ASM.

Please note that this is just an example method, and is not a definitive guide.
You need to ensure your approach is appropriate for your environment.

Did you know that you can add and drop disks on a diskgroup in a single command?
This saves Oracle from performing two REBAL actions and speeds things up considerably, using far fewer resources.

sqlplus / as sysasm

ALTER DISKGROUP data
  ADD  DISK 'ORCL:disk101','ORCL:disk102'
  DROP DISK 'disk042'     ,'disk043'
  REBALANCE POWER n;

Diskgroup altered.
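
If the rebalance is dragging, you can change the power of a running rebalance on the fly. A quick sketch (with compatible.asm at 11.2.0.2 or higher the power can go up to 1024; below that the maximum is 11):

alter diskgroup data rebalance power 8;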

You can monitor the REBAL operation:

select * from v$asm_operation;

GROUP_NUMBER OPERA PASS      STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE CON_ID
------------ ----- --------- ---- ----- ------ ----- -------- -------- ----------- ---------- ------
           6 REBAL REBALANCE  RUN     1      1    68    20397     3030           8                 0
           6 REBAL COMPAT    WAIT     1      1     0        0        0           0                 0
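
Once the rebalance completes, the v$asm_operation rows disappear. Before unpresenting the old LUNs from the host, it's worth a quick check that the dropped disks really have left the diskgroup; a minimal sketch:

select path, name, group_number, header_status, mode_status
from   v$asm_disk
order  by group_number, path;

The dropped disks should show GROUP_NUMBER = 0 and HEADER_STATUS = FORMER.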

If you are just migrating from one SAN to another, without needing to move files between diskgroups, this is a really easy way to achieve it.

From Oracle 12.1 onwards, if you are moving datafiles from one diskgroup to a new diskgroup, that’s straightforward too:

export ORACLE_SID=ORCL
sqlplus  / as sysdba

alter database move datafile '+DATA_DG/ORCL/DATAFILE/system.101.902468275' to '+NEW_DG';
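
The move runs fully online, and you can follow its progress in v$session_longops; a quick sketch:

select sid, sofar, totalwork, round(sofar/totalwork*100,1) pct_done
from   v$session_longops
where  opname like 'Online data file move%'
and    totalwork > 0 and sofar < totalwork;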

Moving logfiles involves a drop and re-create, but it is still a fully online operation.
Take care if using DataGuard that you do not try to drop logfiles before they are archived and applied to the standby:

alter database  add logfile member '+NEW_DG' to group 1;
alter database drop logfile member '+DATA_DG/ORCL/ONLINELOG/group_1.102.902049284';
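
Before dropping the old member, check that its group is neither CURRENT nor ACTIVE (and, with DataGuard, that it has been applied to the standby); the checks I'd sketch out:

select group#, status from v$log;
-- if the group you want to touch is CURRENT or ACTIVE:
alter system switch logfile;
alter system checkpoint;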

Moving tempfiles can be tricky, as the default temp tablespace gets used very soon after startup.

You need to add a new tempfile, then get to a point where no processes are using the old tempfile, before you can drop it (check gv$sort_usage, as shown below). Worst-case, this may involve a SHUTDOWN, then a STARTUP RESTRICT to stop processes connecting. Alternatively, create a new default TEMP tablespace and convince processes to use that one.
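
A sketch of finding which sessions are still holding temp segments, joining gv$sort_usage back to gv$session:

select s.inst_id, s.sid, s.serial#, s.username, u.tablespace, u.blocks
from   gv$session s
join   gv$sort_usage u on u.session_addr = s.saddr
                      and u.inst_id      = s.inst_id
order  by u.blocks desc;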

alter tablespace temp add  tempfile '+NEW_DG';
alter tablespace temp drop tempfile '+DATA_DG/ORCL/TEMPFILE/temp.204.992482742';

Moving controlfiles requires a stop and start of the database; there is no online option here. I tend to do the following:

srvctl stop database -d ORCL

sqlplus / as sysdba
startup nomount
show parameter control_files


rman target /
restore controlfile to '+NEW_DG' from '+DATA_DG/ORCL/CONTROLFILE/current.291.939274434';

sqlplus / as sysdba
alter system set control_files='+NEW_DG/ORCL/CONTROLFILE/current.992.346677889'
                              ,'+FRA_DG/ORCL/CONTROLFILE/current.???.?????????'
                               scope=spfile sid='*';
shutdown

srvctl start database -d ORCL
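
Once the database is back up, confirm it is reading the new copies:

select name from v$controlfile;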

Moving your spfile within ASM is a 2-stage process, which will automatically update your database configuration in Grid Infrastructure:

sqlplus / as sysdba
create  pfile='/tmp/ORCL.ora' from spfile;
create spfile='+NEW_DG'       from pfile='/tmp/ORCL.ora';
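
You can check that the clusterware config has picked up the new location (srvctl reports the spfile the database will use at its next start):

srvctl config database -d ORCL | grep -i spfile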

Moving a password file in ASM is straightforward too:
NOTE: you are not allowed to copy OMF file names directly; you must copy files via their alias.

as grid:
asmcmd
cp +DATA_DG/ORCL/orapworcl +NEW_DG/ORCL/orapworcl
exit

as oracle:
srvctl modify database -d ORCL -pwfile +NEW_DG/ORCL/orapworcl
srvctl config database -d ORCL | grep Password

If you are using Block Change Tracking, you may need to move that file too:

alter database disable block change tracking;
alter database enable block change tracking using file '+NEW_DG';
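
And confirm where the new tracking file landed:

select status, filename from v$block_change_tracking;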

Finally, if you’re dropping disk groups, don’t forget to modify the disk group dependencies in Grid Infra so your DB isn’t dependent upon groups you have now removed:

srvctl config database -d ORCL | grep Group
srvctl modify database -d ORCL -diskgroup 'NEW_DG,FRA_DG,other_DG'
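
Before actually dropping the old diskgroup, check that nothing has been left behind in it. A sketch, assuming the old group is still called DATA_DG:

as grid:
asmcmd ls -l +DATA_DG

or, from sqlplus on the ASM instance:
select count(*)
from   v$asm_file f, v$asm_diskgroup g
where  f.group_number = g.group_number
and    g.name = 'DATA_DG';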

I hope you found this quick guide useful.


Oracle on 4096 (4k) sector disks doesn’t work (ish)

I recently came across 4K (4096-byte) sector drives. They are a fairly new thing, and have come about so drives can exceed the 2TB limit imposed by having 512-byte sectors. The details behind this are written up elsewhere in far greater detail than I need to understand.

What I do understand is that Oracle doesn’t deal with 4K sectors (4Kn) very well, and it shows up in a couple of ways. Don’t get me wrong: from Oracle 11.2, 4Kn databases are supported, albeit with some caveats. Here are 2 of them:
 
1. ACFS doesn’t like 4K sectors. There’s some fudging around identifying physical v logical 4K sectors, but you need to check out the ASMLib parameter “ORACLEASM_USE_LOGICAL_BLOCK_SIZE” to see if you can get it to work for you.
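
For reference, the setting lives in /etc/sysconfig/oracleasm, and recent oracleasm-support versions let you flip it with the configure tool. A sketch, as root (check the ASMLib documentation for your version; the -b flag is my assumption of the usual way to select the logical block size):

# show current ASMLib settings, including the block size behaviour
oracleasm configure
grep LOGICAL /etc/sysconfig/oracleasm
# present the 512-byte logical sector size instead of the 4K physical one
oracleasm configure -b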
 
2. I was installing 12.1.0.2.0 Grid Infrastructure – pretty recent, I hear you all say! That only came out in July 2014. One important aspect of 12.1.0.2.0 is that the management database was migrated from being a Berkeley DB to an Oracle single-instance CDB with a single PDB. It’s called “-MGMTDB”. (This was optional prior to 12.1.0.2.)

However, when installing 12.1.0.2.0 Grid Infrastructure, it gets to the bit at the end, after everything is kind-of fully installed, and creates the -MGMTDB. If you have 4K sector disks in ASM, this fails rather cryptically:

The key line is the ORA-15173 error against the spfile in the output below. It isn’t obviously the problem, but it is the cause.

CRS-2674: Start of 'ora.mgmtdb' on 'server01' failed
[Thread-102] [ 2015-07-01 15:35:29.079 BST ] [HADatabaseUtils.start:1240]
Error starting mgmt database in local node, PRCR-1013 : Failed to start resource ora.mgmtdb
PRCR-1064 : Failed to start resource ora.mgmtdb on node server01
CRS-5017: The resource action "ora.mgmtdb start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA_DG/_mgmtdb/spfile-MGMTDB.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA_DG/_mgmtdb/spfile-MGMTDB.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA_DG/_mgmtdb/spfile-mgmtdb.ora
ORA-15173: entry 'spfile-mgmtdb.ora' does not exist in directory '_mgmtdb'
ORA-06512: at line 4
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/server01/crs/trace/crsd_oraagent_grid.trc".
KJHA:2phase clscrs_flag:840 instSid:
KJHA:2phase ctx 2 clscrs_flag:840 instSid:-MGMTDB
KJHA:2phase clscrs_flag:840 dbname:
KJHA:2phase ctx 2 clscrs_flag:840 dbname:_mgmtdb
KJHA:2phase WARNING!!! Instance:-MGMTDB of kspins type:1 does not support 2 phase CRS

The fundamental problem is that, if you have 4K sectors and are using ASM, having your SPFILE in ASM doesn’t work. This was spotted in 11.2.0.3 (Doc: 16870214.8) but wasn’t fixed in 11.2.0.4.0 (it’s fixed by 11.2.0.4.6, possibly earlier), and it’s not fixed in the 12.1.0.2.0 base release, which means the -MGMTDB will always fail to create. It is fixed by patch set update 12.1.0.2.3 (patch 20485724).
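
To check whether you are exposed, ask ASM what sector size each diskgroup presents; a quick sketch:

select name, sector_size from v$asm_diskgroup;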

However, you’ve then got a broken -MGMTDB, which you’ll need to recreate: [Doc ID 1589394.1]

## Stop and disable ora.crf resource.
## On each node, as root user:
crsctl stop res ora.crf -init
crsctl modify res ora.crf -attr ENABLED=0 -init
## Set the GI HOME
export GI_HOME=/u01/app/grid/12.1
## As Grid User, locate the node that the Management Database is running on by executing:

$GI_HOME/bin/srvctl status mgmtdb
## Issue the DBCA command to delete the (broken) management database:
$GI_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
## Rebuild mgmt
## As Grid User on any node, execute the following DBCA command with the desired <DG Name>:
dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -gdbName _mgmtdb -storageType ASM -diskGroupName DATA01 -datafileJarLocation $GI_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
## Create a PDB within the MGMTDB using DBCA.
## As Grid User on any node execute the following DBCA command:
## NOTE: The CLUSTER_NAME needs to have any hyphens (“-“) replaced with underscores (“_”)

dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName <MY_CLUSTER_NAME_HERE> -createPDBFrom RMANBACKUP -PDBBackUpfile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
## Secure the Management Database credentials.
## As Grid User, confirm the node on which MGMTDB is running by executing:
$GI_HOME/bin/srvctl status mgmtdb
Database is enabled
 Instance -MGMTDB is running on node <NODE_NAME>
## then, on <NODE_NAME>, secure the credentials:
$GI_HOME/bin/mgmtca
## Enable and start ora.crf resource.
## On each node, as root user:

$GI_HOME/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
$GI_HOME/bin/crsctl start res ora.crf -init

Good luck. And don’t use 4K sector sizes; it probably means your spindles are too big anyway. If “disk is cheap”, why do they have to keep buying such large-capacity spindles, with such low IOPS-per-GB, for such huge quantities of money?