Tuesday 11 March 2014

Minimal downtime rolling database upgrade to 12c Release 1


This note describes the procedure used to perform a rolling database upgrade from 11.2.0.3 to Oracle 12c Release 1 using a Data Guard physical standby database and transient logical standby database.
The average time to perform a database upgrade is in the region of one to two hours, and for many organizations even that amount of downtime is not acceptable, or a database outage of that length carries a significant financial cost.
The rolling upgrade procedure reduces the downtime for an upgrade from hours to a few minutes, which is roughly the time it takes to perform a database switchover.
At a high level, these are the steps involved in the rolling upgrade process:
  • Start with the 11.2.0.3 Data Guard physical standby database and convert it to a transient logical standby database. Users are still connected to the primary database.
  • Upgrade the transient logical standby database to 12.1.0.1
  • The transient logical standby process uses SQL Apply to take redo generated by a database running a lower Oracle version (11.2.0.3) and apply it to a standby database running a higher Oracle version (12.1.0.1)
  • Perform a switchover so that the original primary database now becomes a physical standby database
  • Use Redo Apply to synchronize (and upgrade) the original primary database with the new upgraded primary database
  • Perform another switchover to revert the databases to their former roles.

Oracle provides a Bourne shell script (physru) which automates most of the rolling upgrade process and is available for download from MOS via the note – Database Rolling Upgrade Shell Script (Doc ID 949322.1).
The DBA only has a few tasks to perform, as the physru script handles the rest of the rolling upgrade process.
  •  Upgrade the standby database using DBUA or manual upgrade.
  • Start the upgraded standby database in the new Oracle 12c home
  • Start the original primary database in the new Oracle 12c home

The physru script accepts six parameters as shown below.
$ ./physru <sysdba user> <primary TNS alias> <physical standby TNS alias> <primary db unique name> <physical standby db unique name> <target version>
We need to provide the SYSDBA password, and we can run the script from either the primary database server or from the node hosting the standby database, as long as SQL*Net connectivity is available from that node to both databases involved in the rolling upgrade.
We need to execute the script three times; let us see what happens at each stage.

First execution
Creates control file backups for both the primary and the target physical standby database.
Creates Guaranteed Restore Points (GRPs) on both the primary database and the physical standby database that can be used to flash back to the beginning of the process or to any intermediate step along the way (a quick query to view these restore points is shown below).
Converts the physical standby into a transient logical standby database.
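
As noted above, the Guaranteed Restore Points created by the script can be viewed on either database with a standard query (a convenience check only, not part of the physru procedure):
SQL> select name, time, guarantee_flashback_database from v$restore_point;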

Second execution

Uses SQL Apply to synchronize the transient logical standby database and make it current with the primary.
Performs a switchover to the upgraded 12c transient logical standby, so the standby database becomes the primary.
Flashes back the original primary database to the initial Guaranteed Restore Point and converts the original primary into a physical standby.

Third execution

Starts Redo Apply on the new physical standby database (the original primary database) to apply all redo that has been generated during the rolling upgrade process, including any SQL statements that have been executed on the transient logical standby as part of the upgrade.
When synchronized, the script offers the option of performing a final switchover to return the databases to their original roles of primary and standby, but now on the new 12c database software version.
Removes all Guaranteed Restore Points.

Prerequisites

A Data Guard primary and physical standby database environment exists
Flashback Database is enabled on both the Primary and the Standby database (a quick verification sketch for this and the next two items follows this list)
If Data Guard Broker is managing the configuration, it has to be disabled for the duration of the upgrade process (by setting the initialization parameter DG_BROKER_START=FALSE)
Ensure that log transport (initialization parameter LOG_ARCHIVE_DEST_n) is correctly configured to perform a switchover from the primary database to the target physical standby database and back
Static entries are defined in the listener.ora file on both the Primary and the Standby database nodes for the databases directly involved in the rolling upgrade process
Oracle 12.1.0.1.0 software has already been installed on both the primary and the standby database servers
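
As mentioned above, a quick check of the flashback, broker and log transport prerequisites can be done with standard queries on both databases (a convenience sketch only, not part of the physru procedure itself):
SQL> select flashback_on from v$database;
SQL> show parameter dg_broker_start
SQL> show parameter log_archive_dest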

Let us now see an example.

In this case the primary database is TESTDB and the physical standby database is TESTDBS.
The DB_UNIQUE_NAME of the primary and the standby is also TESTDB and TESTDBS respectively.
The original version is 11.2.0.3 and we are upgrading to 12.1.0.1.
We have enabled Flashback Database on both the Primary and the Standby database, added static entries in the listener.ora on both sites, and then reloaded the listener.
For example, on the Primary site:

(SID_DESC=
(SID_NAME=testdb)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
(GLOBAL_DBNAME=testdb)
)

And on the Standby site:

(SID_DESC=
(SID_NAME=testdb)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
(GLOBAL_DBNAME=testdbs)
)
The tnsnames.ora on both the Primary and the Standby sites has entries for TESTDB and TESTDBS.
Important – before starting the operation, do a tnsping from both sites and ensure that the TNS aliases are being resolved.
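For example, from each site, using the TNS aliases in this configuration:
$ tnsping testdb
$ tnsping testdbs
Both should complete with an OK status and show the expected host and port.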

Stop managed recovery and shut down the Standby database.
Mount the standby database.
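On the standby, the usual sequence for this is (shown here as a convenience; run from the existing 11g home):
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate;
SQL> startup mount;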

Now run physru script – Execution One 
Note – we can run the script from either the Primary or the Standby site, but in this example we are running it from the Primary site for all three executions of the script.

[oracle@kens-orasql-001-test ~]$ ./physru SYS testdb testdbs testdb testdbs 12.1.0.1.0
Please enter the sysdba password:
### Initialize script to either start over or resume execution
Jul 31 08:09:06 2013 [0-1] Identifying rdbms software version
Jul 31 08:09:06 2013 [0-1] database testdb is at version 11.2.0.3.0
Jul 31 08:09:06 2013 [0-1] database testdbs is at version 11.2.0.3.0
Jul 31 08:09:07 2013 [0-1] verifying flashback database is enabled at testdb and testdbs
Jul 31 08:09:07 2013 [0-1] verifying available flashback restore points
Jul 31 08:09:07 2013 [0-1] verifying DG Broker is disabled
Jul 31 08:09:07 2013 [0-1] looking up prior execution history
Jul 31 08:09:07 2013 [0-1] purging script execution state from database testdb
Jul 31 08:09:07 2013 [0-1] purging script execution state from database testdbs
Jul 31 08:09:07 2013 [0-1] starting new execution of script

### Stage 1: Backup user environment in case rolling upgrade is aborted
Jul 31 08:09:08 2013 [1-1] stopping media recovery on testdbs
Jul 31 08:09:09 2013 [1-1] creating restore point PRU_0000_0001 on database testdbs
Jul 31 08:09:09 2013 [1-1] backing up current control file on testdbs
Jul 31 08:09:09 2013 [1-1] created backup control file /u01/app/oracle/product/11.2.0/dbhome_2/dbs/PRU_0001_testdbs_f.f
Jul 31 08:09:09 2013 [1-1] creating restore point PRU_0000_0001 on database testdb
Jul 31 08:09:09 2013 [1-1] backing up current control file on testdb
Jul 31 08:09:09 2013 [1-1] created backup control file /u01/app/oracle/product/11.2.0/dbhome_2/dbs/PRU_0001_testdb_f.f

NOTE: Restore point PRU_0000_0001 and backup control file PRU_0001_testdbs_f.f
      can be used to restore testdbs back to its original state as a
      physical standby, in case the rolling upgrade operation needs to be aborted
      prior to the first switchover done in Stage 4.

### Stage 2: Create transient logical standby from existing physical standby
Jul 31 08:12:43 2013 [2-1] verifying RAC is disabled at testdbs
Jul 31 08:12:43 2013 [2-1] verifying database roles
Jul 31 08:12:43 2013 [2-1] verifying physical standby is mounted
Jul 31 08:12:43 2013 [2-1] verifying database protection mode
Jul 31 08:12:43 2013 [2-1] verifying transient logical standby datatype support

WARN: Objects have been identified on the primary database which will not be
      replicated on the transient logical standby.  The complete list of
      objects and their associated unsupported datatypes can be found in the
      dba_logstdby_unsupported view.  For convenience, this script has written
      the contents of this view to a file - physru_unsupported.log.

      Various options exist to deal with these objects such as:
        - disabling applications that modify these objects
        - manually resolving these objects after the upgrade
        - extending support to these objects (see metalink note: 559353.1)

      If you need time to review these options, you should enter 'n' to exit
      the script.  Otherwise, you should enter 'y' to continue with the
      rolling upgrade.

Are you ready to proceed with the rolling upgrade? (y/n): y

Jul 31 08:13:37 2013 [2-1] continuing
Jul 31 08:13:37 2013 [2-2] starting media recovery on testdbs
Jul 31 08:13:43 2013 [2-2] confirming media recovery is running
Jul 31 08:13:45 2013 [2-2] waiting for v$dataguard_stats view to initialize
Jul 31 08:13:51 2013 [2-2] waiting for apply lag on testdbs to fall below 30 seconds
Jul 31 08:13:51 2013 [2-2] apply lag is now less than 30 seconds
Jul 31 08:13:52 2013 [2-2] stopping media recovery on testdbs
Jul 31 08:13:53 2013 [2-2] executing dbms_logstdby.build on database testdb
Jul 31 08:14:00 2013 [2-2] converting physical standby into transient logical standby
Jul 31 08:14:06 2013 [2-3] opening database testdbs
Jul 31 08:14:10 2013 [2-4] configuring transient logical standby parameters for rolling upgrade
Jul 31 08:14:10 2013 [2-4] starting logical standby on database testdbs
Jul 31 08:14:16 2013 [2-4] waiting until logminer dictionary has fully loaded
Jul 31 08:16:28 2013 [2-4] dictionary load 42% complete
Jul 31 08:16:38 2013 [2-4] dictionary load 74% complete
Jul 31 08:16:48 2013 [2-4] dictionary load 75% complete
Jul 31 08:21:30 2013 [2-4] dictionary load is complete
Jul 31 08:21:31 2013 [2-4] waiting for v$dataguard_stats view to initialize
Jul 31 08:21:37 2013 [2-4] waiting for apply lag on testdbs to fall below 30 seconds
Jul 31 08:22:08 2013 [2-4] current apply lag: 265
Jul 31 08:22:38 2013 [2-4] current apply lag: 295
Jul 31 08:23:08 2013 [2-4] current apply lag: 325
Jul 31 08:23:38 2013 [2-4] current apply lag: 355
Jul 31 08:36:40 2013 [2-4] apply lag is now less than 30 seconds

NOTE: Database testdbs is now ready to be upgraded.  This script has left the
      database open in case you want to perform any further tasks before
      upgrading the database.  Once the upgrade is complete, the database must
      opened in READ WRITE mode before this script can be called to resume the
      rolling upgrade.

NOTE: If testdbs was previously a RAC database that was disabled, it may be
      reverted back to a RAC database upon completion of the rdbms upgrade.
      This can be accomplished by performing the following steps:

          1) On instance testdb, set the cluster_database parameter to TRUE.
          eg: SQL> alter system set cluster_database=true scope=spfile;

          2) Shutdown instance testdb.
          eg: SQL> shutdown abort;

          3) Startup and open all instances for database testdbs.
          eg: srvctl start database -d testdbs
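
The WARN message in Stage 2 above refers to the dba_logstdby_unsupported view (the script also writes its contents to physru_unsupported.log). To review the flagged objects yourself, a standard query on the primary along these lines will list them:
SQL> select owner, table_name, column_name, data_type from dba_logstdby_unsupported order by owner, table_name;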

If we connect to the standby database, we can now see that the role has been changed from PHYSICAL STANDBY to LOGICAL STANDBY.
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
LOGICAL STANDBY

Now start the 12c database upgrade on the standby database. Have a look at this post, which discusses the upgrade to 12.1.0.1.0 using DBUA: http://gavinsoorma.com/2013/07/12c-database-upgrade-11-2-0-3-to-12-1-0-1-upgrade-using-dbua/
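If you are using DBUA, it is simply launched from the new 12c Oracle Home (GUI mode shown below; the home path is the one used elsewhere in this example):
$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/dbua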
Note that users are still connected to the primary database and it is business as usual.
Make some changes to the Primary database while the standby database upgrade is in progress.
SQL> update customers set cust_city='Dubai' where rownum < 10001;
10000 rows updated.
SQL> commit;
Commit complete.
SQL> create table mycustomers as select * from customers;
Table created.

After the 12c upgrade is completed, we need to update the static entry we made in the listener.ora to point to the 12c Oracle Home and then reload the listener.
For example, this is the change we made in the listener.ora (the reload command is shown after the entry):
SID_LIST_LISTENER12C =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = testdbs)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
(SID_NAME = testdb)
)
)
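To reload the listener, the usual lsnrctl command applies (the listener name LISTENER12C below is an assumption based on the SID_LIST_LISTENER12C entry above; substitute your own listener name):
$ lsnrctl reload LISTENER12C
$ lsnrctl status LISTENER12C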

After the upgrade, we connect to the transient logical standby database, which is now running on 12c, and run the ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE command. Ensure the database is open in READ WRITE mode.
[oracle@kens-orasql-001-dev admin]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Aug 1 07:12:41 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Database altered.

SQL>

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

Now run the physru script again - Execution Two
[oracle@kens-orasql-001-test ~]$ ./physru SYS testdb testdbs testdb testdbs 12.1.0.1.0
Please enter the sysdba password:
### Initialize script to either start over or resume execution
Aug 01 08:06:03 2013 [0-1] Identifying rdbms software version
Aug 01 08:06:03 2013 [0-1] database testdb is at version 11.2.0.3.0
Aug 01 08:06:03 2013 [0-1] database testdbs is at version 12.1.0.1.0
Aug 01 08:06:04 2013 [0-1] verifying flashback database is enabled at testdb and testdbs
Aug 01 08:06:04 2013 [0-1] verifying available flashback restore points
Aug 01 08:06:04 2013 [0-1] verifying DG Broker is disabled
Aug 01 08:06:05 2013 [0-1] looking up prior execution history
Aug 01 08:06:05 2013 [0-1] last completed stage [2-4] using script version 0001
Aug 01 08:06:05 2013 [0-1] resuming execution of script

### Stage 3: Validate upgraded transient logical standby
Aug 01 08:06:05 2013 [3-1] database testdbs is no longer in OPEN MIGRATE mode
Aug 01 08:06:05 2013 [3-1] database testdbs is at version 12.1.0.1.0

### Stage 4: Switch the transient logical standby to be the new primary
Aug 01 08:06:06 2013 [4-1] waiting for testdbs to catch up (this could take a while)
Aug 01 08:06:07 2013 [4-1] waiting for v$dataguard_stats view to initialize
Aug 01 08:06:07 2013 [4-1] waiting for apply lag on testdbs to fall below 30 seconds
Aug 01 08:06:07 2013 [4-1] apply lag is now less than 30 seconds
Aug 01 08:06:07 2013 [4-2] switching testdb to become a logical standby
Aug 01 08:06:13 2013 [4-2] testdb is now a logical standby
Aug 01 08:06:13 2013 [4-3] waiting for standby testdbs to process end-of-redo from primary
Aug 01 08:06:14 2013 [4-4] switching testdbs to become the new primary
Aug 01 08:06:18 2013 [4-4] testdbs is now the new primary

### Stage 5: Flashback former primary to pre-upgrade restore point and convert to physical
Aug 01 08:06:19 2013 [5-1] shutting down database testdb
Aug 01 08:06:28 2013 [5-1] mounting database testdb
Aug 01 08:06:34 2013 [5-2] flashing back database testdb to restore point PRU_0000_0001
Aug 01 08:06:37 2013 [5-3] converting testdb into physical standby
Aug 01 08:06:39 2013 [5-4] shutting down database testdb

NOTE: Database testdb has been shutdown, and is now ready to be started
      using the newer version Oracle binary.  This script requires the
      database to be mounted (on all active instances, if RAC) before calling
      this script to resume the rolling upgrade.

The transient logical standby database has now been converted to a Data Guard primary database.
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

We now prepare the original primary database for the upgrade to 12c. The application is now running from the Standby site.
Change the static listener.ora entry to point to the 12c Oracle Home (or create a new 12c listener in addition to the 11g one) and then reload the listener:
(SID_DESC=
(SID_NAME=testdb)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
(GLOBAL_DBNAME=testdb)
)
Copy the spfile, init.ora and password file for TESTDB from the 11g Oracle Home to the 12c Oracle Home.
Copy the tnsnames.ora file from the 11g $ORACLE_HOME/network/admin to the 12c $ORACLE_HOME/network/admin.
Change the /etc/oratab entry for TESTDB to point to the new Oracle 12c home.
Mount the TESTDB database (now the standby database) from the new Oracle 12c home.
Connect to both databases TESTDB and TESTDBS and ensure that the parameters 'log_archive_dest_state_1' and 'log_archive_dest_state_2' are both set to ENABLE. A sketch of these preparation steps follows.
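A minimal sketch of these preparation steps, assuming the default spfile/init/password file names under $ORACLE_HOME/dbs and the Oracle Home paths used in this example:
$ export OLD_HOME=/u01/app/oracle/product/11.2.0/dbhome_2
$ export NEW_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
$ cp $OLD_HOME/dbs/spfiletestdb.ora $OLD_HOME/dbs/inittestdb.ora $OLD_HOME/dbs/orapwtestdb $NEW_HOME/dbs/
$ cp $OLD_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin/
$ # edit /etc/oratab so the testdb entry points to the 12c home
$ export ORACLE_HOME=$NEW_HOME ORACLE_SID=testdb PATH=$NEW_HOME/bin:$PATH
$ sqlplus / as sysdba
SQL> startup mount;
SQL> show parameter log_archive_dest_state
Both log_archive_dest_state_1 and log_archive_dest_state_2 should show ENABLE before proceeding.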

Now the third and final execution of the physru script!
The application is still connected to TESTDBS and database changes are being performed:
SQL> update mycustomers set cust_city='Timbuktu';
55500 rows updated.
SQL> commit;
Commit complete.
[oracle@kens-orasql-001-test ~]$ ./physru SYS testdb testdbs testdb testdbs 12.1.0.1.0
Please enter the sysdba password:
### Initialize script to either start over or resume execution
Aug 01 08:34:04 2013 [0-1] Identifying rdbms software version
Aug 01 08:34:04 2013 [0-1] database testdb is at version 12.1.0.1.0
Aug 01 08:34:04 2013 [0-1] database testdbs is at version 12.1.0.1.0
Aug 01 08:34:05 2013 [0-1] verifying flashback database is enabled at testdb and testdbs
Aug 01 08:34:06 2013 [0-1] verifying available flashback restore points
Aug 01 08:34:06 2013 [0-1] verifying DG Broker is disabled
Aug 01 08:34:06 2013 [0-1] looking up prior execution history
Aug 01 08:34:07 2013 [0-1] last completed stage [5-4] using script version 0001
Aug 01 08:34:07 2013 [0-1] resuming execution of script

### Stage 6: Run media recovery through upgrade redo
Aug 01 08:34:08 2013 [6-1] upgrade redo region identified as scn range [1306630, 3089888]
Aug 01 08:34:08 2013 [6-1] starting media recovery on testdb
Aug 01 08:34:14 2013 [6-1] confirming media recovery is running
Aug 01 08:34:15 2013 [6-1] waiting for media recovery to initialize v$recovery_progress
Aug 01 08:42:49 2013 [6-1] monitoring media recovery's progress
Aug 01 08:42:49 2013 [6-2] last applied scn 1295902 is approaching upgrade redo start scn 1306630
Aug 01 08:47:23 2013 [6-3] recovery of upgrade redo at 01% - estimated complete at Aug 01 12:17:43
Aug 01 08:47:39 2013 [6-3] recovery of upgrade redo at 03% - estimated complete at Aug 01 10:49:51
Aug 01 08:47:54 2013 [6-3] recovery of upgrade redo at 04% - estimated complete at Aug 01 10:28:12
Aug 01 08:48:09 2013 [6-3] recovery of upgrade redo at 08% - estimated complete at Aug 01 09:46:20
Aug 01 08:48:24 2013 [6-3] recovery of upgrade redo at 11% - estimated complete at Aug 01 09:29:23
Aug 01 08:48:40 2013 [6-3] recovery of upgrade redo at 13% - estimated complete at Aug 01 09:27:03
Aug 01 08:49:10 2013 [6-3] recovery of upgrade redo at 15% - estimated complete at Aug 01 09:23:47
Aug 01 08:49:26 2013 [6-3] recovery of upgrade redo at 18% - estimated complete at Aug 01 09:18:34
Aug 01 08:49:41 2013 [6-3] recovery of upgrade redo at 21% - estimated complete at Aug 01 09:14:18
Aug 01 08:49:56 2013 [6-3] recovery of upgrade redo at 23% - estimated complete at Aug 01 09:13:24
Aug 01 08:50:11 2013 [6-3] recovery of upgrade redo at 24% - estimated complete at Aug 01 09:12:35
Aug 01 08:50:27 2013 [6-3] recovery of upgrade redo at 26% - estimated complete at Aug 01 09:11:22
Aug 01 08:50:42 2013 [6-3] recovery of upgrade redo at 30% - estimated complete at Aug 01 09:09:03
Aug 01 08:50:57 2013 [6-3] recovery of upgrade redo at 32% - estimated complete at Aug 01 09:07:51
Aug 01 08:51:12 2013 [6-3] recovery of upgrade redo at 36% - estimated complete at Aug 01 09:06:05
Aug 01 08:51:28 2013 [6-3] recovery of upgrade redo at 40% - estimated complete at Aug 01 09:04:32
Aug 01 08:51:43 2013 [6-3] recovery of upgrade redo at 41% - estimated complete at Aug 01 09:04:37
Aug 01 08:51:58 2013 [6-3] recovery of upgrade redo at 43% - estimated complete at Aug 01 09:03:58
Aug 01 08:52:14 2013 [6-3] recovery of upgrade redo at 44% - estimated complete at Aug 01 09:03:58
Aug 01 08:52:29 2013 [6-3] recovery of upgrade redo at 47% - estimated complete at Aug 01 09:03:15
Aug 01 08:52:44 2013 [6-3] recovery of upgrade redo at 50% - estimated complete at Aug 01 09:02:46
Aug 01 08:53:00 2013 [6-3] recovery of upgrade redo at 55% - estimated complete at Aug 01 09:01:05
Aug 01 08:53:15 2013 [6-3] recovery of upgrade redo at 75% - estimated complete at Aug 01 08:56:40
Aug 01 08:53:30 2013 [6-3] recovery of upgrade redo at 79% - estimated complete at Aug 01 08:56:19
Aug 01 08:53:45 2013 [6-3] recovery of upgrade redo at 82% - estimated complete at Aug 01 08:56:11
Aug 01 08:54:01 2013 [6-3] recovery of upgrade redo at 84% - estimated complete at Aug 01 08:56:09
Aug 01 08:54:16 2013 [6-3] recovery of upgrade redo at 86% - estimated complete at Aug 01 08:56:06
Aug 01 08:54:31 2013 [6-3] recovery of upgrade redo at 88% - estimated complete at Aug 01 08:56:03
Aug 01 08:54:46 2013 [6-3] recovery of upgrade redo at 90% - estimated complete at Aug 01 08:56:04
Aug 01 08:55:02 2013 [6-4] media recovery has finished recovering through upgrade

### Stage 7: Switch back to the original roles prior to the rolling upgrade

NOTE: At this point, you have the option to perform a switchover
     which will restore testdb back to a primary database and
     testdbs back to a physical standby database.  If you answer 'n'
     to the question below, testdb will remain a physical standby
     database and testdbs will remain a primary database.

Do you want to perform a switchover? (y/n): y

Aug 01 08:55:42 2013 [7-1] continuing
Aug 01 08:55:44 2013 [7-2] waiting for v$dataguard_stats view to initialize
Aug 01 08:55:44 2013 [7-2] waiting for apply lag on testdb to fall below 30 seconds
Aug 01 08:55:44 2013 [7-2] apply lag is now less than 30 seconds
Aug 01 08:55:45 2013 [7-3] switching testdbs to become a physical standby
Aug 01 08:55:48 2013 [7-3] testdbs is now a physical standby
Aug 01 08:55:48 2013 [7-3] shutting down database testdbs
Aug 01 08:55:49 2013 [7-3] mounting database testdbs
Aug 01 08:55:57 2013 [7-4] waiting for standby testdb to process end-of-redo from primary
Aug 01 08:55:59 2013 [7-5] switching testdb to become the new primary
Aug 01 08:55:59 2013 [7-5] testdb is now the new primary
Aug 01 08:55:59 2013 [7-5] opening database testdb
Aug 01 08:56:05 2013 [7-6] starting media recovery on testdbs
Aug 01 08:56:11 2013 [7-6] confirming media recovery is running

### Stage 8: Statistics
script start time:                                           31-Jul-13 07:20:51
script finish time:                                          01-Aug-13 08:07:09
total script execution time:                                       +01 00:46:18
wait time for user upgrade:                                        +00 23:28:40
active script execution time:                                      +00 01:17:38
transient logical creation start time:                       31-Jul-13 07:25:18
transient logical creation finish time:                      31-Jul-13 07:25:48
primary to logical switchover start time:                    01-Aug-13 07:17:03
logical to primary switchover finish time:                   01-Aug-13 07:17:15
primary services offline for:                                      +00 00:00:12
total time former primary in physical role:                        +00 00:48:56
time to reach upgrade redo:                                        +00 00:00:16
time to recover upgrade redo:                                      +00 00:11:56
primary to physical switchover start time:                   01-Aug-13 08:06:36
physical to primary switchover finish time:                  01-Aug-13 08:06:59
primary services offline for:                                      +00 00:00:23

SUCCESS: The physical rolling upgrade is complete

If we look at the statistics above, the key point to note is how long the application or database was actually down.

In this test I started the rolling upgrade on one day and continued it the next day, which accounts for the roughly 23 hours shown against wait time for user upgrade.

The actual downtime was incurred over the two separate switchovers: one of 12 seconds and the other of 23 seconds, giving a total actual downtime of 35 seconds.
Now connect to the original primary database and check that the database role is back to what it originally was.
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

Check that the last change made has been applied.

SQL> select distinct cust_city from mycustomers;

CUST_CITY
------------------------------
Timbuktu

Lastly, shut down and restart the standby database, then start managed recovery.

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
Total System Global Area 801701888 bytes
Fixed Size 2293496 bytes
Variable Size 314573064 bytes
Database Buffers 478150656 bytes
Redo Buffers 6684672 bytes
Database mounted.
Database opened.
SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.

Ahmed Hazzaf
