Wednesday, August 21, 2013

Upgrading Oracle 10g XE to Oracle 11g XE

Task:

The task is to upgrade Oracle 10g XE to 11g XE to support our growing application. The application tracks patient information for a government agency, and the growing data volume has been causing performance issues on the current 10g XE installation. Management decided to upgrade for three reasons:

1. Government requirement that databases for any of its agencies must be running on 11g or above;
2. Better performance, as 10g XE is already struggling under the growth of the data; and
3. Oracle will stop supporting 10g XE, so it makes sense to upgrade.

Environment:

The operating system is Windows XP and the database is Oracle 10g XE. The database has two main schemas (PATIENT and DOCTOR). These schemas must be migrated to the new database (11g XE) with all database objects intact.


Step-by-step guide to Upgrade
 
1. Export schemas

exp   system/*****   owner=patient    file=PATIENT.DMP

exp   system/*****   owner=doctor   file=DOCTOR.DMP
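
Before exporting, it helps to record how many objects each schema owns so you can compare the counts after the import. A minimal check, run as SYSTEM:

select owner, object_type, count(*)
from dba_objects
where owner in ('PATIENT', 'DOCTOR')
group by owner, object_type
order by owner, object_type;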

 

2. Uninstall Oracle 10g XE

Go to "Control Panel"

Uninstall Oracle Database 10g Express Edition

Delete the folder c:\oraclexe\.....\BIN (if it was not removed by the uninstall operation)

 
3. Install Oracle 11g XE

Click on "Next" to start the installation



Accept the license agreement

 

Enter password for SYS and SYSTEM users



Choose the location for the binary files. Use "Browse" to pick a different location



Let the installer finish the installation



 

 
4. Create tablespaces for the user accounts

create tablespace doctor
datafile 'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\DOCTOR01.DBF' size 500m autoextend on next 50m maxsize unlimited;

create tablespace patient
datafile 'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\PATIENT01.DBF' size 500m autoextend on next 50m maxsize unlimited;

create temporary tablespace doctortemp
tempfile 'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\DOCTORTEMP01.DBF' size 100m autoextend on next 10m maxsize unlimited;

create temporary tablespace patienttemp
tempfile 'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\PATIENTTEMP01.DBF' size 100m autoextend on next 10m maxsize unlimited;

commit;
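
A quick sanity check as SYSTEM confirms that the new tablespaces exist and are online:

select tablespace_name, status, contents
from dba_tablespaces
where tablespace_name in ('DOCTOR', 'PATIENT', 'DOCTORTEMP', 'PATIENTTEMP');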

5. Create user accounts 

create user doctor identified by "oracle23678"
default tablespace doctor
temporary tablespace doctortemp
quota unlimited on doctor;

create user patient identified by "oracle23679"
default tablespace patient
temporary tablespace patienttemp
quota unlimited on patient;
commit;

6. Grant privileges to user accounts  

grant create session to patient;
grant create any table to patient;
grant create any sequence to patient;
grant create any procedure to patient;
grant create any synonym to patient;
grant create any index to patient;
grant create any trigger to patient;

grant create session to doctor;
grant create any table to doctor;
grant create any sequence to doctor;
grant create any procedure to doctor;
grant create any synonym to doctor;
grant create any index to doctor;
grant create any trigger to doctor;

commit;
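
To confirm the grants took effect, the DBA_SYS_PRIVS view lists system privileges per grantee:

select grantee, privilege
from dba_sys_privs
where grantee in ('PATIENT', 'DOCTOR')
order by grantee, privilege;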

 
7. Change password settings
 
Passwords in Oracle 10g XE are not case sensitive, whereas 11g enforces case-sensitive logons by default. If your applications rely on the old behaviour, you can disable case sensitivity. Please note that this step is optional in the upgrade process.
As SYS, run this command:

alter system set SEC_CASE_SENSITIVE_LOGON = FALSE;

commit;
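
If you later decide to switch to case-sensitive passwords, re-enable the parameter and reset each user's password so a case-sensitive password version is stored. A minimal sketch as SYS (the passwords below are placeholders):

alter system set SEC_CASE_SENSITIVE_LOGON = TRUE;
alter user patient identified by "NewPatientPwd1";
alter user doctor identified by "NewDoctorPwd1";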

 
8. Import schemas

Import the schemas into Oracle 11g XE using the import (imp) utility.

imp system/***** file=patient.dmp fromuser=patient touser=patient

imp system/***** file=doctor.dmp fromuser=doctor touser=doctor

 
Once the import is complete, check that the schema objects were imported correctly from the dump file (a suggested check follows below). You should then take a backup of your Oracle 11g XE database using RMAN or Export.
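
One way is to rerun the object-count query from step 1 and look for invalid objects:

select owner, object_type, count(*)
from dba_objects
where owner in ('PATIENT', 'DOCTOR')
group by owner, object_type
order by owner, object_type;

select owner, object_name, object_type
from dba_objects
where owner in ('PATIENT', 'DOCTOR')
and status = 'INVALID';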

Thanks for reading.

EXP-00091 Exporting questionable statistics

Problem:

We were upgrading from 10g to 11g. We exported the schemas from the 10g database so that they could be imported into the newly installed 11g database. During the export we ran into the warning EXP-00091: Exporting questionable statistics.

Cause:

The "EXP-00091 Exporting questionable statistics" error occurs when a DBA tries to export table data with its related optimizer statistics, and Oracle cannot verify the currency of these statistics. Please note that Oracle has to verify statistics whenever it does an export; otherwise this error will be thrown.
DBMS_STATS controls this process. You can always skip and let Oracle handle the statistics after the data has been imported into the new database. This process is known as statistics recalculation.

Solution:

Run 'exp' with the "statistics=none" option to skip exporting statistics and bypass the warning. For instance:

exp scott/tiger file=scott.dmp log=scott.log statistics=none

This will create a dump file named "scott.dmp" with no optimizer statistics included.
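
After the import, fresh statistics can be gathered on the target database instead. A minimal sketch with DBMS_STATS, using the schema from the example above:

exec dbms_stats.gather_schema_stats(ownname => 'SCOTT', cascade => TRUE);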

 

Thursday, August 15, 2013

Datapump import/export

Data Pump import/export (impdp/expdp) is used for data migration. You can export tables, schemas, tablespaces, or the whole database and restore them on another server.

Quickfacts:

1. Data Pump produces a dump file, conventionally with the extension .dmp (the default file name is expdat.dmp)
2. The EXP_FULL_DATABASE role is required to export the full database
3. The IMP_FULL_DATABASE role is required to import the full database
4. In Data Pump, you can use a parfile to store your parameters (see Example 9 below)
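
Several of the examples below rely on directory objects and the default dump location. You can list the directory objects and the paths they point to with a quick query as a privileged user:

select directory_name, directory_path
from dba_directories
order by directory_name;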

Getting the help page

expdp help=y

Export: Release 10.1.0.2.0 - Production on Tuesday, 23 March, 2004 8:33

Copyright (c) 2003, Oracle.  All rights reserved.


The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

   Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

   Format:  expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
   Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to unload where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dumpfiles and logfiles.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
TABLES                Identifies a list of tables to export - one schema only.
TABLESPACES           Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.


Example 1: Joe is a user on the database and needs to export the whole database.
Solution:
c:\> sqlplus sys/**** as sysdba
     grant exp_full_database to joe;
     commit;
     exit;
c:\> expdp joe/**** full=y dumpfile=joe_database.dmp logfile=joe.log compression=none
^The full database is exported
^The dump file will be created in the default location, which is 'c:\app\user\admin\orcl\dpdump'

Example 2: System needs to export Joe's schema with compression
Solution:
c:\> expdp system/**** full=n schemas=joe dumpfile=joe_schema.dmp logfile=joe1.log compression=all
^Joe's schema is exported
^The dump file will be created in the default location, which is 'c:\app\user\admin\orcl\dpdump'
^The dump file will be compressed (COMPRESSION=ALL requires the Advanced Compression option; otherwise use METADATA_ONLY)

Example 3: Joe needs to export 2 tables (Student, Lecturer) to a location on disk ('c:\oracle')
Solution:
c:\> sqlplus joe/****
     create directory oracle as 'c:\oracle';
     commit;
     exit;
c:\> expdp joe/**** full=n tables=Student,Lecturer directory=oracle dumpfile=joe_tables.dmp logfile=joe2.log
^Joe's tables are exported
^Dumpfile will be created in 'c:\oracle'
^There should be no space after the commas in the table list, e.g. tables=(student1,student2,student3)

IMPORT

Example 4: Joe needs to import 2 tables (Student, Lecturer) from a directory called 'oracle'
Solution:
c:\> impdp joe/**** directory=oracle dumpfile=joe_tables.dmp tables=Student,Lecturer logfile=joe3.log
^Joe's tables will be imported

Example 5: Joe needs to import 2 tables (Student, Lecturer) and rename them to 'student1' and 'student2'
Solution:
c:\> impdp joe/**** directory=oracle dumpfile=joe_tables.dmp logfile=joe3.log remap_table=joe.Student:student1,joe.Lecturer:student2
^Joe's tables will be imported and renamed
^Table mappings are separated by commas; a colon (:) separates the old name from the new name

Example 6: System needs to import the whole database from a dump file on a flash drive
Solution:
c:\> cd c:\app\user\admin\orcl\dpdump
Copy the dump file to this folder
Launch impdp from this location
c:\app\user\admin\orcl\dpdump> impdp system/**** full=y dumpfile=database.dmp logfile=db.log

Example 7: System needs to import Joe's schema from a dump file (of the full database) on a flash drive
Solution:
c:\> cd c:\app\user\admin\orcl\dpdump
Copy the dump file to this folder
Launch impdp from this location
c:\app\user\admin\orcl\dpdump> impdp system/**** full=n schemas=joe dumpfile=database.dmp logfile=db1.log
^Joe's schema will be imported


Example 8: System needs to import Joe's schema, remap it to DANIEL, and remap the tablespace to Daniel's tablespace,
from a dump file (of the full database) on a flash drive
Solution:
c:\> cd c:\app\user\admin\orcl\dpdump
Copy the dump file to this folder
Launch impdp from this location
c:\app\user\admin\orcl\dpdump> impdp system/**** full=n dumpfile=database.dmp logfile=db2.log remap_schema=joe:daniel remap_tablespace=users:daniel_tbs
^Joe's schema will be imported and remapped to DANIEL
^The tablespace will be remapped from USERS to DANIEL_TBS
^You MUST physically create this tablespace and grant access to Daniel beforehand (see the sketch below)
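
A minimal sketch of that prerequisite setup, assuming the DANIEL account does not already exist and using a datafile path and password of your own choosing:

create tablespace daniel_tbs
datafile 'C:\APP\USER\ORADATA\ORCL\DANIEL_TBS01.DBF' size 500m autoextend on next 50m maxsize unlimited;

create user daniel identified by "ChangeMe123"
default tablespace daniel_tbs
quota unlimited on daniel_tbs;

grant create session to daniel;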

Example 9: System needs to import schemas using a parameter file
Solution:
Create a parfile called 'joe.txt' containing:
FULL=N
SCHEMAS=(joe,daniel,class)
DIRECTORY=oracle
DUMPFILE=database.dmp
LOGFILE=db4.log
Save the file and exit.
c:\> impdp system/**** parfile=joe.txt
^The schemas listed in the parfile will be imported

A parfile makes it more convenient to adjust Data Pump commands, and it helps when you need to run the same command repeatedly.

Wednesday, August 14, 2013

Encrypting RMAN backup

Configuring encryption for Oracle Recovery Manager (RMAN) backups is always a good step in the right direction. There may be different reasons for encrypting RMAN backups:
  • Greater security of data;
  • Customer requirements (e.g., most companies require encryption for any database file containing SSNs, credit card numbers, dates of birth, etc.); or
  • Compliance with laws or regulations.
 
Option 1: Use the global security wallet to encrypt backups

*Configure the encryption wallet.

Create a directory called "wallet" in $ORACLE_BASE/admin/$ORACLE_SID (the default wallet location when none is configured in sqlnet.ora):

mkdir /home/oracle/app/oracle/admin/orcl/wallet

*Issue this command as SYS to set the master encryption key (this also creates and opens the wallet):

SQL> alter system set encryption key identified by "oracle1";

*Open the wallet (needed after each instance restart):

SQL> alter system set encryption wallet open identified by "oracle1";

*Log in to RMAN to encrypt the backup

rman target /

RMAN> configure encryption for database on;

RMAN> backup database;
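
You can confirm that the wallet is open at any point by querying V$ENCRYPTION_WALLET as SYS:

SQL> select wrl_type, wrl_parameter, status from v$encryption_wallet;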


Option 2: Configure encryption right from RMAN

Another option is to configure encryption directly from RMAN. You can also use options 1 and 2 together for double protection. Option 1 is global because the wallet controls every encryption operation on the database, including backups; option 2 is restricted to the RMAN prompt alone.

*Log in to RMAN and configure Encryption

rman target /

RMAN> set encryption on identified by "oracle1" only;

RMAN> backup database;

Note: You don't need a wallet to implement this. You can combine it with the transparent wallet-based encryption from option 1 if you like; that will give you dual protection.
You need to specify this password during recovery, as shown below.
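
A minimal sketch of restoring from a password-encrypted backup, with the database already mounted and assuming the password used above:

rman target /

RMAN> set decryption identified by "oracle1";

RMAN> restore database;

RMAN> recover database;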
  
Removing encryption
 
RMAN> configure encryption for database off;

Note: Set encryption off if at some point you no longer need to encrypt your backups. Some organizations require encryption to be removed for databases that are used only for TEST or DEVELOPMENT purposes.

Query encryption settings

Oracle provides a view that shows the encryption algorithms available to Recovery Manager (RMAN). Issue this command as SYS:

select * from V$RMAN_ENCRYPTION_ALGORITHMS;

This view holds the details of the RMAN encryption algorithms supported by the database. It is essential for a DBA to understand the encryption modes and their restrictions/limitations.
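
If you want a specific algorithm from that list rather than the default, you can set it at the RMAN prompt, for example:

RMAN> configure encryption algorithm 'AES256';

RMAN> show encryption algorithm;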

Thank you for reading.