
How to create an external table in Oracle

To create an external table in Oracle, you need to perform the following steps:

Connect to the Oracle database server as a user with the CREATE TABLE privilege. You can do this using SQL*Plus or any other Oracle client.

Create a directory object that points to the location of the data file on the file system. You can do this using the following SQL statement:

CREATE DIRECTORY my_dir AS '/path/to/data/files';

Grant read permission on the directory object to the user who will be querying the external table. You can do this using the following SQL statement:

GRANT READ ON DIRECTORY my_dir TO my_user;

Create the external table using the CREATE TABLE statement. The table definition should include the ORGANIZATION EXTERNAL clause and a LOCATION clause that specifies the directory object and file name.

CREATE TABLE my_table (
    col1 NUMBER,
    col2 VARCHAR2(20),
    col3 DATE
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER ...
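For completeness, here is a minimal sketch of what the full statement might look like, assuming a comma-delimited file named my_data.csv sitting in my_dir (the file name and access parameters are illustrative, not from the original post):

CREATE TABLE my_table (
    col1 NUMBER,
    col2 VARCHAR2(20),
    col3 DATE
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY my_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
    )
    LOCATION ('my_data.csv')
)
REJECT LIMIT UNLIMITED;

-- col3 is expected in the session's default date format here;
-- adjust the access parameters if the file stores dates differently.
SELECT * FROM my_table;

Once created, the table can be queried like any ordinary table, but the data stays in the flat file and is read at query time.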

Setting up RAC in Oracle 21c

Oracle Real Application Clusters (RAC) is a feature of Oracle Database that allows multiple computers to run Oracle RDBMS software simultaneously while accessing a single database, providing high availability and scalability. Here is a simple guide on how to set up a RAC environment in Oracle 21c:

Install Oracle 21c on multiple servers that will be part of the RAC cluster. Make sure to select the "RAC" option during the installation process.

Configure the Oracle Grid Infrastructure on all servers in the RAC cluster. This includes setting up the Oracle Clusterware, creating a voting disk and OCR (Oracle Cluster Registry) file, and configuring the network interfaces.

Create a database instance on each server in the RAC cluster. This involves creating a parameter file and starting up the instance using the Oracle Enterprise Manager.

Create a database in the RAC environment using the Database Configuration Assistant (DBCA) or by manually creating the database using SQL command...
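Once the stack is up, a quick sanity check from the command line is worth doing. A hedged sketch, assuming the RAC database was created with the name RACDB (a placeholder):

$ crsctl check cluster -all          # Clusterware health on every node
$ srvctl status database -d RACDB    # status of each RAC instance
$ srvctl config database -d RACDB    # registered configuration of the database

If the clusterware checks come back online on all nodes and srvctl reports every instance running, the basic RAC setup is in place.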

Oracle Pluggable Databases

Oracle Pluggable Databases (PDBs) are a feature of Oracle 12c and above that allow a single Oracle instance to host multiple, independent databases within it. This can be useful in a number of different scenarios, and can help organizations to better manage their database infrastructure and resources.

PDB for development

One common use case for Oracle PDBs is for testing and development purposes. PDBs allow organizations to create multiple copies of their production databases, each with its own unique set of data and schema objects. This can be useful for testing new applications or making changes to the database schema without affecting the production environment.

PDB for consolidation

Another use case for Oracle PDBs is for consolidation of multiple databases onto a single server. This can help to reduce the number of servers required to support an organization's database infrastructure, and can also make it easier to manage and maintain those databases. PDBs can also be use...
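As a rough illustration of how lightweight PDB provisioning is, here is a hedged sketch of cloning a development PDB from an existing one inside the same container database (the PDB names and file destinations are made up for the example):

-- Connected to the root container (CDB$ROOT) as a user with CREATE PLUGGABLE DATABASE privilege
CREATE PLUGGABLE DATABASE dev_pdb FROM prod_pdb
    FILE_NAME_CONVERT = ('/u01/oradata/prod_pdb/', '/u01/oradata/dev_pdb/');

-- Open the new PDB and switch the session into it
ALTER PLUGGABLE DATABASE dev_pdb OPEN;
ALTER SESSION SET CONTAINER = dev_pdb;

The clone gets its own copy of the source PDB's data and schema objects, which is what makes this pattern useful for test and development work.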

Import tables with data pump

You can import specific tables with Data Pump by using the TABLES parameter.

impdp user/pwd directory=MY_DIR dumpfile=export.dmp logfile=import.log tables=schema1.table_name1,schema2.table_name2
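If the table list gets long, the same import can be expressed through a parameter file. A sketch with an assumed file name (tables.par):

$ cat tables.par
directory=MY_DIR
dumpfile=export.dmp
logfile=import.log
tables=schema1.table_name1,schema2.table_name2

$ impdp user/pwd parfile=tables.par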

Use data pump to extract a package from a dump file

Here's the only method I've found for using Data Pump to extract a package from a dump file. You don't actually import it directly into a database; impdp creates a SQL file with the source code for re-creating the package.

impdp username/password directory=MYDIR dumpfile=export.dmp sqlfile=mysql.sql

This will get all the source code for every package in your dump file.
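If you only want one package rather than every package in the dump, the INCLUDE parameter should let you narrow the SQL file down. A hedged sketch (the package name is a placeholder; because the quotes usually need escaping on the command line, a parameter file is the easier route):

$ cat pkg.par
directory=MYDIR
dumpfile=export.dmp
sqlfile=my_package.sql
include=PACKAGE:"IN ('MY_PACKAGE')"

$ impdp username/password parfile=pkg.par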

How to import schemas using datapump

Using Data Pump to import one or more schemas into an Oracle database is very easy.

impdp user/password dumpfile=file.dmp schemas=SCOTT,USER2 directory=DMP_DIR

RMAN backups, archive logs, and Oracle Streams - proper backup and keeping in synch

If you have Streams configured in your Oracle database environment, you probably know that RMAN is "Streams aware", meaning it knows which archive logs Streams still needs and won't remove those from disk once backed up. There are, however, RMAN commands that will remove archive logs even if they're still needed by Streams. The problem I encountered the other day was with this RMAN command to back up my archive logs, which I run every 15 minutes:

run {
  backup archivelog all delete input;
}

My archive logs were removed from disk after being backed up, but not if needed by Streams. The problem was my backup files were getting cumulatively larger throughout the day, because every time it ran, RMAN backed up every archive log still on disk. This quickly filled the disk. I only wanted to back up archive logs that had not already been backed up, so I tried this:

run {
  backup archivelog all not backed up 1 times delete input;
}

My archive backups were no...

Corrupted Free-Space in Oracle Database

Several weeks ago we had major corruption across the file-system on our production database server. Everything was affected: Oracle home files, database files, logs...everything. After I restored the database, I used RMAN to search for corruption in the instance.

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

This command will validate all database and archive log files that would be backed up, checking for physical and logical corruption. No backup is actually done with the command. When corruption is found, it can be seen in V$DATABASE_BLOCK_CORRUPTION. My output was the following:

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTION_TYPE
---------- ---------- ---------- ------------------ ------------------------------------
        15    2061570        126         9709077348 NOLOGGING
        15    2061442        126         9709084124 NOLOGGING

Corruption was found, with a corruption type of NOLOGGING, but I still couldn't find the objects that were corrupted.

SELECT owner, segment_...
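To map the rows in V$DATABASE_BLOCK_CORRUPTION back to segments, the usual approach is a lookup against DBA_EXTENTS. A hedged sketch using the first file and block number from the output above (not necessarily the exact query from the original post):

SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = 15
AND    2061570 BETWEEN block_id AND block_id + blocks - 1;

-- If no rows come back, the corrupt blocks fall in free space rather than in a segment,
-- which matters because free-space corruption does not affect any existing object.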

Use rsync on Unix to backup your archive logs

Rsync is a useful Unix tool for keeping directories in synch between two servers. I use it to copy Oracle archive logs from a production server to a test server. This command runs on my test server from the oracle user's crontab every 5 minutes. It copies the files via SSH. New files from the source directory are copied to the test backup directory. If a file is deleted from the source directory, it is NOT removed from the target directory. I have it set up this way because I want to control that manually. I have a separate cronjob that purges archive log files over 3 days old.

rsync -rave "ssh -l oracle" sourceServer:/u06/oradata/PROD/arch /u06/oradata/prodArchBkp

Rsync also preserves the timestamp of the source file in the target directory. You can also use rsync to delete files in the target directory that do not exist in the source directory.

rsync -rave "ssh -l oracle" --delete sourceServer:/u06/oradata/PROD/arch ...
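For reference, the crontab entry on the test server would look roughly like this. The schedule and paths come from the post; the log redirection path is just an assumption for the example:

*/5 * * * * rsync -rave "ssh -l oracle" sourceServer:/u06/oradata/PROD/arch /u06/oradata/prodArchBkp >> /tmp/arch_rsync.log 2>&1

Redirecting the output to a log file makes it easier to see when a transfer fails silently.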

DBVerify can delete contents of datafiles

I was running Oracle DBVerify against my datafiles post-recovery to look for any corruption. Things were going fine until I made a mistake in the syntax of my command. I accidentally set the logfile parameter to the datafile name, and DBVerify completely erased the contents of my datafile.

system01.dbf before my mistake:

770M Sep 7 23:42 system01.dbf

I ran this command:

dbv blocksize=8192 file=system01.dbf logfile=system01.dbf

After DBVerify:

0 Sep 8 12:37 system01.dbf

Lesson learned. Be very careful when cutting and pasting!
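For the record, the command I meant to run keeps the log separate from the datafile; something like this, with an arbitrary log file name:

dbv file=system01.dbf blocksize=8192 logfile=dbv_system01.log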

Disaster Recovery solution: Update

I tested our previously mentioned DR solution a few days ago. Here's what I did: First, I took a hot backup of our production database (PROD) and moved it to our test server. I then created a new stand-by instance on the test server, started it up in nomount specifying the pfile, and set its DBID to match the DBID of PROD. This allows the DR database to apply the archived redo logs.

RMAN> SET DBID XXXXXXXXXXXX;

Next, I restored the controlfile from PROD's backup and mounted the stand-by database.

RMAN> restore controlfile from '/oback/control_PROD_ctl';
RMAN> alter database mount;
RMAN> exit;

The next step was to restore the PROD datafiles.

RMAN> restore database;

After some archived redo logs appeared on PROD, I copied them to the test server and applied them using RMAN.

RMAN> recover database;

The database recovered using all of the available archived redo logs. It did throw an error saying it was looking for the next log which had not bee...

Disaster Recovery solution

My colleagues and I are currently working on a disaster recovery solution that would allow us to bring up a mirror of our production database off campus within minutes of it failing. The best DR solution would be to implement Oracle's Data Guard; however, we aren't licensed for it because of the cost. So...our Unix administrator is going to use a program that will copy an entire mount-point anytime a file is changed, added, or removed. We're going to multiplex the files we want mirrored (the controlfiles, online redo-logs, and archived redo-logs) to a new mount-point so we don't grab a bunch of unnecessary things. The datafiles will be copied over every night. We should be able to then recover the stale datafiles using the redo-logs that accumulated during the day. Anything I might be missing? I'll post an update when we've tested.
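For anyone wanting to try the same thing, multiplexing those files to the mirrored mount-point looks roughly like this. A hedged sketch with made-up paths (/u07 standing in for the new mount-point), not the exact commands from our environment:

-- Add a second archive log destination on the mirrored mount-point
ALTER SYSTEM SET log_archive_dest_2='LOCATION=/u07/oradata/PROD/arch' SCOPE=BOTH;

-- Add an extra member on the mirrored mount-point to each online redo log group
ALTER DATABASE ADD LOGFILE MEMBER '/u07/oradata/PROD/redo01b.log' TO GROUP 1;

-- Register a controlfile copy on the mirrored mount-point; this only takes effect at the
-- next startup, after physically copying an existing controlfile there while the database is down
ALTER SYSTEM SET control_files='/u06/oradata/PROD/control01.ctl','/u07/oradata/PROD/control02.ctl' SCOPE=SPFILE;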

RMAN Restore vs. Recovery - what's the difference?

RESTORE is used to restore the datafiles from a backup. You can restore up to the time of the backup, or to a point before it using SET UNTIL. You would lose any changes made to the database after the backup by performing only a RESTORE. RECOVER is used to roll the database forward after a RESTORE by applying the redo logs. In essence, a full recovery = RESTORE + redo logs. The following example assumes you have lost your control files and datafiles, but still have your redo logs.

$ rman target / nocatalog
RMAN> startup nomount;
RMAN> restore controlfile from '/oback/controlfile';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;
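Since SET UNTIL is mentioned above, here is a hedged example of a point-in-time restore and recovery run from the same RMAN session, with an obviously placeholder timestamp:

run {
  set until time "to_date('2024-01-15 12:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
alter database open resetlogs;

Everything after the UNTIL time is discarded, which is why the database has to be opened with RESETLOGS.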