Tuesday, July 12, 2011

Running Notes From Moid Bhai's Class


Creating Databases:

Options:
1. DBCA (Database Configuration Assistant)
2. Manually using Scripts


May 8, 2011

LOGIN PROCESS:

1)      PuTTY
2)      Mujeeb/abc123
3)      sudo su - oracle
4)      export ORACLE_SID=<DB NAME>
5)      sqlplus / as sysdba



SQL>grant sysdba to moid; (granting SYS-level privileges to Moid)

SQL>grant dba to moid; (granting SYSTEM-level privileges to Moid)


SQL>show parameter processes (shows all the process parameters)

SQL>show parameter name (shows the DB name, or every parameter that contains the word "name")


SQL>show parameter spfile (to find out whether my database started with an spfile or with init.ora)




May 14, 2011

512 bytes = 1 Linux block = ½ KB

16 Linux blocks are combined to form one ORACLE BLOCK:

512 bytes x 16 = 8192 bytes, which is equal to 8 KB, which is equal to 1 ORACLE BLOCK.

(Note: it is 8192 bytes, not 8096, that make up 1 Oracle block.)
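
To confirm the Oracle block size on a given instance (a quick check; 8K is the common default but it can differ per database):

SQL>show parameter db_block_size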

When a bunch of Oracle blocks are combined, it forms an "Extent".
8 Oracle blocks = 1 extent

First extent ID=0

When extents are combined, they form a "Segment".

When all the segments are combined, they form a "Tablespace".

All the tablespaces combined together form the "Database".

The default TBS's are SYSTEM, UNDO, SYSAUX, TEMP, USERS. (The USERS TBS is a default from version 10g onwards)

When a user creates a table, the segment is saved with the same name as the table.
The number of segments is always equal to the number of tables in the database.
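
A quick way to eyeball this claim for one schema (a sketch; once indexes or LOBs exist the total segment count will be higher, since each gets its own segment):

SQL>select count(*) from dba_tables where owner='SCOTT';
SQL>select count(*) from dba_segments where owner='SCOTT' and segment_type='TABLE';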

The extents are expanded in 2 ways:
1) Uniform (extents grow uniformly whenever required, with an equal number of blocks each time)
2) Automatic Extent Management (autoallocate)


SQL>select owner, segment_name, bytes/1024 from dba_segments where owner='SCOTT';

SQL>SELECT COUNT(*) FROM T111;

SQL>delete from t111;


With ASSM the default segment size is 64 KB.

SQL>select owner, segment_name, extent_id, block_id, bytes/1024 from dba_extents where segment_name like 'T%';

May 15, 2011

Table Space


DD Views: V$TABLESPACE, V$DATAFILE

SQL>select name from v$tablespace;

CREATE TABLESPACE
$ cd /u14/oradata/______/_____.dbf
SQL>create tablespace apptbs datafile '/u14/oradata/PrimeDG/AppTBS01.dbf' size 100m;

AUTOEXTEND ON
NEXT 100M
MAXSIZE 4096M

(10g)Extent management local autoallocate
(11g)segment space management auto;

NOTE: The data file can grow up to a maximum of 30 GB in Oracle 10g

create tablespace apptbs datafile '/u14/oradata/PrimeDG/Apptbs01.dbf' size 100m autoextend off extent management dictionary uniform size 10m segment space management manual;


CHECK HOW MANY DATABASES ARE UP
$ cat /etc/oratab

ADD A DATA FILE TO A TABLESPACE
SQL>alter tablespace <tbs name> add datafile '<path>/<file>.dbf' size <n>m;

REDUCE THE SIZE OF A DATAFILE
SQL>alter database datafile '/u14/oradata/PrimeDG/devtbs_mujeeb02.dbf' resize 30m;

SQL>select tablespace_name, file_name, bytes/1024/1024 as size_in_MB from dba_data_files order by file_name;
SQL>select tablespace_name, file_id, file_name, round((bytes/1024/1024),2) as size_in_MB from dba_data_files order by 1;






NOTE: Break on <column name> skip 1
Compute sum of <column name> on <column name>
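
A minimal usage sketch with the size query above (breaking on tablespace_name and totalling the size column):

SQL>break on tablespace_name skip 1
SQL>compute sum of size_in_mb on tablespace_name
SQL>select tablespace_name, file_name, bytes/1024/1024 as size_in_mb from dba_data_files order by tablespace_name;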

SQL>alter database datafile '/u14/oradata/PrimeDG/devtbs_mujeeb02.dbf' resize 40m;
SQL>create user apple1 identified by abc123 default tablespace apptbs;
SQL>alter user dev1 default tablespace devtbs;



May 20, 2011



SQL>select 'drop user '||username||' cascade;' from dba_users where username like 'APP%' or username like 'DEV%';
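
To actually run what this generates, one common pattern is to spool the output to a file first (a sketch; the file path is hypothetical):

SQL>set heading off feedback off
SQL>spool /tmp/drop_users.sql
SQL>select 'drop user '||username||' cascade;' from dba_users where username like 'APP%' or username like 'DEV%';
SQL>spool off
SQL>@/tmp/drop_users.sql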








SQL>create table scott.t100 as select * from scott.emp;

SQL>analyze table scott.t100 compute statistics; (analyzing forcefully; by default Oracle analyzes between 10:00 PM and 6:00 AM, but we can force it if we don't want to wait until then)

SQL>set timi on; (this will show the time taken to retrieve)
SQL>select empno, ename, sal from scott.t100 where empno=7876;
SQL>set linesize 200;
SQL>set autotrace traceonly explain;
SQL>select empno, ename, sal from scott.emp where empno=7876;
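
When done comparing plans, the toggles go back off the same way (standard SQL*Plus settings):

SQL>set autotrace off;
SQL>set timing off;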

May 21, 2011

Every table has two invisible pseudocolumns, namely rowid and rownum

SQL>select rowid, rownum, ename from e1;


SQL>sho parameter sga;

SQL> sho parameter pga;

SGA_MAX_SIZE is a static parameter.
SGA_TARGET and PGA_AGGREGATE_TARGET are dynamic parameters.

REDUCE THE SGA_TARGET VALUE
1. Change the value temporarily (changes the value in the running instance only; after the DB is rebooted, the old value takes over again)
SQL>alter system set SGA_Target=150m scope=memory;
SQL>sho parameter sga_target;
SQL>shut immediate
SQL>startup
SQL>sho parameter sga_target;

2. Change the value for future startups only (written to the spfile; the running instance is unchanged until restart).
SQL>alter system set sga_target=150m scope=spfile;
SQL>sho parameter sga_target;
SQL>shut immediate;
SQL>startup
SQL>sho parameter sga_target;

3. Change the value in the running instance and persist it for all future startups.
SQL>alter system set sga_target=150m scope=both;
SQL>sho parameter sga_target;
SQL>shut immediate;
SQL>startup
SQL>sho parameter sga_target;
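
To compare what the running instance is using against what is stored in the spfile (a quick check):

SQL>show parameter sga_target
SQL>select name, value from v$spparameter where name = 'sga_target';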


RECOVERING FROM A CRASHED SPFILE
$ login as oracle
$ export ORACLE_SID=MujeebDB
$ echo $ORACLE_HOME/dbs
$ sqlplus / as sysdba
SQL>create pfile='/tmp/initMujeebDB.ora' from spfile;
SQL>shut immediate
SQL>exit

$ mv $ORACLE_HOME/dbs/spfileMujeebDB.ora $ORACLE_HOME/dbs/spfileMujeebDB.ora.bad

$ sqlplus / as sysdba
SQL>startup pfile='/tmp/initMujeebDB.ora'
SQL>create spfile='/u14/app/oracle/product/10.2.0/db_1/dbs/spfileMujeebDB.ora' from pfile='/tmp/initMujeebDB.ora';
SQL>shut immediate;
SQL>startup
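
To confirm the instance came back up on the recreated spfile (same check as earlier):

SQL>show parameter spfile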



Changing the parameters one by one (there is another method to do this all at once, which will be taught to us later)

SQL>alter system set <PARAMETER NAME>=<VALUE> <OPTIONS IF ANY>=<VALUE>;
SQL>alter system set processes=450;

Parameter                         Default      New value
processes                         150/30       450
db_writer_processes               ½            6
open_cursors                      300          800
db_file_multiblock_read_count     16           64
sessions                          170          1200
session_cached_cursors            20           40
control_file_record_keep_time     7            14
cursor_sharing                    exact        similar
undo_retention                    900          1200
job_queue_processes               10           12
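
Note: some of these (processes and sessions, for example) are static parameters, so on an spfile-started database the change must be written to the spfile and only takes effect after a restart (a sketch):

SQL>alter system set processes=450 scope=spfile;
SQL>shut immediate
SQL>startup
SQL>show parameter processes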






::::::::::::::::::::::::::::EXPORT IMPORT ::::::::::::::::::::::::::::::

SQL>select username from dba_users where username like 'sc%' or username like 'dev%' order by 1;

EXPORTING
Full Database
Table level
Schema level

Export at OS level

$ export ORACLE_SID=PrimeDG
$ mkdir -p /u18/$ORACLE_SID/Full
$ mkdir -p /u18/$ORACLE_SID/Schema
$ mkdir -p /u18/$ORACLE_SID/Table

$ exp system file=scott_05211.dmp log=scott_05211.log owner=scott statistics=none

$ exp "'/ as sysdba'" file=scott_05211.dmp log=scott_05211.log owner=scott statistics=none

(call the export utility with the user "SYSTEM"; exp is the export utility of Oracle)
exp = call the export utility
system = connect as the following privileged user
file=scott_05211.dmp = the extract file written from the database to the operating system
log=scott_05211.log = log file for this export
owner=scott = schema owner to export
statistics=none = do not export statistics
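
Table level works the same way through the tables parameter (a sketch; the file names here are hypothetical):

$ exp "'/ as sysdba'" file=scott_emp_05211.dmp log=scott_emp_05211.log tables=scott.emp statistics=none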

IMPORTING
1. IMPORT THE WHOLE DUMP FILE
$ imp "'/ as sysdba'" file=scott_05211.dmp log=scott_05211_imp.log fromuser=SCOTT touser=dev1


2. IMPORT ONLY EMP TABLE FROM THE DUMP FILE
$ imp "'/ as sysdba'" \
file=scott_05211.dmp \
log=scott_05211_imp_tableonly.log \
fromuser=scott \
touser=dev2 \
tables=(emp)


May 22, 2011

TASK LIST::::
  1. Create FH_Dev schema
Create user in ShibaDB named fh_dev1
grant connect, resource, unlimited tablespace, create synonym, create role to fh_dev1;

SQL>grant connect, resource to fh_dev1 identified by abc123;
SQL>alter user fh_dev1 default tablespace XXXXXXX;

  2. Copy the contents from scott to FH_Dev
Copy the contents of scott to user fh_dev1
scp 031611 192.168.0.202:/u18/ShibaDB/Schema/

  3. Export FH_Dev
Export the schema of fh_dev1
$ exp "'/ as sysdba'" file=fh_dev1.dmp log=fh_dev1.log owner=fh_dev1 statistics=none

  4. scp the file to the QA server

  5. Create new schema for QA
create user fh_qa1 in MujeebDB

  6. Import the data






Send from QA to 231 and then import to PrimeDG



$ login to the DB server as oracle
$ export ORACLE_SID=ShibaDB
SQL>create a tablespace for the developers
SQL>create a user called fh_dev
SQL>give the proper permissions and grants
SQL>exit

$ mkdir –p /u18/ShibaDB/Schema

Export
$ exp "'/ as sysdba'" file=scott.dmp log=scott.log owner=scott statistics=none

IMPORT TO FH_DEV
$ imp "'/ as sysdba'" file=scott.dmp log=scott_IMP.log fromuser=SCOTT touser=fh_dev

$ cd /u18/ShibaDB/Schema

NOW EXPORT THE FH_DEV SCHEMA
$ exp "'/ as sysdba'" file=fh_dev.dmp log=fh_dev.log owner=fh_dev statistics=none

NOW COPY FH_DEV.DMP TO THE MUJEEBDB SERVER
$ ssh 192.168.0.202 mkdir -p /u18/MujeebDB/Schema
$ scp fh_dev* 192.168.0.202:/u18/MujeebDB/Schema/.


ON MUJEEBDB
create user fh_qa identified by abc123 default tablespace qatbs; 
give all the required grants.

NOW IMPORT THE DUMP
$ cd /u18/MujeebDB/Schema
$ imp "'/ as sysdba'" file=fh_dev.dmp log=fh_qaIMP.log fromuser=fh_dev touser=fh_qa
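
A quick sanity check after the import (assuming scott's emp table came across into fh_qa):

SQL>select count(*) from fh_qa.emp;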


June 25, 2011

DATA PUMP

1. CREATE DIRECTORIES AT O/S LEVEL
$ mkdir -p /u18/smdb/Full
$ mkdir -p /u18/smdb/Schema
$ mkdir -p /u18/smdb/Table

2. CREATE DIRECTORIES AT DATABASE LEVEL AND POINT TO OS DIRECTORIES:
SQL>create directory Mujeeb_FullDir as '/u18/smdb/Full';
SQL>create directory SchemaDIR as '/u18/smdb/Schema';
SQL>create directory TableDIR as '/u18/smdb/Table';
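
If a non-SYSDBA user will run Data Pump against these directories, they also need read/write on the directory objects (a sketch):

SQL>grant read, write on directory SchemaDIR to scott;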

3. TAKE AN EXPORT OF SCHEMAS OF USERS SCOTT AND DEVRY.
$ expdp "'/ as sysdba'" directory=SchemaDIR dumpfile=scott_june25.dmp logfile=scott_june25.log schemas=scott,devry

4. TAKE FULL DATABASE EXPORT
$ expdp "'/ as sysdba'" directory=Mujeeb_FullDir dumpfile=FULL.dmp logfile=FULL.log FULL=y parallel=10 job_name=Mujeeb_Full

$ expdp "'/ as sysdba'" directory=FULLDIR dumpfile=FULL.dmp logfile=FULL.log schemas=SCOTT parallel=2 job_name=Scott_job

$ expdp "'/ as sysdba'" directory=FULLDIR dumpfile=FULL.dmp logfile=FULL.log FULL=y parallel=10 job_name=Mujeeb_Full



5. CHECK THE RUNNING JOB STATUS
SQL>set linesize 200;
SQL>set pagesize 200;
SQL>Select owner_name, job_name, state from dba_datapump_jobs;
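
It is also possible to re-attach to a running job by its job_name and control it from the interactive prompt (a sketch; status and stop_job are standard interactive commands):

$ expdp "'/ as sysdba'" attach=Mujeeb_Full
Export> status
Export> stop_job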







June 26, 2011

FULL EXPORT WITH PARALLEL PROCESSES
expdp \
"'/ as sysdba'" \
directory=FULLDIR \
dumpfile=full_062611_%U.dmp \
logfile=full_062611.log \
full=y \
parallel=10 \
job_name=Full_job_1

$ expdp "'/ as sysdba'" directory=Mujeeb_FullDir dumpfile=FULL_june_%U.dmp logfile=FULL_june.log FULL=y parallel=10 job_name=Mujeeb_Full_1

When using more than one process (parallel processing), the system would overwrite the same file again and again, as many times as the parallel setting. To overcome this, include %U in the file name and the system will assign a unique number to each file.

full_062611_%U.dmp
full_062611_01.dmp
full_062611_02.dmp
full_062611_03.dmp
full_062611_04.dmp
full_062611_05.dmp
full_062611_06.dmp
full_062611_07.dmp
full_062611_08.dmp
full_062611_09.dmp
full_062611_10.dmp


SCHEMA LEVEL


$ expdp "'/ as sysdba'" directory=FULLDIR dumpfile=FULL.dmp logfile=FULL.log schemas=SCOTT parallel=2 job_name=Scott_job






IMPORTING ONE SCHEMA FROM A FULL EXPDP

impdp \
"'/ as sysdba'" \
directory=fulldir \
dumpfile=full.dmp \
logfile=scott_mary_imp.log \
schemas=SCOTT \
remap_schema=SCOTT:mary

$ impdp \
"'/ as sysdba'" \
directory=Mujeeb_FullDir \
dumpfile=FULL_june_%U.dmp \
logfile=scott_mary_imp.log \
schemas=SCOTT \
remap_schema=scott:mary \
job_name=sm_1


SQL>alter user mary identified by abc123;
SQL>alter user mary account unlock;
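
A quick check that the remap landed (assuming scott's tables now belong to mary):

SQL>select table_name from dba_tables where owner='MARY';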


IMPORT THE STRUCTURE ONLY
$impdp \
"'/ as sysdba'" \
directory=Mujeeb_FullDir \
dumpfile=FULL_june_%U.dmp \
logfile=scott_mary_imp.log \
schemas=SCOTT \
remap_schema=scott:mary3 \
content=metadata_only \
job_name=sm_3



IMPORT THE DATA NOW
$impdp \
"'/ as sysdba'" \
directory=Mujeeb_FullDir \
dumpfile=FULL_june_%U.dmp \
logfile=scott_mary_imp.log \
schemas=SCOTT \
remap_schema=scott:mary3 \
table_exists_action=append \
job_name=sm_4

(add remap_tablespace=temp2:temp to the command if remapping the tablespace as well)

TRY WITH TRUNCATE
$impdp \
"'/ as sysdba'" \
directory=Mujeeb_FullDir \
dumpfile=FULL_june_%U.dmp \
logfile=scott_mary_imp.log \
schemas=SCOTT \
remap_schema=scott:mary3 \
table_exists_action=truncate \
job_name=sm_5
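
Related: to only inspect the DDL inside a dump without importing anything, impdp can write it to a SQL file instead (a sketch; sqlfile is a standard impdp parameter, the file name here is hypothetical):

$ impdp \
"'/ as sysdba'" \
directory=Mujeeb_FullDir \
dumpfile=FULL_june_%U.dmp \
sqlfile=scott_ddl.sql \
schemas=SCOTT
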
July 2, 2011

BUFFER CACHE

The process coming from outside is the USER PROCESS, and the process running inside Oracle is the SERVER PROCESS.


PMON:- Process Monitor; cleans up after failed (orphaned) user processes
SMON:- System Monitor; performs instance recovery and cleans up temporary segments

SGA


Thick Client: when all the binary files sit on the local machine, it is known as a Thick Client.
(Example: Yahoo Messenger installed as a standalone application)

Thin Client: the binary files are on the server. (Example: Yahoo Messenger embedded in Yahoo Mail, used from a web browser)

Buffer Cache
Free Buffer
Pinned Buffer
Dirty Buffer
The Buffer Cache flushes out the dirty blocks when the threshold point is reached. By default, flushing takes place every 3 seconds.

Anything read from the disk is known as HARD PARSING (a physical read).

Reading directly from memory is SOFT PARSING (a Logical Read).

DIRTY BLOCKS
  1. Hot Blocks
  2. Cold Blocks
Hot blocks are those blocks that are used the most, and cold blocks are the ones that are not in use anymore. Hot blocks are the Most Recently Used (MRU) blocks and cold blocks are the Least Recently Used (LRU) blocks.

$ ps -ef | grep $ORACLE_SID

BG Processes:
CKPT: The CKPT wakes the DBWR every 3 seconds.

DBWRn (n can be between 0 and 9): this bg process writes the information to the data files. It invokes itself every 3 seconds.

MMAN: This bg process balances and rebalances the memory.

MMON: it constantly monitors the memory and stays in contact with the MMAN process for allocating and deallocating memory.

SQL>Sho parameter job
SQL>Sho parameter writer
SQL>Sho parameter process
SQL>sho parameter sga


THE WHOLE CONNECTION PROCESS:
·         The user on the client machine sends a request (querying the Database).
·         Before the query is forwarded to the server, the syntax check is done by the sqlplus application on the client machine; only if there are no errors is the request forwarded to the server.
·         After the connection, the request goes to the Data Dictionary cache in the SGA, which checks that the table and its columns exist.
·         Then it goes to the Library cache, which checks whether the query has been executed previously.
·         Then it goes to the Buffer cache and looks for free blocks; after getting free blocks it pins them, and then
·         it goes to the database, locks the blocks that hold the data, and brings them back to the Buffer Cache.
·         Then the data is passed back through the Library Cache, which updates itself and returns the query result to the user.


Before it updates the value, it sends the old value to the UNDO TBS, then records both the old and new values in the Redo cache, and then updates the Buffer Cache with the new value.

SYSTEM CHANGE NUMBER (SCN)
Any change in the Database is given a number known as the SCN. "Any change" means even simple select statements are also given an SCN.

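On 10g and later, a quick way to see the database's current SCN:

SQL>select current_scn from v$database;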


REDO LOG INFORMATION DETAILS
SCN#
Old Value
New Value
Who Executed
Time it got Executed
Time it got committed


NO ARCHIVE LOG MODE
ARCn: The bg process responsible for writing the Archived Redo Log Files

RECO: The bg process that helps in Recovery from Redo logs

ARCHIVE LOG MODE


CHECK IF THE DATABASE IS IN ARCHIVE LOG MODE OR NOT:
SQL> select log_mode from v$database;

FIND WHERE MY REDO LOG FILES ARE
SQL>select * from v$logfile;







July 4, 2011

The archive redo log basically contains CHANGE VECTORS.


BACKUPS


COLD
alter system switch logfile;
sho parameter spfile (to find the initialization file location)
sho parameter control (to find the control file location)

select file_name from dba_data_files; (to find the data files)

select file_name from dba_temp_files; (to find the temp file location)

select member from v$logfile; (to find the redo log file location)

shut immediate



SQL>alter system set log_archive_dest_1='LOCATION=/u99/smdb/archive';
$ mkdir -p smdb/archive
$ cd smdb
$ cd archive
$ pwd
/u99/smdb/archive

1)SPFILE
 /u01/app/oracle/product/10.2.0/db_1/dbs/spfilesmdb.ora

2)CONTROL FILE
 /u01/app/oracle/oradata/smdb/control01.ctl,
/u01/app/oracle/oradata/smdb/control02.ctl,
/u01/app/oracle/oradata/smdb/control03.ctl

3)DATA FILES
/u01/app/oracle/oradata/smdb/users01.dbf
/u01/app/oracle/oradata/smdb/sysaux01.dbf
/u01/app/oracle/oradata/smdb/undotbs01.dbf
/u01/app/oracle/oradata/smdb/system01.dbf
/u01/app/oracle/oradata/smdb/example01.dbf
/u02/oradata/smdb/DeVry01.dbf

4)TEMP FILES
/u01/app/oracle/oradata/smdb/temp01.dbf

5)REDO LOG FILES
/u01/app/oracle/oradata/smdb/redo03.log
/u01/app/oracle/oradata/smdb/redo02.log
/u01/app/oracle/oradata/smdb/redo01.log

6)ARCHIVED LOG FILES
sho parameter log_archive_dest

7)LOCATION OF DBA_DIRECTORIES
select directory_name, directory_path from dba_directories;

8)FIND LOCATION OF ALL DUMP FILES
adump, bdump, cdump, udump
select name, value from v$parameter where name like '%dump_dest';

9)alter system switch logfile

10)shutdown immediate

11)LISTENER PARAMETER FILE AND TNS FILES
cd $TNS_ADMIN or
cd $ORACLE_HOME/network/admin
and copy
listener.ora
tnsnames.ora
sqlnet.ora


SECTION II
LOCAL MACHINE IS 119:::::::::::::::::::::::::::::::::::::::::::REMOTE MACHINE IS 202

1) Copy all the important and optional files to a local destination and remote destination
Local destination is

$ mkdir -p /u99/$ORACLE_SID/backup/07-04-2011

SECTION III:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
$ cp $ORACLE_HOME/dbs/initsmdb.ora /u99/$ORACLE_SID/backup/07-04-2011

$ ssh 192.168.0.206 mkdir -p /u99/$ORACLE_SID/backup/07-04-2011 (as soon as the connection is established, create the directory on the remote machine)

$ scp -r /u99/$ORACLE_SID/backup/07-04-2011 192.168.0.206:/u99/$ORACLE_SID/backup/07-04-2011/.



July 9, 2011

Fractured Block

HOT BACKUP
1)      Get the location of all the files
2)      Put the TBS in Begin Backup mode
SQL>alter tablespace <tbs_name> begin backup; (for a particular TBS)
SQL>alter database begin backup; (for all tablespaces in one go)

3)      Copy the data files of the TBS that is in backup mode
4)      Put the TBS in End Backup mode
SQL>alter tablespace <tbs_name> end backup;
5)      Repeat the same for all TBS's (see the sketch below)
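
A minimal end-to-end sketch for one tablespace (paths hypothetical; v$backup shows which data files are currently in backup mode):

SQL>alter tablespace users begin backup;
SQL>!cp /u01/app/oracle/oradata/smdb/users01.dbf /u99/smdb/backup/
SQL>alter tablespace users end backup;
SQL>select file#, status from v$backup;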






RMAN

$ rman target /
Login as sysdba

RMAN>show all;

RMAN Types of Backups
Full
Incremental
Cumulative

RMAN>list backup;

RMAN> delete backup;

RMAN>backup database plus archivelog;


July 10, 2011

RMAN>backup database;

RMAN>backup database plus archivelog;

RMAN>list backup;

RMAN>backup archivelog all;

Will not prompt for user intervention
RMAN>delete noprompt archivelog all;

Delete forcefully from your repository
RMAN>delete force noprompt archivelog all;

Delete archive logs older than 48 hours, i.e., those older than 2 days
RMAN>delete archivelog until time 'sysdate - 2';

Back up all archive logs and delete them once they are backed up
RMAN>backup archivelog all delete input;

RMAN>delete archivelog all;

RMAN>crosscheck backup;

RMAN>delete expired backup;

Delete the backups if they are out of retention policy:
RMAN>delete obsolete;
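
delete obsolete judges backups against the configured retention policy, which can be inspected or changed first (a sketch):

RMAN>show retention policy;
RMAN>configure retention policy to redundancy 2;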

Back up those archived logs that have not been backed up even one time
RMAN>backup database plus archivelog not backed up 1 times;

Back up the database plus the archive logs not backed up 1 times, deleting the logs after backup.
RMAN>backup database plus archivelog not backed up 1 times delete all input;

BACKUP CONTROL FILE
RMAN>backup current controlfile;

BACKUP SPFILE
RMAN>backup spfile;

LIST THE BACKUP LOCATION OF CONTROL FILE
RMAN>list backup of controlfile;

LIST THE BACKUP LOCATION OF SPFILE
RMAN>list backup of spfile;

RMAN>list copy of controlfile;

GIVES THE SIZE AND MAPPED TBS DETAILS
RMAN>report schema;

CHECK IF BACKUPS ARE GOOD
RMAN>restore database validate;







Tuesday, July 5, 2011

ARCHIVE REDO LOGS

ARCHIVE LOG MODE

Archive log file (or Offline Redo log File)
Archiver bg process (ARC0)


If the database is in Archive Log mode, then at every Log Switch, along with the Checkpoint process, one more optional bg process, the ARCHIVER, is invoked; it copies the filled-up Redo Log File's contents to the separate location specified by the LOG_ARCHIVE_DEST parameter, in the form of an Archived (Offline) Redo Log File.

Multiplexing of Archived Redo Log Files is possible: up to 2 locations by specifying the parameter LOG_ARCHIVE_DUPLEX_DEST, or up to 10 locations with LOG_ARCHIVE_DEST_n (n = 1-10).

The naming convention for Archive Log Files is specified by parameter
LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
%t – Thread Number (Used in RAC)
%s – Log Sequence Number
%r – Reset Log ID

ENABLING or Converting Database in ARCHIVE LOG MODE:  

1) Shut down the Database
SQL>archive log list
SQL>shut immediate (must be a graceful shutdown)
SQL>exit

2) Edit the Parameter File (PFILE)
$ cd $ORACLE_HOME/dbs
$ vi init$ORACLE_SID.ora
  Log_archive_dest=/u99/smdb/archive #entry to be made in init$ORACLE_SID.ora file
     :wq!      #save the file

3)After saving the file create the physical directory
$ mkdir -p /u99/smdb/archive

4) Now log in to the Database as sysdba, in mount stage only (and not in open stage)
$sqlplus / as sysdba
SQL>startup mount
SQL>archive log list
or
SQL>select log_mode from v$database;

5) At this point archival is configured, but the Database will still be in NOARCHIVELOG MODE, so PUT THE DATABASE IN ARCHIVE LOG MODE
SQL>alter database archivelog;

6) NOW OPEN THE DATABASE
SQL>alter database open;

Now let's check whether the bg process that takes care of archive logs is active (we can either check at the $ prompt, or with ! run the same command at the SQL> prompt)

SQL>!ps -x

Yes, the bg processes arc0, arc1, etc. are the ones that take care of Archive logs.

It's time now to check whether we got any redo log files archived.
SQL>alter system switch logfile;

Check if the redo file is archived or not
$ cd /u99/smdb/archive
$ls

Hurray! I see the archive log files. Now I can say that I can crash the database of my company and my boss will not mind it AT ALL! You Doubt? Try Yourself and let me know the outcome please!!!!


Related Dictionary views:
V$ARCHIVED_LOG
V$ARCHIVE_DEST
V$ARCHIVE_PROCESSES