Oracle Tutorials

In this blog you can learn database, networking, and cloud skills that will help you understand Oracle Database as well as non-Oracle databases, along with other key skills.

Clone an Oracle Database using the control file


Cloning an Oracle database with a new database name using the control file

Source Database Name: DG731124 
Source Database Version: 11.2.0.4 Standard Edition
Source Datafile location: /u03/app/oracle/oradata/DG731124/
Source Logfile location: /u03/app/oracle/oradata/DG731124/
Target location of datafile and logfile: /u04/testdir/
Target Database name: TCLONE

SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u03/app/oracle/oradata/DG731124/system01.dbf
/u03/app/oracle/oradata/DG731124/sysaux01.dbf
/u03/app/oracle/oradata/DG731124/undotbs01.dbf
/u03/app/oracle/oradata/DG731124/users01.dbf
/u03/app/oracle/oradata/DG731124/test011.dbf

SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
/u03/app/oracle/oradata/DG731124/redo03.log
/u03/app/oracle/oradata/DG731124/redo02.log
/u03/app/oracle/oradata/DG731124/redo01.log
Create a pfile for the clone from the source spfile:
SQL> create pfile='/tmp/initTCLONE.ora' from spfile;

Add a tablespace, a user, and tables to verify the cloning later.
Connect to the source database and create tablespace test01:
SQL> create tablespace test01 datafile '/u03/app/oracle/oradata/DG731124/test011.dbf' size 15M;
Add user 'u11' to the database:
SQL> create user u11 identified by ******;
SQL> grant connect, resource, create session to u11;
SQL> alter user u11 quota unlimited on test01;
SQL> conn u11
Now the user can create database objects.
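For example, the DEPT and EMP tables listed below could be created like this (a minimal sketch; the column definitions are illustrative assumptions):
SQL> create table dept (deptno number(2) primary key, dname varchar2(14)) tablespace test01;
SQL> create table emp (empno number(4) primary key, ename varchar2(10), deptno number(2) references dept) tablespace test01;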
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP                            TABLE
Reconnect as SYSDBA (u11 does not have the required privileges) and switch the log file:
SQL> conn / as sysdba
SQL> alter system switch logfile;
Write the control file creation script to a trace file:
SQL> alter database backup controlfile to trace;
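The generated trace file can be located from the same session before shutting down (a quick helper query; 11g and later):
SQL> select value from v$diag_info where name = 'Default Trace File';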
SQL> shut immediate;
Go to the trace file location and copy the most recently created trace file to control.sql:

cd /u03/app/oracle/diag/rdbms/dg731124/DG731124/trace
ls -lrt
cp DG731124_ora_12252.trc  control.sql 

Remove the unwanted lines from control.sql, keeping only information like this:
--------------------------------------------------------------------------------------------------------
CREATE CONTROLFILE SET DATABASE "TCLONE" RESETLOGS  NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u04/testdir/redo01.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/u04/testdir/redo02.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/u04/testdir/redo03.log'  SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/u04/testdir/system01.dbf',
  '/u04/testdir/sysaux01.dbf',
  '/u04/testdir/undotbs01.dbf',
  '/u04/testdir/users01.dbf',
  '/u04/testdir/test011.dbf'
CHARACTER SET AL32UTF8
;
RECOVER DATABASE;
-- Database can now be opened normally.
--ALTER DATABASE OPEN;
--ALTER TABLESPACE TEMP ADD TEMPFILE '/u04/testdir/temp01.dbf'
--     SIZE 30408704  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
----------------------------------------------------------------------------------------------------------
1- Use the SET option in CREATE CONTROLFILE to set the new name of the database.
2- Replace the old paths of the datafiles and logfiles with the new location.
3- Edit initTCLONE.ora, replace the paths, and create the directories described in the pfile.
4- Shut down the source database.
5- Copy the datafiles, logfiles, control file script, and pfile from the source to the target location '/u04/testdir/' (see the sketch below).
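A minimal sketch of step 5 using the paths from this example (adjust to your layout; initTCLONE.ora was created in /tmp and control.sql in the trace directory above):
# mkdir -p /u04/testdir
# cp /u03/app/oracle/oradata/DG731124/*.dbf /u04/testdir/
# cp /u03/app/oracle/oradata/DG731124/*.log /u04/testdir/
# cp /tmp/initTCLONE.ora /u04/testdir/
# cp /u03/app/oracle/diag/rdbms/dg731124/DG731124/trace/control.sql /u04/testdir/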
-------------------------------------------------------------------------------------------------------------
Export ORACLE_SID for the new database instance:
# export ORACLE_SID=TCLONE
Start the database in NOMOUNT state using the pfile:
SQL> startup nomount pfile='/u04/testdir/initTCLONE.ora';
Run the control file script:
SQL> @/u04/testdir/control.sql
Control file created.

ORA-00283: recovery session canceled due to errors
ORA-01610: recovery using the BACKUP CONTROLFILE option must be done.

SQL> recover database using backup controlfile until cancel;
ORA-00279: change 1258136 generated at 03/29/2019 10:47:01 needed for thread 1
ORA-00289: suggestion :
/u04/testdir/fast_recovery_area/TCLONE/archivelog/2019_03_29/o1_mf_1_9_%u_.arc
ORA-00280: change 1258136 for thread 1 is in sequence #9

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u04/testdir/redo03.log
Log applied.
Media recovery complete.

Note: Apply the online redo log files (redo01/redo02/redo03) one at a time until media recovery completes.
SQL> alter database open resetlogs;
Database altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u04/testdir/temp01.dbf' SIZE 30408704  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
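Verify that the tempfile was added (a quick check):
SQL> select name from v$tempfile;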

The clone database TCLONE has now been created in the new location.

SQL> select name from v$database;
NAME
---------
TCLONE

SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u04/testdir/system01.dbf
/u04/testdir/sysaux01.dbf
/u04/testdir/undotbs01.dbf
/u04/testdir/users01.dbf
/u04/testdir/test011.dbf

SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
/u04/testdir/redo03.log
/u04/testdir/redo02.log
/u04/testdir/redo01.log

SQL> conn u11;
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP                            TABLE


Installation of MongoDB using yum on Oracle Linux 6


Installing MongoDB using yum on Oracle Linux 6

Reference: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/

1) Configure the package management system (yum)
Create a /etc/yum.repos.d/mongodb-org-4.0.repo file so that you can install MongoDB directly using yum:

# vi /etc/yum.repos.d/mongodb-org-4.0.repo

Save the following content:

[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

2) Install the MongoDB packages
# yum install -y mongodb-org
or, to install a specific release of MongoDB, specify each component package individually:
# yum install -y mongodb-org-4.0.5 mongodb-org-server-4.0.5 mongodb-org-shell-4.0.5 mongodb-org-mongos-4.0.5 mongodb-org-tools-4.0.5
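To pin the installed version and prevent unintended upgrades, the referenced MongoDB documentation suggests adding an exclude directive to /etc/yum.conf:
exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools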

Note:  By default, MongoDB runs using the mongod user account and uses the following default directories:

/var/lib/mongo (the data directory)
/var/log/mongodb (the log directory)

[root@dgstby ~]# ls -lrt /var/log/mongodb/mongod.log
-rw-r----- 1 mongod mongod 4429 Jan 31 15:20 /var/log/mongodb/mongod.log

Start MongoDB:
# service mongod start
Verify that MongoDB has started successfully by checking the contents of the log file at /var/log/mongodb/mongod.log for the line:
[initandlisten] waiting for connections on port <port>
where <port> is the port configured in /etc/mongod.conf, 27017 by default.
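For example (a quick check):
# grep "waiting for connections" /var/log/mongodb/mongod.log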

You can optionally ensure that MongoDB will start following a system reboot by issuing the following command:
# chkconfig mongod on
Stop the MongoDB process:
# service mongod stop
Restart the mongod process:
# service mongod restart
Begin using MongoDB:
# mongo
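A quick smoke test in the mongo shell (a minimal sketch; the database and collection names are hypothetical):
> use testdb
> db.demo.insertOne({ msg: "hello" })
> db.demo.find()
> exit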

Useful Tasks in Linux

Changing the time zone
# cd /etc/
# ls -lrt localtime*
# ln -sf /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

Configure NTP: save the server entry in /etc/ntp.conf:
# vi /etc/ntp.conf
server <ip>
# service ntpd restart
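Confirm that the new time zone took effect:
# date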
Create an NFS share in Linux
On the server, create the directory and export it:
# mkdir /u05
# vi /etc/exports
Add this entry:
/u05  *(rw,sync)
# exportfs -a    # re-export after editing /etc/exports
On the client, create a /u05 directory and mount the share:
# mkdir /u05
# mount -t nfs 10.10.10.XX:/u05 /u05/
To mount it automatically at boot, add an entry in the client's /etc/fstab:
10.10.10.74:/u05         /u05                   nfs    defaults         0 0
# mount -a
Unzip a file to a different location in Linux
Syntax:
unzip {.zip-file-name} -d {/path/to/extract}
For example, to extract p11212_linux_x86_64.zip into /u01/app/oracle/patches/, enter:

# unzip p11212_linux_x86_64.zip -d /u01/app/oracle/patches/

Adding swap space in Linux

Adding 2 GB of swap space

[root@dgprim ~]# dd if=/dev/zero of=/u05/swapfile1 count=1M bs=2048
1048576+0 records in
1048576+0 records out
2147483648 bytes (2.1 GB) copied, 4.00654 s, 536 MB/s
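Restrict permissions on the swap file before enabling it (recommended; mkswap warns about world-readable swap files):
[root@dgprim ~]# chmod 600 /u05/swapfile1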

[root@dgprim ~]# mkswap -c /u05/swapfile1

Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=f4094cb1-136b-4491-b13c-074d8b77e323

[root@dgprim ~]# swapon /u05/swapfile1


[root@dgprim ~]# swapon -s

Filename                   Type            Size    Used    Priority
/u05/swapfile1             file            2097148 0       -1

Make the swap file persistent across reboots:
# vi /etc/fstab
Add this line:
/u05/swapfile1          swap          swap    defaults        0 0

Check the swap size:
[root@dgprim ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         11667       3725       7941          1        135       2811
-/+ buffers/cache:        778      10889
Swap:         2047          0       2047
Note: the swap size has now been increased by 2 GB.

Add Logical Partition to root in Linux


Add a logical partition to the root file system

Old size of root: 8 GB
Goal: add 30 GB to root

[root@rac-node2 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      6.5G  3.8G  2.4G  62% /
tmpfs                 4.8G     0  4.8G   0% /dev/shm
/dev/xvda1            477M   68M  380M  16% /boot
/dev/xvdb1             50G   52M   47G   1% /u01


1: Add a disk of 30 GB


[root@rac-node2 ~]# fdisk -l


Disk /dev/xvdc: 32.2 GB, 32212254720 bytes

255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

2: Run fdisk on /dev/xvdc


[root@rac-node2 ~]# fdisk /dev/xvdc


Command (m for help): n  [TYPE  n]

Command action
   e   extended
   p   primary partition (1-4)
      [TYPE  p]
Partition number (1-4): 1  [TYPE  1]
First cylinder (1-3916, default 1):  [ENTER]
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-3916, default 3916):  [ENTER]
Using default value 3916

Command (m for help): t  [TYPE  t]

Selected partition 1
Hex code (type L to list codes): 8e   [TYPE  8e]
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w  [TYPE  w]

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

3: Verify the new partition:
[root@rac-node2 ~]# fdisk -l

Disk /dev/xvdc: 32.2 GB, 32212254720 bytes

255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x54aca0fe

    Device Boot      Start         End      Blocks   Id  System

/dev/xvdc1               1        3916    31455238+  8e  Linux LVM

4: Create an LVM physical volume on the new partition:
[root@rac-node2 ~]# pvcreate /dev/xvdc1
  Physical volume "/dev/xvdc1" successfully created

5: Check the existing volume group:
[root@rac-node2 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               7.51 GiB
  PE Size               4.00 MiB
  Total PE              1922
  Alloc PE / Size       1922 / 7.51 GiB
  Free  PE / Size       0 / 0
  VG UUID               Jb01NO-YpG9-jU7D-VBRr-vlOG-xWTY-TdhZKw

6: Extend the volume group with the new physical volume:
[root@rac-node2 ~]# vgextend VolGroup /dev/xvdc1
  Volume group "VolGroup" successfully extended

7: Confirm the free space now available in the volume group:
[root@rac-node2 ~]# pvscan
  PV /dev/xvda2   VG VolGroup   lvm2 [7.51 GiB / 0    free]
  PV /dev/xvdc1   VG VolGroup   lvm2 [30.00 GiB / 30.00 GiB free]
  Total: 2 [37.50 GiB] / in use: 2 [37.50 GiB] / in no VG: 0 [0   ]

8: Identify the LV path of the root logical volume:
[root@rac-node2 ~]# lvdisplay

  --- Logical volume ---
  LV Path                /dev/VolGroup/lv_root    [TAKE LV PATH]
  LV Name                lv_root
  VG Name                VolGroup
  LV UUID                m30nCz-swe1-VzRY-nfzD-mlbo-lZd9-e2ngDb
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-01-14 01:14:40 -0500
  LV Status              available
  # open                 1
  LV Size                6.71 GiB
  Current LE             1718
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---

  LV Path                /dev/VolGroup/lv_swap
  LV Name                lv_swap
  VG Name                VolGroup
  LV UUID                Leqw34-2ZiP-ohmc-CgvV-s6Qq-qrkF-iUkGdo
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-01-14 01:14:47 -0500
  LV Status              available
  # open                 2
  LV Size                816.00 MiB
  Current LE             204
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

9: Extend the root logical volume onto the new physical volume:

[root@rac-node2 ~]# lvextend /dev/VolGroup/lv_root /dev/xvdc1
  Size of logical volume VolGroup/lv_root changed from 6.71 GiB (1718 extents) to 36.71 GiB (9397 extents).
  Logical volume lv_root successfully resized

10: Resize the file system online:

[root@rac-node2 ~]# resize2fs /dev/VolGroup/lv_root
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/VolGroup/lv_root is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/VolGroup/lv_root is now 9622528 blocks long.

Verify the new size of root:
[root@rac-node2 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       37G  3.8G   31G  11% /
tmpfs                 4.8G     0  4.8G   0% /dev/shm
/dev/xvda1            477M   68M  380M  16% /boot
/dev/xvdb1             50G   52M   47G   1% /u01
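Note: this walkthrough assumes an ext3/ext4 root file system (hence resize2fs); on an XFS root you would grow the file system with xfs_growfs instead. Newer LVM releases can also combine steps 9 and 10 in one command with the --resizefs flag, for example:
# lvextend -r /dev/VolGroup/lv_root /dev/xvdc1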

Installation and Uninstallation of PostgreSQL Database Cluster on Linux


+++++++++++++++++++++++++++++++++
Installation PostgreSQL Database Cluster
+++++++++++++++++++++++++++++++++
1. Check that the yum repository configuration is in place:
# cd /etc/yum.repos.d
2. Check internet connectivity:
# ping www.google.com
3. Installation through yum
3.1: Go to the following link and follow the instructions:
https://www.postgresql.org/download/linux/redhat/
4. Run the following commands:
# service postgresql-9.6 initdb     # initialize the PostgreSQL 9.6 database
# chkconfig postgresql-9.6 on       # autostart PostgreSQL at boot
# service postgresql-9.6 start      # start the PostgreSQL cluster
5. Verify the services:
# ps -ef | grep postmaster
# ps -ef | grep postgres    # these show the running processes
6. Connect to the PostgreSQL cluster database and list the databases:
# su - postgres
$ psql
postgres=# \l
7. Stop the database cluster:
# service postgresql-9.6 stop
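A quick smoke test after installation (a minimal sketch; the database name testdb is hypothetical):
$ psql
postgres=# create database testdb;
postgres=# \c testdb
testdb=# create table t (id int);
testdb=# insert into t values (1);
testdb=# select * from t;
testdb=# \q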
++++++++++++++++++++++++++++++++++
Uninstallation of  PostgreSQL Database Cluster
++++++++++++++++++++++++++++++++++
1. Start the PostgreSQL database cluster:
# service postgresql-9.6 start
2. Connect to the PostgreSQL cluster and get the data directory location by running the following query:
# su - postgres
$ psql 
postgres=# 
postgres=# select s.name, s.setting, s.short_desc from pg_settings s where s.name='data_directory';
3. Connect to the root user:
$ su - root
4. Stop the running services:
# service postgresql-9.6 stop
5. Verify that the services are stopped:
# ps -ef | grep postmaster
# ps -ef | grep postgres
6. Remove the data directory (this example shows /var/lib/pgsql/11/data; use the path returned by the query above):
# cd /var/lib/pgsql/
# rm -rf 11
7. Remove the user:
# userdel postgres
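Optionally, remove the installed packages as well (a hedged sketch; the package names assume the PGDG 9.6 packages installed above):
# yum remove postgresql96*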