Wednesday, June 1, 2011
Changing the VIP Address in Oracle RAC
You need to enter the virtual IP (VIP) address and virtual hostname during a Clusterware installation. This information is stored in the OCR, and several components within RAC depend on these VIPs.
If for any reason you want to change a VIP, you can do so using the ifconfig and srvctl utilities.
The following steps change the VIP address on one node.
Step 1: Confirm the current IP address for the VIP
$ ifconfig -a
Step 2: Stop all the resources that are dependent on the VIP on that particular node.
$ srvctl stop instance -d DB -i db1
$ srvctl stop asm -n node1
$ su - root
# srvctl stop nodeapps -n node1
Step 3: Verify that the VIP is no longer running.
$ ifconfig -a
The VIP interface should no longer appear in the output. If it is still listed, some component that depends on the VIP is still running; stop those resources before continuing.
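If you are not sure which resources are still online, you can check their state with crs_stat (assuming 10g/11.1-style resource names such as ora.<nodename>.vip):
$ crs_stat -t
$ crs_stat ora.node1.vip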
Step 4: Change /etc/hosts file
Edit the /etc/hosts file, replacing the old VIP address with the new IP address for the VIP hostname.
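For example, the updated entry for the VIP hostname might look like this (the address and hostnames here are illustrative):
192.168.4.41    node1-vip.example.com    node1-vip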
Step 5: Modify the nodeapps and provide the new VIP address using srvctl.
$ su - root
# srvctl modify nodeapps -n node1 -A 192.168.4.41/255.255.255.0/eth0
where 192.168.4.41 is the new IP address, 255.255.255.0 is the subnet mask, and eth0 is the interface you want the VIP to use.
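Before restarting anything, you can confirm that the new address was recorded in the OCR; the -a option prints the VIP configuration:
# srvctl config nodeapps -n node1 -a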
Step 6: Start the nodeapps again
# srvctl start nodeapps -n node1
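Then restart the resources that were stopped in step 2, using the same example database and instance names:
# exit
$ srvctl start asm -n node1
$ srvctl start instance -d DB -i db1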
Step 7: Repeat steps 1-6 on the other available nodes.
Step 8: Update the IP address in the tnsnames.ora and listener.ora files wherever the old VIP appears.
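For example, a connect descriptor that pointed at the old VIP address should now reference the new one (or, better, the VIP hostname); the alias and service name below are illustrative:
DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB))
  )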
Thanks
Monday, May 23, 2011
Cluster Verification Utility CLUVFY
The CLUVFY utility is distributed with Oracle Clusterware. It assists in the installation and configuration of Oracle Clusterware as well as RAC, and helps verify that all the components required for a successful Clusterware and RAC installation are installed and configured correctly.
The CLUVFY commands are divided into two categories:
1. Stage Commands
2. Component Commands
Stage Commands:
There are various phases during a Clusterware or RAC deployment, for example hardware and software configuration, CRS installation, RAC software installation, and database creation. Each of these phases is called a stage. Each stage requires a set of prerequisite conditions to be met before entering the stage (pre-check) and another set of conditions to be met after the completion of that stage (post-check).
Both pre-check and post-check verification can be done using CLUVFY; the commands that perform these checks are called stage commands. To list the available stages, use the following command:
$ cd $ORA_CRS_HOME/bin
$ cluvfy stage -list
post hwos - post check for hardware and operating system
pre cfs - pre check for CFS (optional)
post cfs - post check for CFS (optional)
pre crsinst - pre check for Clusterware installation
post crsinst - post check for Clusterware installation
pre dbinst - pre check for database installation
pre dbcfg - pre check for database configuration
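For example, to run the pre-check for a Clusterware installation on two nodes (node names are illustrative), with verbose output:
$ cluvfy stage -pre crsinst -n node1,node2 -verbose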
Component Commands:
The commands in this category verify the correctness of individual cluster components and are not associated with any stage. List the available components with the following command:
$ cd $ORA_CRS_HOME/bin
$ cluvfy comp -list
nodereach - checks reachability between nodes
nodecon - checks node connectivity
cfs - checks cfs integrity
ssa - checks shared storage accessibility
space - checks space availability
sys - checks minimum system requirements
clu - checks cluster integrity
clumgr - checks cluster manager integrity
ocr - checks ocr integrity
nodeapp - checks existence of node applications
admprv - checks administrative privileges
peer - compares properties with peers
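For example, to check node connectivity across all cluster nodes, or to verify the administrative privileges required for a database installation (node names are illustrative):
$ cluvfy comp nodecon -n all -verbose
$ cluvfy comp admprv -n node1,node2 -o db_inst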
Thanks
Friday, May 20, 2011
Clusterware Log Files
In this post we will see where Oracle Clusterware stores its component log files; these files are the primary source of diagnostic information for problem analysis.
All Clusterware log files are stored under the $ORA_CRS_HOME/log/ directory.
1. alert<hostname>.log : Important Clusterware alerts are stored in this log file. It is located at $ORA_CRS_HOME/log/<hostname>/alert<hostname>.log.
2. crsd.log : CRS logs are stored in $ORA_CRS_HOME/log/<hostname>/crsd/ directory. The crsd.log file is archived every 10MB as crsd.101, crsd.102 ...
3. cssd.log : CSS logs are stored in $ORA_CRS_HOME/log/<hostname>/cssd/ directory. The cssd.log file is archived every 20MB as cssd.101, cssd.102....
4. evmd.log : EVM logs are stored in $ORA_CRS_HOME/log/<hostname>/evmd/ directory.
5. OCR logs : OCR logs (ocrdump, ocrconfig, ocrcheck) log files are stored in $ORA_CRS_HOME/log/<hostname>/client/ directory.
6. SRVCTL logs: srvctl logs are stored in two locations, $ORA_CRS_HOME/log/<hostname>/client/ and in $ORACLE_HOME/log/<hostname>/client/ directories.
7. RACG logs : The high availability trace files are stored in two locations
$ORA_CRS_HOME/log/<hostname>/racg/ and in $ORACLE_HOME/log/<hostname>/racg/ directories.
RACG contains log files for node applications such as VIP and ONS.
Each RACG executable has a sub directory assigned exclusively for that executable.
racgeut : $ORA_CRS_HOME/log/<hostname>/racg/racgeut/
racgevtf : $ORA_CRS_HOME/log/<hostname>/racg/racgevtf/
racgmain : $ORA_CRS_HOME/log/<hostname>/racg/racgmain/
racgeut : $ORACLE_HOME/log/<hostname>/racg/racgeut/
racgmain: $ORACLE_HOME/log/<hostname>/racg/racgmain/
racgmdb : $ORACLE_HOME/log/<hostname>/racg/racgmdb/
racgimon: $ORACLE_HOME/log/<hostname>/racg/racgimon/
In that last directory, imon_<service>.log is archived every 10MB for each service.
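When troubleshooting, a quick way to see which of these logs were written most recently is to sort them by modification time; a minimal sketch, assuming the directory name matches the output of hostname:
$ cd $ORA_CRS_HOME/log/`hostname`
$ ls -lt crsd cssd evmd | head
$ tail -50 alert`hostname`.log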
Thanks
Tuesday, May 17, 2011
Diagcollection.pl
diagcollection.pl is a script used to collect diagnostic information from a Clusterware installation. The collected information helps Oracle Support diagnose and resolve problems.
Invoking the diagcollection script
Step 1: Log in as root
Step 2: Set up the following environment variables
# export ORACLE_BASE=/..../
# export ORACLE_HOME=/..../
# export ORA_CRS_HOME=/.../
Step 3: Run the script
# cd $ORA_CRS_HOME/bin
# ./diagcollection.pl -collect
The script generates the following files in the current directory:
basData_.tar.gz (contains logfiles from ORACLE_BASE/admin)
crsData_.tar.gz (logs from $ORA_CRS_HOME/log/)
ocrData_.tar.gz (results of ocrcheck, ocrdump and ocr backups)
oraData_.tar.gz (logs from $ORACLE_HOME/log/)
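Before sending the archives to Oracle Support, you can list the contents of each tarball without extracting it (assuming one archive per run, so the wildcard matches a single file):
# tar tzf crsData_*.tar.gz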
To collect only a subset of the log files, you can invoke it as follows:
# ./diagcollection.pl -collect -crs (CRS log files)
# ./diagcollection.pl -collect -oh (ORACLE_HOME logfiles)
# ./diagcollection.pl -collect -ob (ORACLE_BASE logfiles)
# ./diagcollection.pl -collect -all (default)
To clean out the files generated by the last run:
# ./diagcollection.pl -clean
To extract only the core files found in the generated files and store them in a text file:
# ./diagcollection.pl -coreanalyze
Thanks