Last week we restarted our production server. Afterwards we were unable to start the ASM instance; every startup attempt failed with the following error:
ORA-29701: Unable to connect to Cluster Manager.
The reason was that the Oracle CSSD daemon process was not running.
Workaround:
Log in to the server as the root user.
Step 1: Go to the /etc directory.
Step 2: Execute the following command.
# init.cssd start
startup will be queued to init within 30 seconds
Step 3: Wait for 2 minutes and execute the following commands
# cd $ORACLE_HOME/bin
# ./localconfig delete
Stopping CSSD
Failure in CSS initialization opening OCR.
Shutdown has begun. The daemons should exit soon.
Step 4: Execute the following command
# ./localconfig add
Successfully accumulated necessary OCR keys.
Creating OCR keys for user root, privgrp 'system'
Operation Successful
Configuration for local css has been initialized.
Adding to inittab
Startup will be queued to init within 30 seconds
Checking the status of new oracle init process..
Expecting the CRS daemons to be up within 600 seconds
CSS is active on these nodes
nodelist
CSS is active on all nodes
Oracle CSS service is installed and running under init(1M)
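Before starting ASM in the next step, it can help to confirm that the CSSD daemon really is running. A minimal sketch, assuming the daemon's process name contains ocssd (as it does on most platforms):

```shell
# Sketch: report whether a CSSD process is present.
# The bracketed pattern [o]cssd keeps grep from matching its own command line.
if ps -ef | grep '[o]cssd' > /dev/null; then
  CSSD_STATE="running"
else
  CSSD_STATE="not running"
fi
echo "CSSD is $CSSD_STATE"
```

On 10g you can also run crsctl check css from the Clusterware home for a built-in health check.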
Step 5: Start the ASM instance now.
Thanks
Wednesday, June 29, 2011
ORA-29701: Unable to connect to Cluster Manager
Labels:
Errors
Friday, June 3, 2011
Convert Archivelog Mode to NoArchivelog Mode in RAC
Step 1: Set the Environment Variable
$ hostname
node1
$ export ORACLE_SID=gcprod1
$ sqlplus system@gcprod1
password:
Step 2: Disable Cluster
SQL> alter system set CLUSTER_DATABASE=FALSE scope=spfile;
System altered.
Step 3: Set the parameters (Optional)
LOG_ARCHIVE_START is deprecated from Oracle 10g onward, so this step is only needed on 9i.
SQL> alter system set LOG_ARCHIVE_START=FALSE scope=spfile;
System altered.
Step 4: Shutdown the database
SQL> exit
$ srvctl stop database -d gcprod
Step 5: Mount the database
$ echo $ORACLE_SID
gcprod1
$ sqlplus "sys as sysdba"
SQL> startup mount
ORACLE instance started
Total System Global Area: 4.4678E+10 bytes
Fixed Size 2166864 bytes
Variable Size 4345302960 bytes
Database mounted.
Step 6: Disable Archive log
SQL> alter database noarchivelog;
Step 7: Enable the cluster
SQL> alter system set CLUSTER_DATABASE=TRUE scope=spfile;
System altered.
Step 8: Shutdown the instance
SQL> shutdown immediate;
Step 9: Start all the instances
$ srvctl start database -d gcprod
Step 10: Verify the status.
$ crs_stat -t
All instances must be ONLINE.
$ sqlplus "sys as sysdba"
SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 308
Next log sequence to archive 309
Current log sequence 309
The database is now in NOARCHIVELOG mode.
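If you script conversions like this, the log mode can be parsed out of the "archive log list" output shown above. A small sketch, using an inlined copy of that output for illustration (in practice you would capture it from SQL*Plus):

```shell
# Sketch: pull the log mode out of captured "archive log list" output.
listing='Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST'

# Columns are separated by runs of spaces, so split on two or more spaces.
MODE=$(printf '%s\n' "$listing" | awk -F'  +' '/Database log mode/ {print $2}')
echo "$MODE"
```

A cleaner alternative is to query the database directly: select log_mode from v$database; returns NOARCHIVELOG or ARCHIVELOG as a single value.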
Labels:
Backup,
Configuration
Configure Archivelog in RAC Database
Steps to convert the database from NOARCHIVELOG mode to ARCHIVELOG mode.
Database name: gcprod
Instance 1: gcprod1
Instance 2: gcprod2
Step 1: Set the Environment Variable
$ hostname
node1
$ export ORACLE_SID=gcprod1
$ sqlplus system@gcprod1
password:
Step 2: Disable Cluster
SQL> alter system set CLUSTER_DATABASE=FALSE scope=spfile;
System altered.
Step 3: Set the parameters
SQL> alter system set LOG_ARCHIVE_FORMAT='gcprod_%t_%s_%r.arc' scope=spfile;
System altered.
Step 4: Shutdown the database
SQL> exit
$ srvctl stop database -d gcprod
Step 5: Mount the database
$ echo $ORACLE_SID
gcprod1
$ sqlplus "sys as sysdba"
SQL> startup mount
ORACLE instance started
Total System Global Area: 4.4678E+10 bytes
Fixed Size 2166864 bytes
Variable Size 4345302960 bytes
Database Buffers 4.0316E+10 bytes
Redo Buffers 14606336 bytes
Database mounted.
Step 6: Enable Archive log
SQL> alter database archivelog;
Step 7: Enable the cluster
SQL> alter system set CLUSTER_DATABASE=TRUE scope=spfile;
System altered.
Step 8: Shutdown the instance
SQL> shutdown immediate;
Step 9: Start all the instances
$ srvctl start database -d gcprod
Step 10: Verify the status.
$ crs_stat -t
All instances must be ONLINE.
$ sqlplus "sys as sysdba"
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 308
Next log sequence to archive 309
Current log sequence 309
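The LOG_ARCHIVE_FORMAT string set in step 3 uses %t for the redo thread, %s for the log sequence number, and %r for the resetlogs id. With made-up values, the archived log file names it produces look like this:

```shell
# Hypothetical values; each instance substitutes its own thread and sequence.
THREAD=1
SEQ=309
RESETLOGS_ID=712345678
ARCNAME="gcprod_${THREAD}_${SEQ}_${RESETLOGS_ID}.arc"
echo "$ARCNAME"
```

Including %t matters in RAC: each instance archives its own redo thread, so without it two instances could produce colliding file names.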
Labels:
Backup,
Configuration
Wednesday, June 1, 2011
Change VIP address
You need to enter the virtual IP address and virtual hostname during Clusterware installation. This information is stored in the OCR, and various components within RAC depend on these VIPs.
If for any reason you need to change the VIP, you can do so using the ifconfig and srvctl utilities.
The following steps change the VIP address.
Step 1: Confirm the current IP address for the VIP
$ ifconfig -a
Step 2: Stop all the resources that are dependent on the VIP on that particular node.
$ srvctl stop instance -d DB -i db1
$ srvctl stop asm -n node1
$ su - root
# srvctl stop nodeapps -n node1
Step 3: Verify that the VIP is no longer running.
$ ifconfig -a
The VIP interface should no longer appear in the output. If it is still listed, some component that depends on the VIP is still running; stop those resources before continuing.
Step 4: Update the /etc/hosts file
Edit the /etc/hosts file with the new IP address and hostname (update DNS as well if the VIP is registered there).
Step 5: Modify the nodeapps and provide the new VIP address using srvctl.
$ su - root
# srvctl modify nodeapps -n node1 -A 192.168.4.41/255.255.255.0/eth0
where 192.168.4.41 is the new IP address, 255.255.255.0 is the subnet mask,
and eth0 is the interface you want your VIP to use.
Step 6: Start the nodeapps again
# srvctl start nodeapps -n node1
Step 7: Repeat steps 1-6 on the other nodes.
Step 8: Modify the IP address in the tnsnames.ora and listener.ora files.
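The -A argument in step 5 packs the address, netmask, and interface into a single ip/netmask/interface token. A sketch of building it from variables (using the example values from step 5):

```shell
# Example values from step 5; substitute your own VIP, netmask, and interface.
NEW_VIP=192.168.4.41
NETMASK=255.255.255.0
IFACE=eth0
ADDR="${NEW_VIP}/${NETMASK}/${IFACE}"
echo "srvctl modify nodeapps -n node1 -A $ADDR"
```

Keeping the pieces in variables makes it easy to repeat the command per node when you run steps 1-6 on the remaining nodes.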
Thanks
Labels:
Clusterware