APPLIES TO:

Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Oracle Database Cloud Exadata Service - Version N/A and later
Oracle Database Cloud Service - Version N/A and later
Information in this document applies to any platform.

SYMPTOMS

While trying to start the clusterware, ASM would not start. The CRS alert<hostname>.log contains errors such as:


2017-06-25 22:23:30.928:
[/u01/app/11.2.0/grid_1/bin/oraagent.bin(6363)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0/grid_1/log/<node>/agent/ohasd/oraagent_grid/oraagent_grid.log"
2017-06-25 22:24:24.997:
[/u01/app/11.2.0/grid_1/bin/oraagent.bin(6363)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0/grid_1/log/<node>/agent/ohasd/oraagent_grid/oraagent_grid.log"
2017-06-25 22:24:30.183:
[/u01/app/11.2.0/grid_1/bin/oraagent.bin(6363)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0/grid_1/log/<node>/agent/ohasd/oraagent_grid/oraagent_grid.log"
2017-06-25 22:24:35.596:
[/u01/app/11.2.0/grid_1/bin/oraagent.bin(6363)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0/grid_1/log/<node>/agent/ohasd/oraagent_grid/oraagent_grid.log"


When trying to start the instance manually using SQL*Plus, the following is displayed:


$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 26 08:30:34 2017

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:if_not_found failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpvaddr9
ORA-27303: additional information: requested interface 172.xx.xx.107 not found. Check output from ifconfig command
SQL>
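The ORA-27303 text itself points at the diagnostic: check whether the requested address is actually plumbed on any interface. A minimal sketch of that check against a saved `ifconfig -a` capture (the helper name, file path, and sample addresses below are illustrative, not from this note):

```shell
#!/bin/sh
# Report whether a given IP appears as an "inet" address in a captured
# `ifconfig -a` output file. (Helper name and paths are illustrative.)
check_ip_present() {
    ip="$1"; capture="$2"   # capture = file holding saved `ifconfig -a` output
    if grep -qwF "inet $ip" "$capture"; then
        echo "present"
    else
        echo "missing"
    fi
}

# Example usage on the affected node, with the address from ORA-27303:
#   ifconfig -a > /tmp/ifconfig.out
#   check_ip_present "172.xx.xx.107" /tmp/ifconfig.out
```

If this reports "missing" for the address named in ORA-27303, the instance is asking for an interconnect address the OS no longer has, which is exactly the situation described in CAUSE below.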




CHANGES

 Due to hardware problems, the server was replaced, but the previous spfile configuration for ASM was kept.

CAUSE

 The ASM instance spfile, as displayed in the alert_+ASM3.log, contained the following setting for cluster_interconnects:



/file: alert_+ASM3.log


Using parameter settings in server-side spfile +DATA/<cluster>/asmparameterfile/registry.253.947555425 >>>>>>>>>> spfile
System parameters with non-default values:
sga_max_size = 272M
large_pool_size = 12M
instance_type = "asm"
cluster_interconnects = "172.xx.xx.107" >>>>>>>>>>>>>> private ip address defined in spfile
sga_target = 0
memory_target = 1G
memory_max_target = 1G
remote_login_passwordfile= "EXCLUSIVE"


 
The ifconfig -a output from the node where ASM3 is supposed to be running did not show any network interface with the IP address 172.xx.xx.107, but HAIP was already up and plumbed on the private interface:
 


$ ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ipmp0: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 4
inet 172.xx.xx.63 netmask ffffff00 broadcast 172.xx.11.255
groupname ipmp0
ipmp1: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 2
inet 172.xx.xx.113 netmask ffffff00 broadcast 172.xx.18.255
groupname ipmp1
ipmp1:1: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 2                            >>>>>>>>>>>>>>> HAIP is running
inet 169.xxx.xx.107 netmask ffff0000 broadcast 169.xxx.255.255
net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
inet 0.0.0.0 netmask ff000000 broadcast 0.xxx.255.255
groupname ipmp0
net1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask ff000000 broadcast 0.xxx.255.255
groupname ipmp1
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
ipmp0: flags=28002000840<RUNNING,MULTICAST,IPv6,IPMP> mtu 1500 index 4
inet6 ::/0
groupname ipmp0
ipmp1: flags=28002000840<RUNNING,MULTICAST,IPv6,IPMP> mtu 1500 index 2
inet6 ::/0
groupname ipmp1
net0: flags=20002000841<UP,RUNNING,MULTICAST,IPv6> mtu 1500 index 5
inet6 ::/0
groupname ipmp0
net1: flags=20002000841<UP,RUNNING,MULTICAST,IPv6> mtu 1500 index 3
inet6 ::/0
groupname ipmp1

 The /etc/hosts file contained a different entry for this node's private interface (172.xx.xx.113), not the old IP address 172.xx.xx.107:


$ cat /etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost loghost
..
#Private
172.xx.xx.106 <node1>-priv <node1>-priv.<domain>
172.xx.xx.107 <node2>-priv <node2>-priv.<domain>
172.xx.xx.113 <node3>-priv <node3>-priv.<domain>                  >>>>>>>>>>>>>>> HERE
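The mismatch above can be confirmed mechanically: the address hard-coded in the spfile is no longer one of the private entries in /etc/hosts. A small sketch of that comparison (the function name and sample entries are illustrative, not from this note):

```shell
#!/bin/sh
# Given an IP taken from the spfile's cluster_interconnects setting and a
# hosts file, report whether that address is still listed there.
# (Function name and sample data are illustrative.)
priv_ip_in_hosts() {
    ip="$1"; hosts="$2"
    # Drop comment lines, take the address column, look for an exact match.
    grep -v '^#' "$hosts" | awk '{print $1}' | grep -qxF "$ip" \
        && echo "found" || echo "stale"
}

# Example usage with the address from the spfile:
#   priv_ip_in_hosts "172.xx.xx.107" /etc/hosts
```

A "stale" result for the spfile address, combined with a "found" result for the node's current private address, matches the situation in this note: the parameter survived the server change but the address did not.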

 
 

SOLUTION

1) Remove the following entry from the spfile:

*.cluster_interconnects='172.xx.xx.107'

by issuing:

SQL> alter system reset cluster_interconnects SCOPE=SPFILE SID='*';

System altered.

SQL>

2) Restart ASM.
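After the restart, the instance should fall back to the HAIP address on the private interface; HAIP addresses come from the link-local 169.254.0.0/16 range, which matches the ipmp1:1 address shown in the ifconfig output above. A quick sketch of a post-fix check against the instance alert log (the helper name and sample log lines are illustrative, not from this note):

```shell
#!/bin/sh
# Report whether a file (e.g. an alert log excerpt) mentions a link-local
# HAIP address (169.254.x.x) for the interconnect, rather than a fixed IP.
# (Helper name is illustrative.)
uses_haip() {
    logfile="$1"
    if grep -Eq '169\.254\.[0-9]+\.[0-9]+' "$logfile"; then
        echo "HAIP"
    else
        echo "static"
    fi
}

# Example usage after restarting ASM:
#   uses_haip /u01/app/11.2.0/grid_1/log/diag/.../alert_+ASM3.log
```

Seeing "HAIP" here confirms the instance is no longer pinned to the stale address that was removed from the spfile.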



 
