- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.10.0
- Fix Version/s: None
- Component/s: Platform
- Labels:
- Environment: docker kubernetes swarm
- Story Points: 1
When running ONOS under a container management system such as Kubernetes, where the IPs of the instances are not known a priori, it is important to be able to start with an empty cluster configuration. The number of partitions can be specified, as that is static, but the node list will initially be empty, for example:
{
  "nodes": [],
  "name": 3820012610,
  "partitions": [
    { "id": 1, "members": [] },
    { "id": 2, "members": [] },
    { "id": 3, "members": [] }
  ]
}
As ONOS instances are instantiated / terminated, the configuration will be updated, ensuring a 2N+1 node count and the same number of partitions.
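The bootstrap-then-grow flow above can be sketched as follows. Note that cluster_config is a hypothetical helper, not part of ONOS; a real deployment controller would also assign partition members so as to satisfy the 2N+1 sizing, rather than naively placing every node in every partition.

```python
# Hypothetical sketch of generating a cluster config for a dynamic node
# list, mirroring the JSON shape shown above. Partition count is fixed;
# the node list grows or shrinks as instances come and go.
import json

def cluster_config(name, node_ips, num_partitions=3, port=9876):
    """Build a cluster config dict for the given node IPs."""
    nodes = [{"ip": ip, "id": ip, "port": port} for ip in node_ips]
    partitions = [
        {"id": i + 1, "members": list(node_ips)}
        for i in range(num_partitions)
    ]
    return {"nodes": nodes, "name": name, "partitions": partitions}

# Empty bootstrap config, before any instance IPs are known:
print(json.dumps(cluster_config(3820012610, []), indent=2))

# Regenerated config once the first instance is up:
print(json.dumps(cluster_config(3733144811, ["10.40.0.3"]), indent=2))
```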
Today, simply put, this does not work. For example, once an instance is started with an empty config.json it essentially gets wedged and won't ever get to a happy place, even when the config is updated to a single-node cluster as below. There are lots of exceptions in the log, including java.lang.IllegalStateException: Unable to determine local ip.
{
  "nodes": [
    { "ip": "10.40.0.3", "id": "10.40.0.3", "port": 9876 }
  ],
  "name": 3733144811,
  "partitions": [
    { "id": 1, "members": [ "10.40.0.3" ] },
    { "id": 2, "members": [ "10.40.0.3" ] },
    { "id": 3, "members": [ "10.40.0.3" ] }
  ]
}
At this point the instance seems to stay in the ACTIVE state and never transitions to the READY state:
onos> nodes
id=10.40.0.3, address=10.40.0.3:9876, state=ACTIVE, updated=11s ago *
To reproduce, see: https://github.com/davidkbainbridge/k8s-playground/tree/onos-test/onos