Deployment topologies for High Availability (HA)


Page 1: Deployment topologies for high availability (ha)

Deployment topologies for High Availability (HA) with OpenStack

Page 2: Deployment topologies for high availability (ha)

Types of nodes

• An OpenStack deployment needs to contain at least three types of nodes, plus an optional volume node

– Endpoint node

– Controller node

– Compute node

– Cinder volume node (optional)
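The node roles above, together with the redundancy minimums described on the following slides, can be summarized in a small sketch (the role names and counts mirror this deck; the helper function is illustrative):

```python
# Sketch of node roles in an HA OpenStack deployment.
# Minimum counts follow the redundancy guidance in this deck:
# at least two endpoint nodes and two controller nodes.
MIN_NODES = {
    "endpoint": 2,       # load balancing / high availability services
    "controller": 2,     # queue server, state database, Horizon
    "compute": 1,        # hypervisor and instances; scale out as needed
    "cinder-volume": 0,  # optional; only if the cinder-volume service is used
}

def is_ha(deployment: dict) -> bool:
    """Return True if every role meets its minimum node count."""
    return all(deployment.get(role, 0) >= n for role, n in MIN_NODES.items())
```

For example, `is_ha({"endpoint": 1, "controller": 2, "compute": 4})` is false, because a single endpoint node is a single point of failure.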

Page 3: Deployment topologies for high availability (ha)

• Endpoint node: This node runs load balancing and high availability services that may include load-balancing software and clustering applications. A dedicated load-balancing network appliance can serve as an endpoint node. A cluster should have at least two endpoint nodes configured for redundancy.

• Controller node: This node hosts communication services that support operation of the whole cloud, including the queue server, state database, Horizon dashboard, and possibly a monitoring system. It can optionally host the nova-scheduler service and the API servers load balanced by the endpoint node. At least two controller nodes must exist in a cluster to provide redundancy. The controller node and endpoint node can be combined in a single physical server, but doing so requires reconfiguring the nova services so that they do not occupy the ports used by the load balancer.
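Since the endpoint node's load balancer fronts the API servers on the controller nodes, the arrangement can be sketched as a minimal HAProxy fragment. All hostnames and addresses here are assumptions; 8774 is the default nova-api port:

```haproxy
# Hypothetical haproxy.cfg fragment on an endpoint node:
# balance nova-api traffic across two controller nodes.
frontend nova-api
    bind 192.168.0.10:8774          # virtual IP clients connect to (assumed)
    default_backend nova-api-nodes

backend nova-api-nodes
    balance roundrobin              # even traffic distribution
    server controller1 192.168.0.11:8774 check
    server controller2 192.168.0.12:8774 check
```

The `check` keyword makes HAProxy health-check each backend, so a failed controller is taken out of rotation automatically.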

Page 4: Deployment topologies for high availability (ha)

• Compute node: This node hosts a hypervisor and virtual instances, and provides compute resources to them. The compute node can also serve as the network controller for instances it hosts, if a multihost network scheme is in use. It can also host non-demanding internal OpenStack services, like scheduler, glance-api, etc.

• Cinder Volume node: This is used if you want to use the Cinder-volume service. This node hosts the cinder-volume service and also serves as an iSCSI target.

Page 5: Deployment topologies for high availability (ha)

• The endpoint node typically hosts the load-balancing software or appliance, providing even traffic distribution to OpenStack components and high availability. The controller and compute nodes can be set up in many different ways, ranging from “fat” controller nodes which host all the OpenStack internal daemons to layouts where compute nodes take on some of OpenStack’s internal processing by hosting API services and scheduler instances.

Page 6: Deployment topologies for high availability (ha)

Topology with a hardware load balancer

• In this deployment variation, the hardware load balancer appliance is used to provide a connection endpoint to OpenStack services

• API servers and instances of nova-scheduler are deployed on the compute nodes

• glance-registry instances and Horizon are deployed on controller nodes.
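The service placement this slide describes can be captured in a small sketch (the service lists follow the deck; exact names per node are illustrative):

```python
# Placement in the hardware-load-balancer topology, per this deck:
# API servers and nova-scheduler run on compute nodes;
# glance-registry and Horizon run on controller nodes.
PLACEMENT = {
    "compute": ["nova-api", "nova-scheduler", "nova-compute"],
    "controller": ["glance-registry", "horizon", "database", "message-queue"],
    "endpoint": ["hardware-load-balancer"],
}

def nodes_hosting(service: str) -> list[str]:
    """Return the node types that host a given service."""
    return [node for node, svcs in PLACEMENT.items() if service in svcs]
```

For instance, `nodes_hosting("nova-scheduler")` returns `["compute"]` in this topology, whereas in the dedicated-endpoint topology described later the API services move to the controllers.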

Page 7: Deployment topologies for high availability (ha)

• All the native Nova components are stateless web services; this allows you to scale them by adding more instances to the pool, so they can safely be distributed across a farm of compute nodes.

• The database and message queue server can be deployed on both controller nodes in a clustered fashion.
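Statelessness is what makes this safe: no request depends on which instance served the previous one, so the load balancer only needs a simple rotation. A minimal sketch of the idea (the instance addresses are hypothetical):

```python
from itertools import cycle

# Toy round-robin pool: because the Nova API services are stateless,
# any instance can serve any request, so plain rotation is safe and
# adding capacity is just adding another entry to the pool.
class RoundRobinPool:
    def __init__(self, instances):
        self._ring = cycle(instances)

    def pick(self) -> str:
        """Return the next instance to receive a request."""
        return next(self._ring)

pool = RoundRobinPool(["compute1:8774", "compute2:8774", "compute3:8774"])
```

A stateful service (like the database) cannot be scaled this way, which is why it is clustered on the controllers instead.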

Page 9: Deployment topologies for high availability (ha)

Topology with a dedicated endpoint node

• In this deployment configuration, we replace a hardware load balancer with an endpoint host that provides traffic distribution to a farm of services.

• Another major difference compared to the previous architecture is the placement of API services on controller nodes instead of compute nodes.

• Essentially, controller nodes have become “fatter” while compute nodes are “thinner.”

Page 10: Deployment topologies for high availability (ha)

• Also, both controllers operate in active/standby fashion.

• Controller node failure conditions can be identified with tools such as Pacemaker and Corosync/Heartbeat.

Page 12: Deployment topologies for high availability (ha)

Topology with simple controller redundancy

• In this deployment, endpoint nodes are combined with controller nodes, and the API services and nova-scheduler instances are also deployed on the controller nodes.

• The controller tier can be scaled by adding nodes and reconfiguring HAProxy.

• Two instances of HAProxy are deployed to assure high availability; detection of failures and promotion of a given HAProxy from standby to active can be done with tools such as Pacemaker and Corosync/Heartbeat.

Page 14: Deployment topologies for high availability (ha)

Many ways to distribute services