OpenAperture

Cloud Application Management Platform

OpenAperture is a free, open-source hybrid cloud management platform that delivers software quickly and consistently regardless of location or workload. This future-ready platform from Lexmark Enterprise Software provides a comprehensive management system to handle the six pillars of cloud management – provisioning, deployment, monitoring, maintenance, security and metering.


Deployable System Component Overview


Manager
    Responsible for the REST HTTP endpoints (Phoenix framework) and the database connection. This component requires an exposed port for the HTTP server (Cowboy).

Overseer
    Responsible for providing system-health updates about the Etcd Clusters and the various System Components. The Overseer continually monitors the health of the cluster, specifically looking for the following:
      • Heartbeat messages from all components in the assigned Exchange
      • Instances of Fleet units that are in a failed state
      • Fleet units that have no active instances
      • Hosts whose /var/lib/docker directory is above 80% utilization

Notifications
    Responsible for posting HipChat and email notifications for all System Components.

WorkflowOrchestrator
    Responsible for coordinating Workflow activities (docker builds, Fleet deployments, etc.) across all system components and exchanges.

Builder
    Responsible for updating deployment repositories and executing docker builds. This component must have direct access to a second Etcd Cluster that is used for executing remote docker builds.

Deployer
    Responsible for executing Fleet deployments on application-specific Etcd Clusters. This component must have direct (Fleet HTTP) access to the Etcd Cluster that will host the application.

Deployer OA
    The same component as Deployer, except that it listens on a separate channel designated specifically for upgrading OpenAperture System Components.

Fleet Manager
    Responsible for executing Fleet commands in remote Etcd Clusters. This component must have direct (Fleet HTTP & SSH) access to the Etcd Cluster being queried.
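The Overseer's disk check described above can be approximated with a small script. A minimal sketch in Python follows; the 80% threshold and /var/lib/docker path come from the list above, while the function names are illustrative:

```python
import shutil

DOCKER_DIR = "/var/lib/docker"   # directory the Overseer watches
THRESHOLD = 0.80                 # alert above 80% utilization

def utilization(total, used):
    """Fraction of the filesystem currently in use."""
    return used / total

def over_threshold(total, used, threshold=THRESHOLD):
    """True when utilization exceeds the alerting threshold."""
    return utilization(total, used) > threshold

if __name__ == "__main__":
    # On a host running the docker daemon this would inspect DOCKER_DIR;
    # the root filesystem is used here purely for illustration.
    du = shutil.disk_usage("/")
    print(over_threshold(du.total, du.used))
```

A real Overseer check would run this periodically on each host and raise a notification when it returns True.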

System Component Assemblies


For the components to communicate correctly, groups of System Components (called Assemblies) must run in the same Exchange. Multiple Assemblies may run in the same Exchange.

To deploy an assembly into a distributed environment, the primary Deployment assembly must have Fleet access to the deployment cluster. After the initial deployment, a backup Deployment assembly can run in the distributed environment to handle self-upgrades.

A RabbitMQ server should be created for each distributed assembly and (ideally) added to the master Federated RabbitMQ mesh.
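As a sketch of the messaging model, a component could announce itself on its assembly's Exchange with a heartbeat message roughly like the one below. The payload field names, exchange name, and routing key are assumptions for illustration, not OpenAperture's actual wire format:

```python
import json
import time

def heartbeat(component, exchange):
    """Build a heartbeat message body; the schema is hypothetical."""
    return json.dumps({
        "component": component,   # e.g. "Overseer"
        "exchange": exchange,     # the assembly's assigned Exchange
        "timestamp": int(time.time()),
    })

# Publishing would go through an AMQP client connected to the assembly's
# RabbitMQ server, e.g. with the pika library (connection details assumed):
#
#   channel.basic_publish(exchange="openaperture",
#                         routing_key="heartbeat",
#                         body=heartbeat("Overseer", "exchange-1"))
```

The Overseer listens for messages of this kind from every component in its assigned Exchange, which is why all members of an assembly must share one Exchange.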


Management Assembly

The Management Assembly hosts the System Components responsible for running the RESTful interface (the only external interface into the system) and for coordinating messages between exchanges.

There should be only one primary Management Assembly. A failover assembly should point to the same database (or a failover database instance).

This cluster is critical because it performs docker host cluster Exchange resolution (for pending builds) and deployment cluster Exchange resolution (for deploys).


Build Assembly

The Build Assembly hosts the System Components responsible for updating Deployment configuration (i.e. GitHub pulls/pushes) and executing docker builds.

Note that the Builder assumes the docker build slave cluster is directly accessible (i.e. reachable via DOCKER_HOST).
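Dispatching a build to a remote docker host boils down to pointing DOCKER_HOST at the slave before invoking docker. A hedged sketch (the endpoint address, image tag, and helper name are placeholders):

```python
import os

def remote_build_cmd(docker_host, tag, context="."):
    """Return the argv and environment for running `docker build`
    against a remote daemon reachable at docker_host."""
    env = dict(os.environ, DOCKER_HOST=docker_host)
    argv = ["docker", "build", "-t", tag, context]
    return argv, env

# Executing the build would hand argv/env to subprocess.run; the
# endpoint below is purely illustrative:
#
#   argv, env = remote_build_cmd("tcp://10.0.0.5:2375", "myorg/myapp:latest")
#   subprocess.run(argv, env=env, check=True)
```

The direct-access requirement above exists precisely because this DOCKER_HOST endpoint must be reachable from wherever the Builder runs.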


Deployment Assembly

The Deployment Assembly hosts the System Components responsible for executing Fleet deployments on application-specific Etcd Clusters.

Note that the Deployer assumes the deployment cluster is directly accessible (i.e. via Fleet commands).
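Fleet exposes an HTTP API, so listing the units on a deployment cluster could look like the sketch below. The cluster address is a placeholder; /fleet/v1/units is Fleet's v1 units resource:

```python
import json
import urllib.request

def fleet_units_url(endpoint):
    """URL of the Fleet v1 units listing for a cluster endpoint."""
    return endpoint.rstrip("/") + "/fleet/v1/units"

def list_units(endpoint):
    """Fetch the unit list; requires direct HTTP access to the cluster."""
    with urllib.request.urlopen(fleet_units_url(endpoint)) as resp:
        return json.load(resp)

# Example (the host and port are illustrative):
#
#   units = list_units("http://10.0.0.10:49153")
```

This is the "direct (Fleet HTTP) access" requirement in practice: the Deployer must be able to reach this endpoint on the cluster that will host the application.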