Open source software is currently witnessing a renaissance. For every major platform out there, there is at least one open source alternative. Heroku made Platform-as-a-Service (PaaS) popular and is still one of the best PaaS providers out there with hundreds of integrations to choose from. But Heroku can cost a lot more than you expect as your application grows in scale. Some popular OSS alternatives are compared below:
Dokku presents itself as a “Docker powered mini-Heroku.” It is the smallest and simplest of the options reviewed here. After the initial installation it offers a web-based setup, or it can be configured in unattended mode, which suits deployment scripts and CI/CD. Heroku-compatible applications are pushed to it via Git and built using Heroku buildpacks. The main downside is that Dokku is limited to a single host. While that works for small applications like side projects, the lack of horizontal scalability makes it unsuitable for larger ones. A single server is also a single point of failure, so Dokku cannot provide high availability. This is where Flynn comes in.
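The Git-based workflow looks roughly like the sketch below. The host name `dokku.example.com` and the app name `myapp` are placeholders, and the final push is commented out because it needs a live Dokku server:

```shell
# Sketch: deploying a Heroku-compatible app to a (hypothetical) Dokku host.
# Create a minimal app with a Procfile declaring its web process.
git init myapp
cd myapp
echo "web: gunicorn app:app" > Procfile
git add Procfile
git -c user.name="demo" -c user.email="demo@example.com" commit -m "add Procfile"

# Point a Git remote at the Dokku server; pushing to it triggers a buildpack build.
git remote add dokku dokku@dokku.example.com:myapp
git remote get-url dokku
# git push dokku master   # deploys the app (requires a real Dokku host)
```

Because the build happens server-side from the pushed code, deploying a new version is just another `git push`.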
Large applications are increasingly moving from monolithic designs towards a microservices architecture. Services are split by functionality and often form part of processing pipelines; there are even services for other services: for example, analytics from various sources can be combined by an aggregator service. Microservices let developers scale parts of an application independently of one another according to load. Dokku is unsuitable for this kind of application, which requires a platform that supports multi-node cluster deployments.
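In the Heroku model that all three platforms follow, an application declares its separately scalable pieces as process types in a Procfile. A hypothetical app with a web front end and a background worker (the commands are assumptions for illustration) might declare:

```
web: gunicorn app:app
worker: python worker.py
```

Each process type can then be scaled independently with the platform's scale command, along the lines of `dokku ps:scale web=3 worker=1`.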
Flynn was built with high availability and scalability in mind. It can run on a single server or scale out to multiple nodes. Like Dokku, Flynn follows the Heroku model: applications are deployed using Git and built with buildpacks. Flynn’s own components run inside the cluster as highly available Flynn apps. Flynn also includes built-in databases that run on the cluster: PostgreSQL, MongoDB, MySQL, or Redis can be initialized with a single click, and console access to their CLI clients is provided. A web-based dashboard is available for monitoring and administering the cluster, and it also shows aggregated logs from all the nodes. HTTP, HTTPS, and TCP load balancing is built in and configured automatically. Flynn provides overlay networking for scaling applications and built-in service discovery, so you do not need to run a tool such as Consul yourself. Both Flynn and Dokku support Twelve-factor applications, making Dokku applications compatible with Flynn. Flynn can be extended with external plugins and is built on an Ubuntu base. All in all, Flynn is a solid platform for developing scalable applications in a Heroku-like environment on your own servers, and it can be seen as a successor to Dokku.
Deis Workflow is built on top of battle-tested Kubernetes. It adds an easy-to-use layer on top of a Kubernetes cluster to simplify application deployments. Of the three platforms discussed here, Deis Workflow is the most popular for large and complex applications. The platform is delivered as a set of Kubernetes microservices, and platform services and applications run in separate namespaces to isolate the workloads. Deis Workflow can deploy new versions of an application without downtime using its services and replication controllers. As with Dokku and Flynn, it supports deployment via Git, and it can be controlled via the CLI client or the built-in REST API. It also includes an edge router to enforce firewall policy in the cluster. All code pushes, config changes, and scaling events are tracked, and Deis Workflow makes it easy to roll back to any previous version with a simple API call. Additional workloads not managed by Workflow can be integrated using the underlying Kubernetes service discovery. Using Deis Workflow requires initializing a Kubernetes cluster, which makes it a little less beginner-friendly, although Google Cloud Platform, Amazon AWS, and Azure Container Services all provide easy-to-use managed Kubernetes set-ups. Once the cluster has been initialized, Workflow can be installed with the Helm package manager. Deis Workflow also follows the Twelve-factor principles. It either uses a buildpack or builds a new Docker image if a Dockerfile is found. Workflow is built for scalability and runs on a CoreOS base; for cluster coordination it relies on fleet and etcd (which uses the Raft consensus protocol) from the underlying CoreOS. Logs and metrics can be drained to any supported sink, and Workflow provides built-in alerting based on predefined thresholds. Alerts can be sent to a Slack channel, PagerDuty, a custom webhook, or via email.
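As noted, Workflow falls back to a Docker build when the pushed repository contains a Dockerfile instead of relying on a buildpack. A minimal, hypothetical Dockerfile for a Python web process could look like this (the app module, dependencies, and port are assumptions for illustration):

```dockerfile
# Hypothetical Dockerfile for a simple Python web app.
FROM python:3-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
```

With a file like this in the repository root, pushing the app to Workflow builds and deploys this image rather than running a buildpack.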
Other notable mentions are PaaS options such as Cloud Foundry BOSH and OpenStack Solum. Choosing the one that fits your application best can be a strenuous task. The best approach is to try each platform, starting with Dokku, the simplest. If the application requires more flexibility and scalability, it is worth moving on to Flynn at the cost of some added complexity. DevOps engineers who are already familiar with Kubernetes will find the transition to Deis Workflow very smooth.