Over the years, developers have been faced with countless new technologies and tools all aiming to refine a part of the web application development cycle. Recently though, we’ve seen the rise of a new architecture which seems to be changing the way people think about computing altogether.
Docker and Containers
The Docker platform and its ‘containerisation’ approach have recently seen an explosion in popularity, and are being championed by many as the new standard in cloud software management. In fact, according to a survey conducted by DataDog, 2 out of 3 companies that consider Docker go on to adopt it.
So what is it, and how is it shaking up the world of web development?
Docker provides a toolset that we can use to interact easily with Linux containers, a technology that had previously been reserved for those with extensive knowledge of its underlying principles and workings. In short, containers allow isolated ‘user-space’ environments to be created and run on top of the Linux kernel. This may seem like an alien concept, but it effectively means that we can provision and control several entirely separate Linux environments on a server without the need for virtual machines or any kind of dedicated hardware management. As long as you have the hardware resources to support everything you want to do in your containers, theoretically, there are no limits.
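As a concrete sketch of what this looks like in practice (assuming Docker is installed and the public `alpine` image is available; the container names here are purely illustrative):

```shell
# Start two isolated containers from the same image
docker run -d --name app-one alpine sleep 300
docker run -d --name app-two alpine sleep 300

# Each container has its own process tree, filesystem and hostname
docker exec app-one ps aux
docker exec app-two hostname

# Both share the host's kernel -- no hypervisor, no emulated hardware
docker ps
```

Each `docker run` gives you what looks like a fresh Linux machine in well under a second, because nothing is being booted: the kernel is shared, and only the isolated user-space is new.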
This new approach to managing Linux environments solves many of the long-standing problems faced by developers and engineers during the development process.
A standard environment
One of the biggest concerns during the development cycle is the plethora of issues that arise from differences between development and production environments. Traditionally, a great deal of time is spent conducting the tests necessary to ensure stability once the application is deployed. The use of containers for both development and production removes most of the uncertainty between different stages of the process, since the application’s environment effectively remains the same, wherever it is.
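For example, a minimal Dockerfile pins down the application's environment so that development, CI and production all run the same thing (the Node.js base image, port and entry point here are assumptions for illustration):

```dockerfile
# Illustrative image for a Node.js web app; versions and filenames are assumptions
FROM node:18-alpine

WORKDIR /app

# Copy dependency manifests first so installs are cached between builds
COPY package.json package-lock.json ./
RUN npm ci

# Copy the rest of the application source
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

The same image built from this file runs identically on a developer's laptop and on a production host, which is what removes the uncertainty between stages.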
Another advantage of containers is the speed at which a functioning environment can be created. While there is an initial investment of time to create the base images and build procedures, once these are in place a working copy of an application can be up and running within seconds, compared with the hours it would typically take to set up a bare-metal server or manage virtual machines. In terms of raw performance, containers are also the clear winner over virtual machines, since there is none of the extra overhead required by hypervisors and hardware emulation.
Perhaps the most obvious benefit of using containers is the almost limitless scalability they provide. From installing bare-metal servers, to managing resources with virtual machines and hypervisors, through to the huge choice of cloud-hosting providers we have today, it has never been easier to scale rapidly in response to usage. Containers are the next logical step, providing a standard which means you won't need to invest time becoming an expert in each cloud provider's way of doing things. Almost all major cloud computing companies now offer their own container management and hosting services, meaning you can leave them to worry about the hardware while you concentrate on your application.
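To give a feel for how cheap scaling becomes, here is a hedged sketch (it assumes an application already described by a Compose file with a service named `web`, and a running Docker daemon):

```shell
# Run five identical copies of the 'web' service behind Docker's networking
docker compose up -d --scale web=5

# Scaling back down is the same command with a smaller number
docker compose up -d --scale web=2
```

Whether those replicas land on one machine or many is then a question for the orchestrator, not for your application code.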
Monolith to Micro
Throughout the lifetime of an application, requirements can change and many new features may be required (read: will be required). Over this time, an application's code base is only going to get bigger and increasingly complex as all of these additions and changes accumulate.
Regardless of how tidy a project is and how well it's documented, large legacy code bases are simply harder to work with because there is so much going on. Historically, the convenience of the monolithic architecture has often been more attractive than the opposite approach of developing ‘micro-services’, due to the extra work required to orchestrate the separate parts. The arrival of Docker and its large ecosystem of orchestration and management tools means that a fully micro-service-based architecture is no longer as daunting a prospect as it once was, and those huge monolithic code bases needn't be the only option.
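A Compose file is one of the simplest ways to see that orchestration work disappear. The sketch below splits a hypothetical application into three small services; the service names, images and connection string are all assumptions for illustration:

```yaml
# Illustrative docker-compose.yml for a small micro-service stack
services:
  web:
    build: ./web            # front-end service, built from a local Dockerfile
    ports:
      - "80:3000"
    depends_on:
      - api
  api:
    build: ./api            # back-end API service
    environment:
      - DATABASE_URL=postgres://db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15      # stock database image from Docker Hub
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

One `docker compose up` starts the whole stack, with each service in its own container, which is exactly the kind of coordination that used to make micro-services feel like more trouble than they were worth.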