A Definitive Guide to Containers for App Developers

May 22, 2019 • Shannon Flynn


As a developer, you want your applications to be accessible just about anywhere, and by anyone. That means creating software that not only works cross-platform but across many instances and devices.

Often, this is difficult to achieve even across systems running the same operating system. Windows, for example, has many different versions, and each application has to be optimized to work across those variants. That's before you account for other platforms such as Linux and OS X.

To get around this, you can opt to package your software in something called a container. Containers can be deployed across a variety of platforms and systems, and they include everything necessary to run the software, including supporting apps and services.

Containers are growing more popular in the development field for that reason. They’re super portable and lightweight, easy to use and don’t require any additional installation processes or dependencies — everything is packaged within.

What Is a Container?

A container — also referred to as a standardized unit of software — is essentially a complete packaging of all the necessary code and components to run an application. This is done specifically so the software can be run across various computing environments.

Inside the container are dependencies and libraries, software frameworks, multiple supporting apps and anything necessary to run a piece of software in a standalone form. Sometimes, it may even call for the inclusion of system tools, settings and runtime configurations.
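To make "everything necessary" concrete, here is a minimal, hypothetical Dockerfile. The file names, base image and start command are illustrative assumptions, not taken from any real project; the point is that the runtime, the dependencies, the configuration and the code are all declared in one place and baked into the package.

```dockerfile
# Hypothetical Dockerfile for a small Python web app.
FROM python:3.7-slim              # base layer: OS userland plus the Python runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependencies and libraries
COPY . .                              # the application code itself
ENV APP_ENV=production                # runtime configuration travels with the image
CMD ["python", "app.py"]              # how to start the software
```

Anyone who builds and runs this image gets the same runtime, the same library versions and the same configuration, regardless of what is installed on their host.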

Containers are available for Windows and Linux, with a select few that work with Mac OS X. In addition, containers can be used for cloud computing and data center setups, as well as serverless enterprise systems. The idea is that the software works the same anywhere, no matter what platform or environment it’s run from. That’s also why they’re often compared to virtual machines or virtualized systems.

In most cases, containers are incredibly lightweight and portable, and they also tend to be more secure than conventionally installed applications, provided their source is trustworthy.

Containers are exploding in the development community, popular because of their cross-platform nature and the added convenience they provide. With many enterprises turning to cloud computing systems, containers allow for usable software across all environments.

“Why doesn’t it work for them — it worked fine on my computer?”

“How come it doesn’t work in production? It worked fine on dev machines?”

Surely, you’ve heard comments like this at some point in your career. You understand the issue — differences in host machines and installations compared to local ones. Containers are one of a few different solutions for dealing with this sort of thing.

Several service providers specialize in container development and distribution, the most popular being Docker. In fact, Docker's open-source platform was one of the first to bring the idea of fully self-contained software to the mainstream. Container technology itself is older, however: Linux has supported containers for over a decade through LXC.

Why Containers?

Aside from the fact that containers are bundled with everything you need to run an application, they offer several other benefits.

  • Unlike virtual machines, containers virtualize at the OS level, which allows them to:
    • Remain more lightweight.
    • Share the host OS kernel.
    • Start and begin operating faster.
    • Use minimal memory and resources.
  • Bundled software can be pre-configured, so all users have access to the same systems, apps and tools right from the get-go, including version control to keep everyone on the same build.
  • They provide consistent, predictable environments for users to work with.
  • They can run anywhere, even across disparate systems.
  • They can be sandboxed like VMs for security and usability.
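Assuming you have Docker installed and its daemon running, the OS-level sharing described above is easy to observe from the command line. The commands below are an illustrative sketch, not part of the original article:

```shell
# A container shares the host kernel rather than booting its own OS,
# so it starts in roughly a second and reports the host's kernel version.
docker run --rm alpine uname -r

# Resource use can be capped per container, keeping footprints minimal.
docker run --rm --memory 64m alpine echo "running with a 64 MB memory cap"
```

Compare that to a virtual machine, which has to boot a full guest operating system before it can run anything.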

Another inherent benefit of containers is their level of modularity. Unlike virtual machines, which are incredibly complex, container-based applications can be broken down into smaller modules. This introduces more of a micro approach, where even the most complex software applications are broken down bit by bit into more manageable segments.

For example, a change to code made within a single module only ever deals with that particular segment. You don’t have to rebuild the entire application piece-by-piece just to update a small portion. This makes both active development and continued maintenance a much faster and more efficient process.
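Docker's layer caching illustrates this, assuming a Dockerfile that copies dependency manifests before the application code (the image name and file names here are hypothetical). Editing only the application code re-runs only the final layers; the dependency layers come from cache, so the rebuild takes seconds rather than minutes:

```shell
docker build -t myapp:dev .      # first build: every layer runs
echo "# small tweak" >> app.py   # change only application code
docker build -t myapp:dev .      # rebuild: cached layers are reused,
                                 # only the layers after COPY . . re-run
```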

Development then becomes about a microservice approach with minor adjustments and revisions as opposed to entire releases. With enterprise-level software applications and configurations, that’s incredibly important. Not only does it mean the platform in question can be ready to use faster, it means things stay more secure and functional even in the short term.

How Secure Are Containers?

It depends on who you ask, as some people believe containers are less secure than the average virtual machine. This is because of how containers are built and packaged: containers share the host kernel, so if the host kernel is compromised in any way, every container running on it is compromised too. That's a monumental security problem when a malicious image can be distributed to hundreds of machines, infecting each one.

Over the last few years, however, as containers have become more popular, much has been done to secure their use. For example, most modern container images are now signed, similar to driver packages and other installation kits, so images from untrusted sources can be rejected.
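In Docker, for instance, image signing is enforced through Docker Content Trust, which is switched on with a single environment variable (the image name below is just an example):

```shell
# With content trust enabled, the client refuses to pull or run
# images that are not signed by a publisher it trusts.
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest   # succeeds only if the image is signed
```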

Furthermore, most container providers, Docker included, scan packages for malicious code and vulnerabilities.

Other security tools, such as Twistlock, have been released specifically for dealing with containers. They scan each package for malicious code and also verify its source. They're built to work within the confines of a container, which is generally structured differently from a conventional application.

The reality is that unless you get containers and content from an unreliable source, they are quite secure.

Will Containers Replace Virtual Machines and Virtualization?

Simply put, no, it’s unlikely containers will ever replace full-virtualization options. However, they are already used alongside VMs as an alternative, and in many cases, they are the better option depending on the software itself.

While many organizations will never abandon their investments in virtualized infrastructure, containers can be made to work with many of the same systems. VMware, for example, encourages its customers to run containers within Photon OS.

Essentially, that means developers aren’t missing out by choosing to work with containers over some of the other options.

How Do Containers Work?

Containers are typically bundled into a package called an image, similar to a disc image. An image is a complete snapshot of the software, code and configuration that make up a container, and it can be built in various ways. As with any image, it can be distributed on a wide scale to ensure everyone runs the exact same setup or configuration.

This is important during the development process, where a long list of teams and individuals will work with and in an environment. Staging environments, maintenance crews, QA teams and more can all use the same pre-packaged image.

You can also deploy natively to containers, which makes them a natural fit for continuous integration. Every time a new branch is merged, a new image is compiled and distributed, including any new or updated code. It's a long-winded way of saying the state of the runtime environment you, or your teams, are working in essentially becomes the code for your application.
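A CI step like the one described above might look like this sketch, assuming Docker is available in the build environment; the registry hostname and tagging scheme are illustrative assumptions:

```shell
# Build an image embedding the code at the current commit,
# tagged with the short commit hash, then publish it so every
# environment pulls the exact same build.
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:"$TAG" .
docker push registry.example.com/myapp:"$TAG"
```

Tagging by commit hash is one common convention; it ties every running environment back to an exact state of the repository.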

The best way to make all this happen is to work with an existing container platform, Docker being the most obvious. Some alternatives include CoreOS rkt, Singularity, Nanobox, Apache Mesos, OpenVZ, containerd, LXC (Linux Containers) and FreeBSD Jails.

Whatever form of container you choose, you’re likely to find them more prevalent over the coming years. The portability, versatility and reliability of the technology make them alluring for a variety of configurations, not the least of which is enterprise-level computing platforms.