On containers, serverless technology and contained resources
Containers are not a particularly new technology, but they may be one that is finally about to come into its own.
Containers have been a part of the open source Linux world for many years, and what they do is provide a means of getting software to run reliably when it is moved from one computing environment to another. Container platforms such as Docker are used to isolate specific code, applications or processes, creating an insulated package for managing them and moving them across chosen hosts.
How do they work?
If you think of a virtual machine (VM) as essentially carving a traditional server up into multiple guest operating systems, containers instead run on top of the host operating system (OS), so unlike a VM they don't need a guest OS to boot up. Containers start in a fraction of a second. Given that the vast majority of things we do online today involve using an app based in the cloud, it seems like a progressive idea to have a mechanism that enables apps to run on any server or device, can be used and updated quickly, and also remains robust and stable.
Containers are also by definition far more lightweight – in a good sense – and agile than VMs. They require less memory to operate because they run under the host OS, sharing its kernel and particular operating system libraries. Ultimately what we are talking about with containers is serverless, resource-based computing. As container technology matures we are experiencing a paradigm shift, and it is very much about what comes next – this is the next stage in the evolution of virtualization.
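To make the distinction concrete, a container image packages an application together with just the libraries it needs, while relying on the host's kernel at runtime. Here is a minimal sketch of such a package as a Dockerfile, assuming a hypothetical Python web app (`app.py` and `requirements.txt` are illustrative names, not from any real project):

```dockerfile
# Build on a slim base image rather than a full guest OS –
# the container shares the host's kernel at runtime.
FROM python:3-slim

WORKDIR /app

# Copy in only the application and its declared dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The container launches this single process directly; there is
# no operating system boot, which is why startup takes a
# fraction of a second rather than the minutes a VM can need.
CMD ["python", "app.py"]
```

Built with `docker build` and started with `docker run`, the resulting container holds nothing but the app and its libraries – the lightweight footprint described above.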
Useful for IoT, Big Data and everything in between
The statistics and projections around the Internet of Things (IoT) continue to be nothing short of mindblowing. Forecasts include 26 billion connected objects by 2020. It is predicted to be a $6.2 trillion industry by 2025. As many as 94 percent of businesses say that they have already seen a return on investment on their machine-to-machine (M2M) spend. The IoT is here to stay, but it needs help to deliver on this expected growth.
Much in the way that ten years ago virtualization came along and changed the server environment as we know it forever, laying down the path to cloud computing and cloud-enabled services, today containers are helping create a whole new ecosystem. On the back of the rise of DevOps, containers have complemented the demands of the IoT for customized software applications, designed specifically as independently deployable services. The use of containers has grown and they have made their way into the mainstream, to help enable the IoT in the Big Data era.
To give an example use case for containers in IoT, consider wearable technology: there are many such devices in use now, and there are set to be many more. Each morning, when people start using their wearables, the devices need to be authenticated. Traditionally authentication meant a lot of scripting or a big, long-running service, but now a small container can carry out the authentication far more quickly and conveniently. Verizon, for example, spins up 50,000 containers in a few seconds and then spins them back down again. This really contributes towards conserving resources. Concerning containers and Big Data, the sheer weight of data traveling over networks today needs careful management; when using Big Data software such as Hadoop to manage very large amounts of data, deploying containers can help enterprises control resources while also controlling costs.
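The resource-control point can be illustrated with Docker's own command-line flags, which map onto the kernel's cgroup limits. This is a hedged sketch, assuming Docker is installed on the host; the image name `auth-service` is purely illustrative:

```shell
# Cap the container at 256 MB of RAM and a reduced CPU share, so a
# burst of authentication traffic cannot starve other workloads.
docker run -d --name auth-1 --memory=256m --cpu-shares=512 auth-service

# Containers start in a fraction of a second, so many instances
# can be spun up to absorb the morning authentication peak...
for i in $(seq 2 10); do
  docker run -d --name "auth-$i" --memory=256m --cpu-shares=512 auth-service
done

# ...and torn down again once the peak has passed, releasing
# the memory and CPU back to the host.
docker rm -f $(docker ps -q --filter "name=auth-")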
Containers create new possibilities for Big Data apps by enabling organizations to make real-time changes to analytics apps, move applications around dynamically, and carry out data analysis more quickly. Essentially, containers bring the benefits of virtualization to Big Data apps while keeping data management as simple as possible.
How secure are containers?
This has been one of the ongoing concerns around containers – particularly because of the shared operating system: how secure can they possibly be? Containers were historically not as secure as VMs, but the technology is maturing and containers are now far more secure and reliable. Any vulnerability in the kernel could potentially create an access point to every container sharing it. In fairness, the same is true of a hypervisor, but because hypervisors by design offer less functionality than kernels, the potential attack surface is far smaller.
That said, I already know of one bank that is using containers to power its mobile banking service, so the case for container security seems to be being made. If a financial institution that is obsessed with security is utilizing containers in a production environment, I am sure others in the industry will start thinking about adopting them fast.
The future of virtualization
It is my belief that containers are here to stay and are likely to at least supersede virtual machines as we know them – if not replace them altogether, just as VMs themselves did to physical machines last time around.
Thanks to the rise of Big Data and the IoT, we are constantly looking for new ways to do the computing we need to do better, more quickly, more cheaply, and more securely. Containers let us use a far higher percentage of the CPU, and this improved utilization lowers costs and delivers greater ROI. They also help us deploy services at scale, better manage inter-dependencies, and even assist with hybrid cloud deployment strategies.
Want to know more about Docker and containerization? Read this paper by Orange Applications for Business and Orange Silicon Valley.
October 5, 2015