The Massachusetts Institute of Technology's Technology Review published a couple of interesting articles discussing the future of cloud computing, and how architectures may need to evolve to enable customers to fully exploit the potential of the technology.
New coding models
It was noted that the code used to control computers in the cloud is "surprisingly clunky", with programming languages that were not designed to handle multiple computers or dispersed data sources. While some software frameworks are available to help with this, Technology Review said that "there's room to make the process more efficient".
The issue is that much current software instructs a single computer to take certain actions in a certain order to complete a task. But the advantage of the cloud is that tasks can be split up, distributed across multiple computers and undertaken simultaneously -- something most programming languages struggle to express, and which results in bloated software.
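The contrast can be sketched in a few lines of Python (an illustration of the pattern, not code from the articles): a batch of independent work is split into chunks, each chunk is handed to a separate worker, and the partial results are combined at the end regardless of the order in which workers finish.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for any per-chunk computation a cloud worker might run.
    return sum(x * x for x in chunk)

def run_split(data, n_workers=4):
    # Split the input into roughly equal chunks, one per worker.
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Chunks are processed concurrently (threads stand in here for
    # separate machines); completion order is irrelevant because the
    # partial results are simply combined at the end.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(process_chunk, chunks))
```

The serial version would loop over `data` in a fixed order on one machine; the split version produces the same answer, but nothing in it depends on which chunk finishes first -- which is exactly the property conventional, order-dependent code fails to express.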
The University of California, Berkeley, is undertaking a project called BOOM, designed to develop new techniques for programming the cloud. The project's stated goal is "to allow the essence of a distributed system to be concisely specified in a programmer-friendly, declarative language", using a distributed protocol which is "easy to understand, reason about, and modify". The approach also leverages existing database technologies, which are already configured to support batch processing, with tasks that can be split and undertaken in any order.
Linking the clouds
Vinton Cerf, often described as "the father of the internet" and Chief Internet Evangelist at Google, argued that "we need to start developing interfaces so that clouds can communicate directly among themselves". Currently, cloud computing services can communicate seamlessly with users, but not with each other.
Cerf said that support for interoperable clouds would offer a number of advantages to end-users. Users may wish to transfer data from one service to another without having to download and re-upload it, or to store the same data in multiple clouds for backup. Some may wish to undertake coordinated computing across different clouds, which means accessing data from multiple clouds using protocols, data structures and formats that support this interaction.
The option proposed was a "network virtual cloud" which, while not physically existing, would offer a set of functional characteristics intended to be supported by all cloud providers. Interactions between clouds would occur through this virtual cloud, with each connecting cloud translating its internal method of organising and manipulating data into a common format recognised across the board.
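The translation idea can be sketched as a simple adapter pattern (all the class and field names below are invented for illustration; no real provider API is implied): each cloud supplies an adapter that converts between its internal record layout and one shared format, and clouds only ever exchange data in that shared format.

```python
class CloudAdapter:
    """Base adapter: each provider overrides the two translations."""
    def to_common(self, record):
        raise NotImplementedError
    def from_common(self, common):
        raise NotImplementedError

class CloudA(CloudAdapter):
    # Hypothetical provider storing records as dicts with its own keys.
    def to_common(self, record):
        return {"object_id": record["id"], "bytes": record["payload"],
                "metadata": record.get("tags", {})}
    def from_common(self, common):
        return {"id": common["object_id"], "payload": common["bytes"],
                "tags": common["metadata"]}

class CloudB(CloudAdapter):
    # A second hypothetical provider with a different internal layout.
    def to_common(self, record):
        name, data = record
        return {"object_id": name, "bytes": data, "metadata": {}}
    def from_common(self, common):
        return (common["object_id"], common["bytes"])

def transfer(record, source, target):
    # The "virtual cloud" in miniature: everything passes through the
    # common format, so source and target never need to know each
    # other's internal representation.
    return target.from_common(source.to_common(record))
```

With N providers, each writes two translations to and from the common format, rather than N-1 pairwise converters -- the usual argument for a hub format of this kind.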