the future of networking, at the speed of light

What's the point of playing 2,800 videos at the same time? It's all about demonstrating what networking (and IT) will look like over the next few years, and this is no small change. Let's dig into this forthcoming revolution in the way we will surf the Web, entertain ourselves and do business, with Soumik Sinharoy, who manages the "low latency systems project" at Orange Silicon Valley in downtown San Francisco.

[Soumik Sinharoy, Orange Silicon Valley, photo cc, 2012 by Orange http://live.orange.com]

latency: a thorn in the side of IT

If you think that the world has sped up recently, bringing more globalisation and the ability to access data and systems everywhere at any time (aka Cloud Computing), you haven't seen anything yet. One thing has plagued the world of real-time IT for years on end: "latency". Latency is essentially the delay added to data transmission over a network, caused by the distance and the time it takes to convey that data from one end of the network to the other. It may sound trivial, but it isn't, because there is one thing that network engineers have never really been able to get rid of: distance.

data replication: a necessary evil

With TCP/IP, the widely known and used Internet protocol, the results have been underwhelming. "Latency over fibre with TCP/IP is rated at 5 µs per kilometre," Sinharoy explains, "which means that you are losing 1 ms every 200 km." This isn't much of a problem if you replicate data as in the old days (i.e. copying vast amounts of information across networks in order to reduce latency and serve it as close to the end user as possible); but it is a real issue over long distances within continents, and even more so across continents. Now, with the advent of Cloud Computing and the rise of real-time database access, data replication is no longer on the agenda. Therefore, new technologies and innovations are required.
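To put those numbers in perspective, here is a minimal back-of-the-envelope sketch (in Python; the 6,000 km transatlantic distance is purely illustrative) of how distance translates into one-way propagation delay at the 5 µs per kilometre quoted above:

```python
# One-way propagation delay over fibre, using the 5 µs/km figure quoted above.
# Light travels at roughly 200,000 km/s in glass, hence ~5 µs per kilometre.

US_PER_KM = 5  # one-way latency over fibre, in microseconds per kilometre

def one_way_latency_ms(distance_km: float) -> float:
    """Return the one-way propagation delay in milliseconds."""
    return distance_km * US_PER_KM / 1000

print(one_way_latency_ms(200))    # 1.0 ms -> the "1 ms every 200 km" rule of thumb
print(one_way_latency_ms(6000))   # 30.0 ms for an illustrative ~6,000 km transatlantic span
```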

more with less thanks to "Infiniband™"

Researchers from Orange Silicon Valley have teamed up with ESnet, the InfiniBand Trade Association and the OpenFabrics Alliance to put together a demonstration of what the future has in store for us. By simulating a long-distance network, they were able to show a significant decrease in latency, getting closer to… the speed of light! That (i.e. 299,792,458 metres per second in a vacuum) is the ultimate limit; in reality latency is always a little higher. With RDMA (Remote Direct Memory Access), "server-side latency scales down from 5 µs to 5 ns," Sinharoy explains. "This is what Infiniband™ makes possible; it is an interconnection protocol aimed at overcoming some of the shortcomings of TCP/IP," he added; "it enables remote access to distant memory (RDMA) and it performs better between servers than TCP/IP."
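To illustrate what that server-side gain means in practice, here is a minimal sketch (a rough model, not Orange's measurements; the distances are illustrative) combining the propagation figure above with the 5 µs vs 5 ns server-side numbers:

```python
# Rough round-trip model: out-and-back fibre propagation plus one server-side hop.
# Uses the 5 µs/km propagation figure and the 5 µs vs 5 ns server-side numbers above.

US_PER_KM = 5            # one-way fibre propagation, µs per km
TCP_SERVER_US = 5.0      # server-side latency with TCP/IP, in µs
RDMA_SERVER_US = 0.005   # server-side latency with RDMA, 5 ns expressed in µs

def round_trip_us(distance_km: float, server_side_us: float) -> float:
    """Return the round-trip time in microseconds."""
    return 2 * distance_km * US_PER_KM + server_side_us

# Inside a data centre (~0.1 km of fibre) the server-side term dominates:
print(round_trip_us(0.1, TCP_SERVER_US))    # ~6 µs with TCP/IP
print(round_trip_us(0.1, RDMA_SERVER_US))   # ~1 µs with RDMA

# Across a continent (~4,000 km) propagation dominates either way, which is
# why protocol efficiency (see the throughput figures below) matters as well.
print(round_trip_us(4000, TCP_SERVER_US))   # ~40,005 µs
```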


[figure detailed performance and utilisation chart, copyright 2011, Orange Silicon Valley]

What this means, in essence, is that data can not only be read remotely without having to replicate it, but also written at a distance. Now, imagine the impact on the development of cloud computing! Numbers provided by Orange Silicon Valley's low latency team (see above) support this claim: a 40G RDMA link provides 3.5 to 4 times more streaming efficiency than standard TCP/IP, whereas a 10G RDMA link delivers almost the same efficiency as a 40G TCP/IP link. The reason for this is simple: "TCP/IP is a lossy protocol, that is to say that packets follow different routes and are sometimes lost, whereas RDMA is a lossless protocol." This doesn't mean that TCP/IP is going away, but that other protocols exist which will enhance the use of the Internet when combined with it wherever applicable.
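As a quick sanity check, the two figures are consistent with each other: if RDMA is roughly four times more efficient than TCP/IP at the same line rate, a 10G RDMA link should indeed land close to a 40G TCP/IP link. A minimal sketch (only the relative multiplier comes from the article; the absolute TCP/IP baseline efficiency is an assumed placeholder for illustration):

```python
# Effective streaming throughput implied by the figures above.
# Only the *relative* multiplier is from the article; the TCP/IP baseline
# efficiency (0.25 of line rate) is an assumed placeholder for illustration.

TCP_EFFICIENCY = 0.25      # assumed usable fraction of line rate with TCP/IP
RDMA_MULTIPLIER = 3.75     # mid-point of the 3.5-4x gain quoted above

def effective_gbps(line_rate_gbps: float, efficiency: float) -> float:
    """Return the usable streaming throughput in Gbit/s."""
    return line_rate_gbps * efficiency

tcp_40g  = effective_gbps(40, TCP_EFFICIENCY)                    # 10.0 Gbit/s
rdma_40g = effective_gbps(40, TCP_EFFICIENCY * RDMA_MULTIPLIER)  # 37.5 Gbit/s
rdma_10g = effective_gbps(10, TCP_EFFICIENCY * RDMA_MULTIPLIER)  # ~9.4 Gbit/s

# rdma_10g (~9.4 Gbit/s) comes close to tcp_40g (10 Gbit/s), matching the
# observation that a 10G RDMA link rivals a 40G TCP/IP link.
print(tcp_40g, rdma_40g, rdma_10g)
```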

Besides, by replacing large symmetric multiprocessors with commodity grid servers (blades) clustered over an Infiniband™ interconnect, Sinharoy's team was able to demonstrate that the cost of platforms could be reduced by 90% for the same performance.

applications and future plans

Applications for this new technology abound (see box #1). So far, Sinharoy's team's demonstrations have taken place in the United States with the help of local partners and authorities, which have supported the research project carried out by Orange. In the future, the team will focus on how this innovation can be taken one step further: across continents and ... below the sea.

A very interesting challenge, and one I'll personally keep an eye on, notably when I next pay a visit to our Orange friends in Silicon Valley.

box #1 potential applications of the low latency project to solve real-life business issues

  1. IT asset investment: one of the first applications of the low latency project, in these times of crisis, is the reduction of investment in IT assets; the low latency demonstration by Orange Silicon Valley showed a 90% reduction in costs for the same performance,
  2. increase in core network performance: another possibility is to use such Infiniband™ clusters and RDMA protocols within the core network of a Telco in order to improve data transmission rates and bandwidth. In this particular case, applicable even to transatlantic connections for instance, the new protocol could be combined with TCP/IP in order to improve Internet performance over long distances,
  3. fast trading solutions: 73% of world trading, according to the Tabb Group, is done through algorithmic trading (as of 2009), versus only 33% in 2006. Out of these 73%, a very important subset is what is called "high-frequency trading", which happens closer and closer to the exchange (sometimes algorithmic trading platforms are installed by trading companies on exchange premises proper in order to reduce latency). A lot of the work done through high-frequency trading relies on what is called arbitrage, that is to say the ability to exploit price discrepancies across various marketplaces. Algorithmic trading could be a significant beneficiary of such fast networking solutions,
  4. content delivery (such as media streaming) is another potential candidate for low latency and network efficiency solutions. One of the main downsides of rich media is its intense bandwidth usage, which creates a lot of overhead on ISP and enterprise networks,
  5. internal IT, finally, is also a potential user of low latency solutions in lieu of data replication, which is more costly and less effective.

box #2 history of the low latency systems project

  • 2008: first external presentation of an Infiniband™ cluster and communication to the InfiniBand Trade Association
  • 2009: first deployment of Infiniband™ in production mode in the Paris Orange data centre on behalf of the Telco’s internal IT
  • 2010: Orange Silicon Valley qualified the DB2 pureScale database on blades for the first time using Infiniband™ clusters in the US
  • October 2011: demo by Orange Silicon Valley at the IBM Information on Demand show in Las Vegas (see presentation on SlideShare)
  • 2012: plan to run highly concurrent video streaming use cases using Infiniband™ at global range (across continents)

more about Infiniband™ and the low latency project

Yann Gourvennec

I specialize in information systems, high-tech marketing and Web marketing. I am an author of and contributor to numerous books, and the CEO of Visionary Marketing. As such, I contribute regularly to this Orange Business blog on cloud computing and cloud storage topics.