With the addition of cloud services (for example, Apple iCloud) to the already massively connected Internet, data centers are seeing an unprecedented increase in compute and storage requirements. This growth directly drives energy consumption, and engineers are seeking ways to keep power under control. In this article we examine the interconnect power budgets of massively connected systems as they move beyond 10 gigabit per second (Gbps) links, and solutions that lower power consumption in these high-speed channels.
Internet traffic is unquestionably increasing at a rapid pace. The latest Visual Networking Index (VNI) forecast from Cisco (June 2011) clearly shows this trend (see Figure 1), with the most interesting part of the forecast being the growth of the mobile space. The introduction of “cloud” computing and storage has created a new paradigm that drives even greater consumption of bandwidth. As mobile users move from simple texts to high-definition photos and videos, products that replicate this content to cloud storage, transcode the videos for publication, and distribute the media to the user’s myriad devices (not to mention publishing to social networks) will place even larger demands on these services. This performance pressure ultimately will require improvements in both processing and communications.
Figure 1. Internet bandwidth trends 2010 through 2015
However, these increases come at a cost – not only monetarily, but also in power consumption. The designers of these next-generation servers and networks are already struggling with the power being consumed, both from a cost of ownership (CoO) perspective and from a practical thermal design point of view. How will systems be architected to both improve performance and reduce power? This is a never-ending battle driven by the explosive growth of the information age.
Where to look first
As in any system design, the next generation should improve on the performance of the last. In cloud computing architectures, services are often moved as loading changes, so a “server” is no longer really a discrete piece of hardware. In most cases, the hardware actually hosting a service may be anywhere within a service provider’s infrastructure, which introduces a sort of “uncertainty” about where the service resides at any moment in time. This flexibility is enabled by “virtualization” – the encapsulation of a service within a software framework that allows it to move freely between hardware hosts. Virtualization gives service providers the ability to vary resources on demand and improve the power consumption of the infrastructure.
As services are moved and throttled, they generate a great deal of “machine-to-machine” (M2M) activity. In most data centers, the majority of traffic flows between machines rather than to the outside world. The addition of virtualization alone has driven the migration from one gigabit per second interconnects (standard on many mid-decade servers) to 10 Gbps, and today demand is driving the move to 25 Gbps interconnects. Many of these connections are shorter than five meters, with the majority less than one meter in length. These short runs follow from the architecture of the server farms: a single rack has blade centers stacked and connected to a switch at the top (or bottom) of the rack, and many racks in a row are then aggregated through concentrators, where the information is routed to other rows of servers or to network-attached storage.
With one-gigabit connections, small-gauge wires could easily carry the bits without significant loss of signal integrity. This was important for several reasons: one consideration is preserving airflow out of the servers, which bundles of cable can block; another is the bend radius, which dictates how many wires can be routed in the rack (see Figure 2).
Figure 2. Cable wiring within a rack
With the move to 10G Ethernet, signal integrity became more of an issue, and passive cables started using larger-gauge wire to compensate. The airflow and bend radius problems began to appear, and installers and designers started looking to fiber interconnects as a way to fix them. The move to fiber introduced issues of its own, such as increased cost and power consumption. A typical single 10G Ethernet SFP+ module dissipates about a watt of power. With tens of thousands of ports, the power required just for the fiber interconnects increased significantly, and the added dissipation brought with it a rise in rack temperature.
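To put that module power in perspective, a back-of-the-envelope budget can be sketched. The roughly 1 W per SFP+ module figure is from the text; the port count below is a hypothetical illustration of a “tens of thousands of ports” installation:

```python
# Rough fiber-interconnect power budget for a hypothetical installation.
SFP_PLUS_MODULE_W = 1.0   # ~1 W per 10G Ethernet SFP+ fiber module (from the text)
PORT_COUNT = 10_000       # hypothetical port count ("tens of thousands")

total_kw = SFP_PLUS_MODULE_W * PORT_COUNT / 1000
print(f"Power for fiber interconnects alone: {total_kw:.1f} kW")
```

At this scale the interconnects alone consume on the order of 10 kW – before accounting for the additional cooling needed to remove that heat from the racks.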
Cabling interconnect issues
If passive cable used for high-speed interconnects suffers from bulk and bend radius issues, while fiber solutions suffer from increased power consumption and higher cost, there must be a middle ground. The answer lies in a technology called “active copper cable” – a clever idea that embeds active components in the shell of the connectors to compensate for the high-frequency loss introduced by smaller-gauge wire. This solution allows smaller wires with “fiber-like” bend radius and bulk while dissipating much less power – typically less than 65 mW per channel at 10 Gbps for devices such as the DS100BR111, which is commonly used in SFP+ active cable applications.
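Using the two per-channel figures quoted in this article – about 1 W for a fiber SFP+ module versus under 65 mW for an active copper channel – the relative saving is straightforward to estimate. The comparison below simply restates that arithmetic:

```python
# Per-channel power: fiber SFP+ module vs. active copper cable channel.
FIBER_MODULE_W = 1.0      # ~1 W for a typical 10G SFP+ fiber module
ACTIVE_COPPER_W = 0.065   # <65 mW per channel (e.g. a DS100BR111-class device)

saving = 1 - ACTIVE_COPPER_W / FIBER_MODULE_W
print(f"Active copper saves about {saving * 100:.1f}% per channel")
```

Per channel, active copper dissipates roughly one-fifteenth of the fiber module’s power – a saving on the order of 93 percent, which compounds across every port in a rack.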
Active compensation of this kind is limited to cable lengths of less than 15 meters in most cases for 10 Gbps Ethernet. However, as mentioned earlier, most interconnects are less than three meters long, allowing easy replacement of passive or fiber cables with active copper versions – a very common practice today for 10 Gbps interconnects. Still, the future is approaching rapidly, and even 10 Gbps interconnects will not be fast enough.
In the world of fiber interconnects, there are basically two realms: 1) short interconnects (less than a kilometer); and 2) long interconnects (much greater than a kilometer). The longer fiber links form the backbone of our modern Internet infrastructure and have commonly used 100 Gbps WDM fiber-optic technology. To lower the cost of this technology, companies including Google, Brocade Communications, JDSU and others ratified, in March of 2011, a 10 x 10 Gbps multi-source agreement (MSA) for a physical medium dependent (PMD) sub-layer that provides a common architecture for a C form-factor (CFP) module.
The CFP connector is fine for low-count, long-distance interconnects requiring 100 Gbps communications. However, SFP and quad small form-factor pluggable (QSFP) connectors provide much higher density, which is required on local switches and routers. The QSFP form factor is used today for 40 Gbps Ethernet by combining four channels of 10 Gbps data. The next evolution is to move from 10 Gbps to 25 Gbps channels, providing the equivalent of 100 Gbps of data traffic over small QSFP connectors as well as a backward-compatibility mode for 40 Gbps Ethernet systems that do not support the 100 Gbps standard. Ultimately, this form factor could also be used for fiber modules, since the 10-to-4 lane conversion used in CFP modules would no longer be required.
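The lane arithmetic behind this QSFP roadmap can be sketched as follows. The per-lane rates are from the text; the helper function itself is purely illustrative:

```python
# QSFP aggregate bandwidth = number of lanes x per-lane signaling rate.
QSFP_LANES = 4

def aggregate_gbps(lane_rate_gbps: int, lanes: int = QSFP_LANES) -> int:
    """Total bandwidth carried over a multi-lane connector, in Gbps."""
    return lanes * lane_rate_gbps

print(f"40G Ethernet:  {QSFP_LANES} x 10 Gbps = {aggregate_gbps(10)} Gbps")
print(f"100G Ethernet: {QSFP_LANES} x 25 Gbps = {aggregate_gbps(25)} Gbps")
```

The same four-lane connector delivers either rate, which is what makes the 25 Gbps-per-lane step attractive: density stays constant while aggregate bandwidth increases by 2.5x.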
This type of technology has already been demonstrated by several vendors and, once again, provides infrastructure designers with a roadmap to higher-speed interconnects. But the connection at the back of a switch or server is not the only place this problem appears: the electrical interconnects within servers and network-attached storage devices are seeing the same problems.