RF Design
Feb 15, 2012
The need for processing speed and rapid access to
large amounts of data has made serial architectures the dominant
interconnect technologies for military and aerospace systems.
Three things designers can never get enough of are computing power,
data storage, and interconnect bandwidth. The last of these becomes
more important as computing solutions continue to incorporate multicore
processor architectures. While parallel bus architectures like VME and
ISA bus are not going away, they cannot deliver the speed and bandwidth
necessary for high-end applications found in many military and aerospace
systems—such as the performance levels provided by the 3U VPX backplane
found in the ESP-A8161 1/2 ATR box (see figure) from Elma Bustronic.
High-speed interconnect technologies are packet-oriented, point-to-point, and mostly serial interfaces, and they often scale to higher bandwidth by increasing the number of lanes. The quintessential example is PCI Express. The standard defines x1, x2, x4, x8, x16, and x32 link widths. The x32 links are not in use, and x2 links are starting to show up in some chipsets. The x1, x4, x8, and x16 widths are quite common. Multiple devices are connected via a switch.
PCI Express (PCIe) is the de facto standard for host-to-peripheral interfaces. Processor chips and chipsets that have other high-speed serial interfaces like Ethernet and SATA will typically have one or more PCIe interfaces as well. PCIe is also used to connect a host to adapter chips for other high-speed interfaces including Ethernet, SATA, SAS, Serial RapidIO, InfiniBand, and Fibre Channel, as well as USB, HDMI, and DisplayPort.
The second-generation PCIe (Gen 2) standard runs at 5 Gb/s and employs an 8-b/10-b encoding scheme; it is backwards compatible, as are the latest PCIe Gen 3 and future Gen 4 standards.
PCIe Gen 3 offers state-of-the-art performance at present, with each lane running at 8 Gb/s. It switches to a 128-b/130-b encoding scheme to essentially double the throughput of the Gen 2 standard when combined with the 60% increase in lane speed.
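A quick back-of-the-envelope calculation shows where the near-doubling comes from. The sketch below (in C, with illustrative lane widths) multiplies each generation's lane rate by its encoding efficiency and by the number of lanes; it ignores packet and protocol overhead, so real-world throughput is somewhat lower.

#include <stdio.h>

/* Rough payload bandwidth per direction: lane rate x encoding efficiency x lane count.
   Packet and protocol overhead are ignored, so actual throughput is somewhat lower. */
static double payload_gbps(double lane_rate_gbps, double data_bits, double coded_bits, int lanes)
{
    return lane_rate_gbps * (data_bits / coded_bits) * lanes;
}

int main(void)
{
    int widths[] = {1, 4, 8, 16};
    for (unsigned i = 0; i < sizeof widths / sizeof widths[0]; i++) {
        int n = widths[i];
        printf("x%-2d  Gen 2: %6.1f Gb/s   Gen 3: %6.1f Gb/s\n",
               n,
               payload_gbps(5.0, 8.0, 10.0, n),     /* 5-Gb/s lanes, 8-b/10-b encoding */
               payload_gbps(8.0, 128.0, 130.0, n)); /* 8-Gb/s lanes, 128-b/130-b encoding */
    }
    return 0;
}

A Gen 2 lane carries about 4 Gb/s of data while a Gen 3 lane carries roughly 7.9 Gb/s, so throughput nearly doubles even though the lane rate rises only 60%.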
While PCIe Gen 4 will run lanes at 16 Gb/s, it is not planned for release until 2015; Gen 3 adoption has not been as fast as that for Gen 2. PCIe Gen 3 includes a number of enhancements such as atomic operations, dynamic power allocation, and multicast support. Gen 4 is likely to include a more flexible approach to implementation, allowing developers to trade off distance, bandwidth, and power. PCIe is already used for box-to-box connections that were not possible with older parallel PCI interfaces.
Storage interfaces, especially SAS RAID controllers, are connected via PCIe. SATA and Serial Attached SCSI (SAS) grew from parallel ATA and SCSI roots. The slow migration from parallel to serial storage interfaces is essentially complete, with 6-Gb/s SATA being the norm for consumer and embedded drives. These days, the drive could just as easily be a flash drive instead of a hard disk drive. SATA and SAS unified the serial interface and connectors, so it is usually possible to plug a SATA drive into a SAS controller (although the reverse is not true).
The current crop of SAS drives also runs at 6 Gb/s, but 12-Gb/s SAS is now emerging in the enterprise. The faster drives will also be valuable in embedded applications that require higher-throughput storage. When supporting multiple drives, SATA and SAS controllers often deliver higher aggregate throughput than a single drive is capable of.
Solid-state disks (SSDs) have pushed the limits of SATA and SAS. Linking storage directly via PCIe allows designers to take advantage of PCIe’s higher bandwidth. Proprietary PCIe/flash solutions like those from Fusion-io delivered bandwidths in excess of what SATA or SAS could provide. Likewise, the number of transactions per second rose significantly.
Two new standards based on PCIe are Non-Volatile Memory Express (NVMe) and SCSI Express. NVMe targets board-level interfaces with flash storage on-board. This is also true for SCSI Express, although what is behind the interface could also be a SATA or SAS drive. This could be an advantage for SCSI Express, which could provide a single interface for both board- and drive-based storage.
Fibre Channel is a high-speed serial interface that is primarily found in enterprise systems. It supports a number of topologies including point-to-point, loop, and switched fabrics. Copper interfaces can run at 2, 4, and 8 Gb/s. Optical signal transport is also in the mix; for example, 16-Gb/s Fibre Channel (16GFC) is an optical interface. Hard disk drives with native 4-Gb/s Fibre Channel interfaces are still available. FICON (Fibre Channel Connection) is a mainframe interconnect that operates over fiber runs to 20 km.
For completeness, some storage protocols are designed to run over general-purpose networks, such as iSCSI and Fibre Channel over Ethernet (FCoE). Storage protocols tend to map almost directly to physical storage devices, such as magnetic and solid-state drives. iSCSI is popular in virtualized cloud environments. Not surprisingly, iSCSI over InfiniBand is part of the mix via iSER (iSCSI Extensions for RDMA).
Ethernet is one of the more versatile interfaces, carrying just about everything (including storage protocols). A wide range of Ethernet speeds may be found on a network, from 10 Mb/s to 100 Gb/s (100G). The dominant deployment at present is 1-Gb/s (1G) Ethernet, while 10/100 Ethernet is commonly supported on single-chip microprocessors. The high end of Ethernet in common use is 10G Ethernet, with 40G Ethernet quickly gaining in popularity. The fastest Ethernet speeds, 100 Gb/s, are often found in enterprise and high-end embedded systems. The big difference at 40 and 100 Gb/s is that optical fiber is the cabling standard, with copper in the works for short runs, which is useful for backplanes. Optical support has been available for all Ethernet speeds, but copper cabling dominates at low and midrange speeds.
High-speed serial backplanes, like those found on VPX systems, support a range of interconnects, though Ethernet tends to be a dominant player. Some of these backplanes include Ethernet only as a secondary network; in those cases, Serial RapidIO and InfiniBand are the primary players. Ethernet provides best-effort delivery, whereas Serial RapidIO, InfiniBand, and PCIe guarantee delivery at the hardware level. Features like low latency, flow control, packet size, and low protocol overhead also distinguish these alternatives from Ethernet.
Serial RapidIO handles packets up to 256 B. It provides a switched peer-to-peer network. It has a scaled lane approach like PCIe with x1, x2, x4, x8, and x16 configurations. Lane speeds include 1.25, 2.5, 3.125, 5.0, and 6.25 GHz. The future 10xN specifications define a speed of 10 GHz/lane with the technology scaling to 25 GHz.
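To put the packet size and lane speeds together, the rough sketch below estimates how many maximum-size packets per second a x4 link at 6.25 GHz would have to carry to stay full, assuming 8-b/10-b encoding on current lanes and ignoring header and acknowledgment overhead (so the true packet rate is lower); the link parameters are just an example.

#include <stdio.h>

int main(void)
{
    /* Example link: x4 lanes at 6.25 Gb/s with assumed 8-b/10-b encoding. */
    double lane_rate_gbps = 6.25;
    int lanes = 4;
    double encoding_efficiency = 8.0 / 10.0;
    double payload_gbps = lane_rate_gbps * encoding_efficiency * lanes;  /* ~20 Gb/s */

    /* Serial RapidIO packets carry at most 256 B of payload; header and
       acknowledgment overhead are ignored here, so this is an upper bound. */
    double packet_bits = 256.0 * 8.0;
    double packets_per_second = payload_gbps * 1e9 / packet_bits;

    printf("Payload bandwidth: %.1f Gb/s\n", payload_gbps);
    printf("256-B packets to saturate the link: %.1f million per second\n",
           packets_per_second / 1e6);
    return 0;
}

The small maximum packet size keeps buffering requirements modest, but it also means a fast link moves on the order of ten million packets per second.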
Serial RapidIO interfaces can be found on a number of digital-signal-processing (DSP) and central-processing-unit (CPU) chips. Serial RapidIO has been popular in embedded applications such as multifunction phased-array radar systems, where large amounts of sensor information are passed to computational cores. Its real-time performance and scalability also make Serial RapidIO ideal for embedded applications. The x86 family of CPUs was rarely applied in Serial RapidIO systems, but this is changing with the availability of the PCIe/Serial RapidIO bridge chip from IDT and low-power, 64-b x86 processors.
InfiniBand has a maximum 4-kB packet size. It supports x1, x4, x8, and x12 lanes. Common lane speeds include 2.5-Gb/s Single Data Rate (SDR), 5-Gb/s Double Data Rate (DDR), and 10-Gb/s Quad Data Rate (QDR) InfiniBand, which use 8-b/10-b encoding. The 14-Gb/s Fourteen Data Rate (FDR) version of InfiniBand uses 64-b/66-b encoding. The 26-Gb/s Enhanced Data Rate (EDR) version of InfiniBand is targeted for future high-speed applications.
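Since those figures are per-lane signaling rates, the short calculation below (assuming a x4 link, the most common width) removes the encoding overhead to show the usable data rate for each generation.

#include <stdio.h>

struct ib_rate {
    const char *name;
    double lane_gbps;    /* per-lane signaling rate */
    double data_bits;    /* encoding: data bits per coded group */
    double coded_bits;   /* encoding: coded bits per group */
};

int main(void)
{
    /* Rates and encodings as listed in the text. */
    struct ib_rate rates[] = {
        { "SDR",  2.5,  8.0, 10.0 },
        { "DDR",  5.0,  8.0, 10.0 },
        { "QDR", 10.0,  8.0, 10.0 },
        { "FDR", 14.0, 64.0, 66.0 },
    };
    for (unsigned i = 0; i < sizeof rates / sizeof rates[0]; i++) {
        double per_lane = rates[i].lane_gbps * rates[i].data_bits / rates[i].coded_bits;
        printf("%s: %4.1f Gb/s of data per lane, %5.1f Gb/s on a x4 link\n",
               rates[i].name, per_lane, 4.0 * per_lane);
    }
    return 0;
}

This is why a QDR x4 link is marketed as 40 Gb/s yet delivers about 32 Gb/s of data, and why FDR's switch to 64-b/66-b encoding wastes far less of the line rate.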
InfiniBand was initially developed with storage and supercomputing applications in mind; a majority of the top supercomputers employ it. One feature that developers can take advantage of is Remote Direct Memory Access (RDMA), which lets one node read or write another node’s memory without involving the remote processor. This InfiniBand feature can also be used over Ethernet via RoCE (RDMA over Converged Ethernet). Of course, the overhead is significantly higher with Ethernet.
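To give a feel for what RDMA looks like to software, the minimal sketch below uses the Linux libibverbs API to open an adapter and register a buffer for remote access; a peer that has been given the buffer address and rkey can then read or write it directly without involving this host's CPU. Queue-pair setup, key exchange, and the actual RDMA operations are omitted, and the buffer size and error handling are purely illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    /* Open the first RDMA-capable device found (InfiniBand or RoCE). */
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "could not open device or allocate protection domain\n");
        return 1;
    }

    /* Register a buffer so the adapter can access it directly and remote
       nodes can read or write it without involving this CPU. */
    size_t len = 1 << 20;            /* 1 MB, arbitrary for illustration */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    /* The rkey is handed to the peer out of band so it can target this buffer. */
    printf("registered %zu bytes, rkey = 0x%x\n", len, (unsigned) mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}

Compile against libibverbs (for example, gcc rdma_reg.c -libverbs, with a hypothetical file name); the same verbs code runs over native InfiniBand or over RoCE on a suitably equipped Ethernet adapter.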
It is also possible to tunnel Ethernet over InfiniBand using Ethernet over InfiniBand (EoIB). There is also IP over InfiniBand (IPoIB). There is even a standard for Fibre Channel over InfiniBand (FCoIB). These technologies allow an InfiniBand-only fabric to link nodes directly to other networks. Mellanox’s SwitchX chips actually provide this in silicon IC form, handling Ethernet, InfiniBand, and Fibre Channel links.
All of these high-speed serial technologies are found in the latest military and avionic platforms. Some, like Fibre Channel, are specialized enough to be found only in very specific applications; in Fibre Channel’s case, these are enterprise platforms or embedded applications requiring lots of storage bandwidth.
Lastly, it should be noted that there are a number of other high-speed serial interconnects that were not addressed above. These include display interconnects like HDMI and DisplayPort and peripheral interconnects like Universal Serial Bus (USB). SuperSpeed USB 3.0 runs at 5 Gb/s. It is full duplex and point-to-point but it is restricted to a single lane.
The nice feature of all these high-speed interconnects is that the wiring technology is identical or very similar. VITA’s VXS and VPX standards use the same connectors for all the wired implementations. Point-to-point connections for networks like Ethernet, Serial RapidIO, and InfiniBand can be selected based on design requirements.