Summary
PCI Express represents a radical departure from traditional I/O
architectures in that it replaces parallel multi-drop
buses with serial, multi-lane, switched point-to-point links.
With each lane driven bidirectionally at 2.5 Gbit/s,
the capacity of the channel after 8b/10b encoding is
250 MByte/s times the number of lanes.
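As a cross-check, the figure follows directly from the line rate and the
encoding overhead: 2.5 Gbit/s × 8/10 = 2 Gbit/s = 250 MByte/s per lane in
each direction; a ×4 link, for example, thus carries 1 GByte/s per direction.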
While this bandwidth exceeds the capacity of earlier standards,
the points worth considering for our goal are not only
a matter of speed.
The new serial technology adopts
a "communication-centric" approach: load-store
operations between two nodes are performed by exchanging
framed packets according to a stack of protocol
layers that handles the physical, data link and
transaction aspects of the channel. In the case of PCI Express all
of these activities are carried out in hardware, with
no software intervention.
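To make the packet-framed load-store traffic concrete, the sketch below lists
in simplified form the information carried by a PCI Express memory-write
request at the transaction layer. It is an illustration of the concept only,
not code from the project; bit-level packing and the framing added by the
data link and physical layers are omitted.

    /* Simplified view of a PCIe memory-write transaction layer packet (TLP). */
    struct tlp_mem_write {
        unsigned char      fmt_type;      /* format/type: memory write, 32- or 64-bit addressing */
        unsigned short     length_dw;     /* payload length in 32-bit data words                 */
        unsigned short     requester_id;  /* bus/device/function number of the requesting node   */
        unsigned char      tag;           /* transaction tag                                      */
        unsigned char      byte_enables;  /* first/last data word byte enables                    */
        unsigned long long address;       /* target memory address                                */
        /* payload data words follow */
    };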
This load-store model clearly matches the model of field bus control, in which a
host and a networked peer node exchange software-assembled
packets to access the memory and registers of the
field bus for I/O operations. To investigate this parallel and
carry it through to its practical consequences, we
have undertaken a number of activities within
the INFN Gr. V funded project LINCO.
The LINCO project covered the research and design work for an
optical adapter that translates PCI Express signals to and from
an optical physical layer and that,
by means of commercial bridges, can be fitted to legacy
bus standards (PCI, CompactPCI, VME).
First of all we had to investigate a non-standardized physical
medium for the PCI Express protocol, namely
an optical fiber link. The total jitter budget that the
specification reserves for the interconnect
deserves careful attention, so thorough measurements of jitter
and data signal integrity were performed to characterize
the link.
The LINCO project then developed a printed circuit
board that translates the PCI protocol to PCI Express and
converts the PCIe electrical signals to optical signals. We
chose the PCI Mezzanine Card (PMC) form factor in order
to accommodate different field bus standards such as VME or
CompactPCI. Two prototypes have been assembled and
configured for reverse and forward operation.
The reverse
type has been mounted on a PMC-to-PCI adapter
to fit into a host PC, while the forward type has been tested
with a passive adapter in a CompactPCI crate and
with an active adapter (hosting a Tundra Universe II PCI-to-VME
bridge) in a VME environment. The two boards
were linked by a 100 m multimode fiber with Intel SFP optical
transceivers for the data and clock paths. In the
CompactPCI case we could transfer data at the full legacy
PCI throughput (132 MB/s) to a remote device, while in
the VME case transfers to a VME slave were achieved
with a single-access latency of 2 to 3 µs.
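For reference, 132 MB/s is the theoretical peak of the legacy 32-bit/33 MHz
PCI bus: 33 × 10^6 transfers/s × 4 bytes per transfer = 132 MByte/s.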
To qualify our design for LHC use we set up a radiation
test with 63 MeV protons. The fluence
measured on the board was 5·10^10 p/cm², corresponding to
a total ionizing dose of less than 7 krad. During the
test the boards were active and a number of VME
registers were continuously written with random
patterns, read back and compared, for SEU checking and logging.
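The check can be pictured as the loop sketched below; the VME access
routines, register base address and register count are hypothetical
placeholders, since the summary does not detail the actual access layer
used during the irradiation.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Hypothetical single-cycle VME accesses through the remote bridge. */
    extern void     vme_write32(uint32_t addr, uint32_t value);
    extern uint32_t vme_read32(uint32_t addr);

    #define REG_BASE 0x00400000u   /* hypothetical base address of the test registers */
    #define N_REGS   16u           /* hypothetical number of registers under test     */

    /* Continuously write random patterns, read them back and log mismatches
     * as single event upset (SEU) candidates. */
    void seu_check_loop(void)
    {
        for (;;) {
            for (unsigned i = 0; i < N_REGS; i++) {
                uint32_t addr    = REG_BASE + 4u * i;
                uint32_t pattern = (uint32_t)rand();

                vme_write32(addr, pattern);
                uint32_t readback = vme_read32(addr);

                if (readback != pattern)
                    printf("SEU candidate at 0x%08x: wrote 0x%08x, read 0x%08x\n",
                           addr, pattern, readback);
            }
        }
    }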
We observed only a negligible total-dose
effect on the board itself, but a considerable dose effect on the
SFP transceivers.
To ease operation at the LHC, a
watchdog-type mechanism for automatic reset has been put
in place using the Altera programmable logic device
on the board. The watchdog drives the gate of a high-current
MOS switch in series with the board's main power supply
and is continuously reset through the GPIO bus
of the bridge.
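The periodic "kick" of this watchdog can be pictured as below; the register
address, bit position and polarity are hypothetical and are shown only to
illustrate the mechanism of restarting the PLD counter through the bridge's
GPIO pins.

    #include <stdint.h>

    #define BRIDGE_GPIO_OUT ((volatile uint32_t *)0x80000040u) /* hypothetical memory-mapped GPIO output register */
    #define WDOG_KICK_BIT   (1u << 0)                          /* hypothetical GPIO line wired to the PLD watchdog */

    /* Called periodically: pulsing the line restarts the watchdog counter in
     * the Altera PLD.  If the kicks stop, the watchdog opens the high-current
     * MOS switch and power-cycles the board. */
    static void watchdog_kick(void)
    {
        *BRIDGE_GPIO_OUT |=  WDOG_KICK_BIT;
        *BRIDGE_GPIO_OUT &= ~WDOG_KICK_BIT;
    }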