Driving the World of Gigabit Ethernet

It's been a wild summer! Most of my time has been occupied working as Chief Technical Editor of the new IEEE 802.3z Gigabit Ethernet standard.[1] That standard, when complete, will advance the Ethernet operating speed to one billion bits per second (10⁹ bits per second).

This summer, the Gigabit Ethernet effort moved into its final home stretch. We issued our first official IEEE working group ballot about six weeks ago. This move signals the achievement of a significant technical consensus among the more than 650 people who are monitoring and working with our standards task force. We have already received 1288 formal comments on this ballot, and anticipate receiving lots more between now and the time we are done, sometime in the spring of 1998.

Needless to say, I've been busy. In case you are wondering what us standards weenies do at all those meetings, here’s a little peek into the standards process.

One of the key questions in front of our committee has been this: "How should we best specify the I/O performance of drivers for the Gigabit Ethernet parallel interface?" This interface, called the Gigabit Media Independent Interface (GMII), is used to connect higher-level computer system chips to the physical transceivers used to convey gigabit serial data. The GMII is a point-to-point, dual unidirectional interface, meaning that there is one set of wires defined for use in the transmit direction, and another separate set of wires defined for use in the receive direction. In each direction, there are eight parallel data bits, two control bits, and a clock (plus a few other signals). The interface is clocked at 125 MHz.
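
That clock rate is no accident: eight data bits moved on every cycle of the 125-MHz clock work out to 8 × 125 Mb/s = 1000 Mb/s, which is the gigabit payload rate.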

Clocking a unidirectional bus at 125 MHz is not difficult with today’s logic. Ordinary CMOS circuits, at feature sizes of 0.35 µm or smaller, can do the job, and have often been used for such tasks.

The problem our committee faced, however, was not whether such an interface could be made to work, but how to specify the interface so that many different vendors could build it, and all the parts would be interoperable. You see, there are many different ways to get such a bus to work. For example, there is the choice of what method to use to control reflections and ringing on the bus. There are four popular methods in use today:

  1. Control the source impedance on the bus
  2. Control the end-termination impedance on the bus
  3. Control the driver rise/fall time
  4. Use a very short bus

Early on, we determined that, in order to cover the range of physical topologies necessary to build useful products around this interface, the bus lengths would need to stretch more than 1 or 2 inches, which is pretty much the limit at this speed for choice (4), the short bus method. It became apparent that some combination of terminations and controlled rise/fall time would be necessary. That left us with three alternatives.
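
To see why the short-bus approach runs out of room so quickly, a back-of-the-envelope check helps. The little Python sketch below uses my own assumed numbers (roughly 170 ps/inch of propagation delay for FR-4 and a 1-ns driver edge, neither of which comes from the standard) together with the common rule of thumb that a trace behaves as a single lumped node only while its one-way delay stays under about one-sixth of the rise time.

    # Rough estimate of how long an unterminated "short bus" can be.
    # All numbers here are illustrative assumptions, not 802.3z values.

    T_RISE = 1.0e-9           # assumed driver rise/fall time: about 1 ns
    DELAY_PER_INCH = 170e-12  # assumed FR-4 propagation delay: about 170 ps/inch

    # Rule of thumb: the trace acts lumped only while its one-way delay
    # stays under about one-sixth of the edge time.
    max_length = (T_RISE / 6) / DELAY_PER_INCH

    print(f"Lumped-bus limit: roughly {max_length:.1f} inches")
    # Prints roughly 1.0 inches -- consistent with the 1-to-2-inch limit above.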

At that point, our biggest obstacle was political, not technical. The various chip manufacturers all had different capabilities in terms of rise/fall time, termination strategy, and output impedance control. Some wanted to control reflections by implementing a well-controlled output impedance on the drivers. Others wanted to use a very low-impedance driver with an external series termination resistor. Still others wanted to use a loosely controlled output impedance specification, but with tight control over the rise/fall time. Any of the approaches could work, but which one could the committee choose? Whichever way we turned, there was powerful opposition.

In the end, it was Bill Quackenbush of Cisco Systems who came up with a way of specifying the drivers that allows each vendor to individually trade off rise/fall time, termination technique, and driver output impedance. It is a simple, elegant technique. Faced with a similar problem, you might consider using it. Here’s what Bill proposed:

Connect the driver to one end (the near end) of a 1-ns, 50-ohm transmission line. Load the far end of the line with a 5-pF capacitor (this represents the receiver). If the manufacturer calls for a termination technique, use it as prescribed. Under these conditions, the waveform as measured at the receiver must fit within the prescribed waveform template, which limits the overshoot and transition time.

That’s it. There is no explicit specification of rise/fall time, termination technique, or driver output impedance. The waveform at the receiver just has to fit within the prescribed template. Bill’s idea concentrates on the worst-case topology of interest (about 6 inches with a 5-pF load), while still allowing the driver vendors to make their own design trade-offs. In practice, if a driver passes Bill’s test, it is highly likely to pass with shorter lengths and smaller loads.
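
If you want to see how the test plays out, the jig is easy enough to model in a few lines of code. The Python sketch below is only a rough illustration: the driver model (source impedance, voltage swing, edge rate) and the pass/fail limits at the end are my own placeholder assumptions, not numbers from the 802.3z draft. Only the 1-ns, 50-ohm line and the 5-pF load come from Bill’s setup.

    # Sketch of Bill's test jig: driver -> 1-ns, 50-ohm lossless line -> 5-pF load.
    # The driver model and the template limits below are placeholder assumptions,
    # NOT values taken from the 802.3z draft.

    Z0, TD = 50.0, 1e-9                    # line impedance (ohms) and one-way delay (s)
    C_LOAD = 5e-12                         # far-end capacitor representing the receiver
    RS, V_SWING, T_EDGE = 50.0, 1.0, 1e-9  # assumed driver: source impedance, swing, ramp time

    DT, T_STOP = 1e-12, 20e-9
    N = int(T_STOP / DT)
    ND = int(TD / DT)                      # line delay expressed in time steps

    fwd = [0.0] * N                        # wave launched at the near end, toward the load
    bwd = [0.0] * N                        # wave reflected at the far end, back toward the driver
    v_far = [0.0] * N                      # voltage at the 5-pF load -- what the template judges

    gamma_s = (RS - Z0) / (RS + Z0)        # reflection coefficient looking back into the driver

    for n in range(1, N):
        vs = V_SWING * min(n * DT / T_EDGE, 1.0)   # assumed linear-ramp source voltage

        # Far end: incident wave meets the capacitor (backward-Euler update).
        vi = fwd[n - ND] if n >= ND else 0.0
        vr = ((DT - C_LOAD * Z0) * vi + C_LOAD * Z0 * v_far[n - 1]) / (DT + C_LOAD * Z0)
        v_far[n] = vi + vr
        bwd[n] = vr

        # Near end: driver launches a new wave and re-reflects whatever comes back.
        vb = bwd[n - ND] if n >= ND else 0.0
        fwd[n] = vs * Z0 / (RS + Z0) + gamma_s * vb

    # Judge the received waveform against a placeholder template.
    v_final = v_far[-1]
    overshoot = max(v_far) - v_final
    t10 = next(i for i, v in enumerate(v_far) if v >= 0.1 * v_final) * DT
    t90 = next(i for i, v in enumerate(v_far) if v >= 0.9 * v_final) * DT

    print(f"overshoot : {overshoot * 1e3:.0f} mV")
    print(f"transition: {(t90 - t10) * 1e9:.2f} ns (10% to 90%)")
    print("PASS" if overshoot <= 0.1 * V_SWING and (t90 - t10) <= 2e-9 else "FAIL")

Change the assumed source impedance or edge rate, or add an explicit termination, and the same single check still decides the outcome; that freedom to trade one knob against another is exactly the point of Bill’s approach.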

Bill’s setup is a good specification technique, and a far cry from the overly simplified "50-pF load" specifications we are used to seeing for output drivers. If our committee had thought of it earlier, it wouldn’t have been such a wild summer.

Note

[1] Thanks to Packet Engines for sponsoring my work on the Gigabit Ethernet Standard.