Much of my recent professional development has focused on Ethernet, making it a convenient target for technical articles.
Unlike much of the material already available on the Internet, this series of articles will focus on the practical implementation of an Ethernet MAC from an FPGA or ASIC perspective.
That means covering the actual waveforms and encodings needed to generate Ethernet packets when connected directly to a PHY or to the medium itself.
The articles will make extensive reference to IEEE 802.3-2022, and every effort will be made to cite the exact clauses so the reader can follow up.
I will not be using the amendment names (e.g. 802.3z for Gigabit Ethernet) because they are not useful for finding content within the current standard, and any given clause may have been modified by multiple amendments.
Note: The most recent versions of the 802 standards are available from the IEEE Get Program at no cost.
It is highly advised that anyone working with Ethernet download copies of 802.1Q (Bridges) and 802.3 (Wired Ethernet).
In this first article in a series on Ethernet, I will be focusing on the fundamentals of Ethernet, such as packet structure, check sequences, and flow control.
Much of the information will be conceptual but referenced repeatedly when discussing specific protocols.
Fast Ethernet introduced something new to the architecture.
Previously, the Physical Layer Signaling (PLS) sublayer was integrated with the MAC and connected to the actual medium through a Medium Attachment Unit (MAU).
While the user could switch between twisted pair (10BASE-T), thinnet (10BASE2), thicknet (10BASE5), or even fiber (10BASE-F) simply by changing MAUs, switching to a different encoding (e.g. Fast Ethernet) would require a completely new interface.
Instead of requiring new networking equipment to manage the Physical Coding Sublayer (PCS) of each individual protocol, the MAC communicates with the PHY through a Media Independent Interface (MII).
Now free of protocol-specific encodings, the MAC could be paired with different line rates and protocols simply by switching between PHYs.
While interchangeable PHYs are now the domain of high-end networking (e.g. SFP modules), the MII interface and its derivatives remain the primary mechanism for connecting integrated MACs (and FPGAs) to commodity Ethernet transceivers.
One of the early complaints about MII was that it used too many pins.
Between a switch ASIC and its external PHYs, MII requires sixteen pins and two clock domains per port. For an eight-port switch, that’s 128 pins and sixteen clock domains before power and other considerations.
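To make that count concrete, here is a rough sketch of the per-port MII signal set from IEEE 802.3 Clause 22. The C struct is purely illustrative (in hardware these are individual pins, not fields in a register), and the management pins MDC/MDIO are left out since they are typically shared across ports.

```c
/* Illustrative model of one MII port (IEEE 802.3 Clause 22). In hardware
 * these are individual pins; the bitfields below are only for counting. */
struct mii_port {
    /* Transmit path: MAC -> PHY, referenced to TX_CLK (sourced by the PHY) */
    unsigned int tx_clk : 1;  /* 25 MHz at 100 Mb/s, 2.5 MHz at 10 Mb/s */
    unsigned int txd    : 4;  /* transmit data nibble                   */
    unsigned int tx_en  : 1;  /* transmit enable                        */
    unsigned int tx_er  : 1;  /* transmit coding error                  */

    /* Receive path: PHY -> MAC, referenced to RX_CLK (sourced by the PHY) */
    unsigned int rx_clk : 1;
    unsigned int rxd    : 4;  /* receive data nibble                    */
    unsigned int rx_dv  : 1;  /* receive data valid                     */
    unsigned int rx_er  : 1;  /* receive coding error                   */

    /* Status signals for half-duplex operation */
    unsigned int crs    : 1;  /* carrier sense                          */
    unsigned int col    : 1;  /* collision detected                     */
};
/* 7 transmit + 7 receive + 2 status = 16 pins and two clock domains per
 * port; an eight-port switch needs 8 * 16 = 128 pins and 16 clock domains
 * before anything else is considered. */
```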
As silicon became cheaper, packaging became the dominant cost for low-end integrated circuits. To keep costs down, vendors needed a way to reduce the pin count.
When Gigabit Ethernet was introduced, there was a problem with adopting the existing MII. Simply increasing the clock speed by another order of magnitude would bring it to 250 MHz, with a period of 4 ns.
This introduced two issues:
First, the clock speed would be well in excess of those used by commodity memory buses of the time, making it difficult to implement.
Second, the system-synchronous design of the transmit path would make it impossible to control setup and hold timing.
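A quick sanity check of that arithmetic is below. The nibble-wide case is the hypothetical scaled-up MII; the byte-wide case at 125 MHz is the route GMII (IEEE 802.3 Clause 35) actually took.

```c
#include <stdio.h>

int main(void) {
    const double line_rate_mbps = 1000.0;   /* Gigabit Ethernet data rate */
    const int widths[] = { 4, 8 };          /* MII-style nibble vs. GMII byte */

    for (int i = 0; i < 2; i++) {
        /* One data transfer per clock cycle */
        double clk_mhz = line_rate_mbps / widths[i];
        printf("%d-bit bus: %5.1f MHz clock, %3.1f ns period\n",
               widths[i], clk_mhz, 1000.0 / clk_mhz);
    }
    return 0;
}
/* Output:
 * 4-bit bus: 250.0 MHz clock, 4.0 ns period
 * 8-bit bus: 125.0 MHz clock, 8.0 ns period
 */
```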
Even more so than the original MII, GMII used too many pins. For full tri-mode (10/100/1000) operation, a full 25 pins were required.
This was becoming problematic not only for switches, but also for ordinary processors and FPGAs.
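For reference, here is my tally of where those 25 pins come from, with signal names per IEEE 802.3 Clause 35; the extra PHY-sourced TX_CLK is only needed for the 10/100 rates.

```c
/* Illustrative pin tally for tri-mode (10/100/1000) GMII.
 * Signal names follow IEEE 802.3 Clause 35. */
enum gmii_pins {
    GMII_TXD     = 8,  /* transmit data byte                                */
    GMII_TX_CTRL = 2,  /* TX_EN, TX_ER                                      */
    GMII_GTX_CLK = 1,  /* 125 MHz transmit clock, sourced by the MAC        */
    GMII_TX_CLK  = 1,  /* 2.5/25 MHz transmit clock for 10/100, PHY-sourced */
    GMII_RXD     = 8,  /* receive data byte                                 */
    GMII_RX_CTRL = 2,  /* RX_DV, RX_ER                                      */
    GMII_RX_CLK  = 1,  /* receive clock, sourced by the PHY                 */
    GMII_STATUS  = 2,  /* CRS, COL                                          */

    GMII_TOTAL   = GMII_TXD + GMII_TX_CTRL + GMII_GTX_CLK + GMII_TX_CLK +
                   GMII_RXD + GMII_RX_CTRL + GMII_RX_CLK + GMII_STATUS  /* 25 */
};
```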
Just as the RMII Consortium had formed to produce RMII, a group of silicon vendors got together to produce the Reduced Gigabit Media Independent Interface (RGMII).
Since this work was done outside the IEEE 802.3 Ethernet Working Group, the interface will not be found in IEEE 802.3; the specification has to be sourced separately.
With distribution largely unrestricted, copies of the specification are mirrored locally:
RGMII Version 1.3, Version 2.0.
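For comparison with GMII's 25 pins, here is my summary of where RGMII saves pins, based on the documents above: data is transferred on both edges of a 125 MHz clock, and the enable and error strobes are multiplexed onto a single control pin in each direction.

```c
/* Illustrative pin tally for RGMII (per the RGMII v1.3/v2.0 documents).
 * Data is double data rate, so four pins still carry a byte per cycle. */
enum rgmii_pins {
    RGMII_TXC    = 1,  /* transmit clock, sourced by the MAC                     */
    RGMII_TXD    = 4,  /* transmit data, DDR                                     */
    RGMII_TX_CTL = 1,  /* TX_EN on the rising edge, TX_EN ^ TX_ER on the falling */
    RGMII_RXC    = 1,  /* receive clock, sourced by the PHY                      */
    RGMII_RXD    = 4,  /* receive data, DDR                                      */
    RGMII_RX_CTL = 1,  /* RX_DV on the rising edge, RX_DV ^ RX_ER on the falling */

    RGMII_TOTAL  = RGMII_TXC + RGMII_TXD + RGMII_TX_CTL +
                   RGMII_RXC + RGMII_RXD + RGMII_RX_CTL  /* 12, down from 25 */
};
```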