[intel] Work around broken reset mechanism in i219 devices
The i219 appears to have a seriously broken reset mechanism. After
any transmit or receive activity, resetting the card will break both
the transmit and receive datapaths until the next PCI bus reset.
The Linux and BSD drivers include a convoluted workaround authored by
Intel which involves setting a bit in the undocumented FEXTNVM11
register, then transmitting a dummy 512-byte packet containing garbage
data, then reconfiguring the receive descriptor prefetch thresholds
and temporarily reenabling the receive datapath. The comments in the
Intel fix do not even remotely match what the code actually does, and
the code accidentally leaves the transmitter enabled after use.
Experimentation suggests that an equivalent fix is to simply set the
undocumented bit in FEXTNVM11 before enabling the transmit or receive
descriptor rings.
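As an illustration, a minimal sketch of the simplified workaround,
assuming the driver's usual readl()/writel() accessors and intel_nic
structure; the macro names are hypothetical, and the register offset
and bit value are taken from the Linux e1000e definitions (FEXTNVM11
at offset 0x5bbc, bit 13):
  /* Undocumented FEXTNVM11 register and reset-hang workaround bit
   * (offset and bit value per the Linux e1000e driver; macro names
   * are illustrative only).
   */
  #define INTEL_FEXTNVM11       0x05bbcUL
  #define INTEL_FEXTNVM11_MAGIC 0x00002000UL

  /* Set the undocumented bit before enabling the TX or RX rings */
  fextnvm11 = readl ( intel->regs + INTEL_FEXTNVM11 );
  writel ( ( fextnvm11 | INTEL_FEXTNVM11_MAGIC ),
           intel->regs + INTEL_FEXTNVM11 );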
Signed-off-by: Michael Brown <mcb30@ipxe.org>
On most Intel NICs, Auto-Speed Detection Enable (ASDE) can be used to
automatically detect the correct link speed by sampling the link using
the internal PHY. This feature is automatically inhibited when not
appropriate for the physical link (e.g. when using internal SerDes
mode on the 8254x).
On the i350 datasheet ASDE is a reserved bit, but the relevant
auto-speed detection hardware appears still to be present. However,
enabling ASDE on the i350 1000BASE-KX backplane NIC seems to cause an
immediate link failure. It is possible that the auto-speed detection
hardware is still present, is not connected to a physical link, and is
not inhibited from being applied in this mode.
Work around this problem by adding an INTEL_NO_ASDE flag bit
(analogous to INTEL_NO_PHY_RST), and applying this for the i350
backplane NIC.
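A rough sketch of how such a flag bit might be honoured when writing
the device control register; the register offset and bit values
follow the standard CTRL layout (ASDE is bit 5, SLU is bit 6), but
the macro names and surrounding code are illustrative rather than the
driver's actual implementation:
  #define INTEL_CTRL      0x00000UL      /* Device Control register */
  #define INTEL_CTRL_ASDE 0x00000020UL   /* Auto-Speed Detection Enable */
  #define INTEL_CTRL_SLU  0x00000040UL   /* Set Link Up */

  ctrl = readl ( intel->regs + INTEL_CTRL );
  ctrl |= INTEL_CTRL_SLU;
  /* Enable auto-speed detection unless flagged as unusable (e.g. the
   * i350 backplane NIC), analogous to the INTEL_NO_PHY_RST handling.
   */
  if ( ! ( intel->flags & INTEL_NO_ASDE ) )
          ctrl |= INTEL_CTRL_ASDE;
  writel ( ctrl, intel->regs + INTEL_CTRL );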
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Some VF settings are not cleared by a reset, so make sure to return
all of the settings to their defaults before configuring the VF.
This fixes an issue where network packets would fail to be received if
the VF was previously used by the linux ixgbevf driver.
Modified-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
On some models (notably ICH), the PHY reset mechanism appears to be
broken. In particular, the PHY_CTRL register will be correctly loaded
from NVM but the values will not be propagated to the "OEM bits" PHY
register. This typically has the effect of dropping the link speed to
10Mbps.
Since the original version of this driver in commit 945e428 ("[intel]
Replace driver for Intel Gigabit NICs"), we have always worked around
this problem by skipping the PHY reset if the link is already up.
Enhance this workaround by explicitly checking for known-broken PCI
IDs.
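Roughly, the reset path can then decide whether to assert
CTRL.PHY_RST based on both the current link state and a per-device
flag carried in the PCI_ROM() driver data; a sketch under those
assumptions (macro names mirror the driver's style but are shown here
for illustration only):
  /* Reset the PHY only if the link is down and the device is not
   * flagged as having a broken PHY reset mechanism.
   */
  status = readl ( intel->regs + INTEL_STATUS );
  if ( ( status & INTEL_STATUS_LU ) ||
       ( intel->flags & INTEL_NO_PHY_RST ) ) {
          writel ( ( ctrl | INTEL_CTRL_RST ), intel->regs + INTEL_CTRL );
  } else {
          writel ( ( ctrl | INTEL_CTRL_RST | INTEL_CTRL_PHY_RST ),
                   intel->regs + INTEL_CTRL );
  }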
Reported-by: Robin Smidsrød <robin@smidsrod.no>
Tested-by: Robin Smidsrød <robin@smidsrod.no>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
[intel] Add support for mailbox used by virtual functions
Virtual functions use a mailbox to communicate with the physical
function driver: this covers functionality such as obtaining the MAC
address.
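A simplified sketch of the mailbox exchange used to retrieve the MAC
address (send a reset message, then read back the PF's reply). The
register offsets and bit values shown follow the Linux igbvf mailbox
definitions; the macro names are hypothetical, and the
buffer-ownership handshake and timeout handling are omitted for
brevity:
  #define INTELVF_MBCTRL       0x0c40UL      /* VF mailbox control/status */
  #define INTELVF_MBCTRL_REQ   0x00000001UL  /* Request for PF ready */
  #define INTELVF_MBCTRL_PFACK 0x00000020UL  /* PF acknowledged message */
  #define INTELVF_MBMEM        0x0800UL      /* Mailbox message memory */
  #define INTELVF_MSG_RESET    0x00000001UL  /* VF reset request */

  /* Place message in mailbox memory and signal the PF */
  writel ( INTELVF_MSG_RESET, intel->regs + INTELVF_MBMEM );
  writel ( INTELVF_MBCTRL_REQ, intel->regs + INTELVF_MBCTRL );

  /* Wait for the PF to acknowledge, then read the reply (which
   * includes the assigned MAC address) back from mailbox memory.
   */
  while ( ! ( readl ( intel->regs + INTELVF_MBCTRL ) &
              INTELVF_MBCTRL_PFACK ) )
          mdelay ( 1 );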
Signed-off-by: Michael Brown <mcb30@ipxe.org>
[intel] Allow for the use of advanced TX descriptors
Intel virtual function NICs almost work with the use of "legacy"
transmit and receive descriptors (which are backwards compatible right
back to the original Intel Gigabit NICs).
Unfortunately the "TX switching" feature (which allows for VM<->VM
traffic to be looped back within the NIC itself) does not work when a
legacy TX descriptor is used: the packet is instead sent onto the
wire.
Fix by allowing for the use of an "advanced" TX descriptor (containing
exactly the same information as is found in the "legacy" descriptor).
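For illustration, the layout of an advanced transmit data descriptor
as described in the 82576/i350 datasheets: it carries the same buffer
address and length as a legacy descriptor, but must set DEXT and
DTYP=0011b and repeat the length in the PAYLEN field. The structure
and macro names here are hypothetical:
  struct intel_tx_adv_descriptor {
          uint64_t address;         /* Data buffer address */
          uint32_t cmd_type_len;    /* DCMD, DTYP and buffer length */
          uint32_t olinfo_status;   /* PAYLEN and status */
  } __attribute__ (( packed ));

  #define INTEL_ADV_DTYP_DATA  0x00300000UL  /* Advanced data descriptor */
  #define INTEL_ADV_DCMD_EOP   0x01000000UL  /* End of packet */
  #define INTEL_ADV_DCMD_IFCS  0x02000000UL  /* Insert FCS */
  #define INTEL_ADV_DCMD_RS    0x08000000UL  /* Report status */
  #define INTEL_ADV_DCMD_DEXT  0x20000000UL  /* Extended (advanced) format */
  #define INTEL_ADV_PAYLEN(len) ( ( ( uint32_t ) (len) ) << 14 )

  desc->address = cpu_to_le64 ( addr );
  desc->cmd_type_len = cpu_to_le32 ( len | INTEL_ADV_DTYP_DATA |
                                     INTEL_ADV_DCMD_EOP | INTEL_ADV_DCMD_IFCS |
                                     INTEL_ADV_DCMD_RS | INTEL_ADV_DCMD_DEXT );
  desc->olinfo_status = cpu_to_le32 ( INTEL_ADV_PAYLEN ( len ) );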
Signed-off-by: Michael Brown <mcb30@ipxe.org>
[intel] Force RX polling on VMware emulated 82545em
The emulated Intel 82545em in some versions of VMware (observed with
ESXi v5.1) seems to sometimes fail to set the RXT0 bit in the
interrupt cause register (ICR), causing iPXE to stop receiving
packets. Work around this problem (for the 82545em only) by always
polling the receive queue regardless of the state of the ICR.
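In sketch form, the workaround amounts to treating RXT0 as always set
when polling a flagged device; the register offset and bit value
follow the standard ICR layout, while the flag and macro names are
illustrative:
  #define INTEL_ICR      0x000c0UL      /* Interrupt Cause Read */
  #define INTEL_IRQ_RXT0 0x00000080UL   /* Receive timer interrupt */

  /* Check for and process received packets */
  icr = readl ( intel->regs + INTEL_ICR );
  if ( intel->flags & INTEL_VMWARE )
          icr |= INTEL_IRQ_RXT0;        /* Pretend RXT0 is always set */
  if ( icr & INTEL_IRQ_RXT0 )
          intel_poll_rx ( netdev );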
Reported-by: Slava Bendersky <volga629@networklab.ca>
Tested-by: Slava Bendersky <volga629@networklab.ca>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
[intel] Apply PBS/PBA errata workaround only to ICH8 PCI device IDs
ICH8 devices have an errata which requires us to reconfigure the
packet buffer size (PBS) register, and correspondingly adjust the
packet buffer allocation (PBA) register. The "Intel I/O Controller
Hub ICH8/9/10 and 82566/82567/82562V Software Developer's Manual"
notes for the PBS register that:
  10.4.20 Packet Buffer Size - PBS (01008h; R/W)
    Note: The default setting of this register is 20 KB and is
          incorrect.  This register must be programmed to 16 KB.
    Initial value: 0014h
                   0018h (ICH9/ICH10)
It is unclear from this comment precisely which devices require the
workaround to be applied. We currently attempt to err on the side of
caution: if we detect an initial value of either 0x14 or 0x18 then the
workaround will be applied. If the workaround is applied
unnecessarily, then the effect should be just that we use less than
the full amount of the available packet buffer memory.
Unfortunately this approach does not play nicely with other device
drivers. For example, the Linux e1000e driver will rewrite PBA while
assuming that PBS still contains the default value, which can result
in inconsistent values between the two registers, and a corresponding
inability to transmit or receive packets. Even more unfortunately,
the contents of PBS and PBA are not reset by anything less than a
power cycle, meaning that this error condition will survive a hardware
reset.
The Linux driver (written and maintained by Intel) applies the PBS/PBA
errata workaround only for devices in the ICH8 family, identified via
the PCI device ID. Adopt a similar approach, using the PCI_ROM()
driver data field to indicate when the workaround is required.
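In outline, the driver data field of the relevant PCI_ROM() entries
carries a flag which the reset path then checks; the flag value,
device IDs and helper function shown below are illustrative only:
  #define INTEL_PBS_ERRATA 0x00000004UL   /* Illustrative flag bit */

  /* ICH8-family entries are flagged; later ICH generations are not */
  PCI_ROM ( 0x8086, 0x104c, "82562v", "82562V (ICH8)", INTEL_PBS_ERRATA ),
  PCI_ROM ( 0x8086, 0x10f5, "82567lm", "82567LM (ICH9)", 0 ),

  /* ...and the reset path applies the workaround only when flagged */
  if ( intel->flags & INTEL_PBS_ERRATA )
          intel_fix_pbs_pba ( intel );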
Reported-by: Donald Bindner <dbindner@truman.edu>
Debugged-by: Donald Bindner <dbindner@truman.edu>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
As of commit d28bb51 ("[tcp] Defer sending ACKs until all received
packets have been processed"), increasing the RX ring size will
increase the number of received packets per transmitted ACK (since
each poll will process up to one complete receive ring). Under KVM,
this can make a substantial (up to ~200%) difference to the overall
download speed, since transmissions are very expensive.
Increase the ring fill level from four to eight packets: this
increases the download speed by around 50% at a cost of around 8kB of
heap space. Further speedups are possible by increasing the ring size
further, but it would be preferable to find alternative methods which
do not use noticeable amounts of heap space.
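For reference, the fill level is simply a small constant bounding how
many receive descriptors are kept posted at any one time; a sketch of
the refill loop, with hypothetical structure and macro names:
  #define INTEL_RX_FILL 8       /* Receive ring fill level */

  /* Refill receive ring up to the fill level */
  while ( ( intel->rx.prod - intel->rx.cons ) < INTEL_RX_FILL ) {
          /* ...allocate an I/O buffer, populate the next receive
           * descriptor, and advance the producer counter...
           */
          intel->rx.prod++;
  }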
Tested-by: Robin Smidsrød <robin@smidsrod.no>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
[intel] Avoid completely filling the TX descriptor ring
It is unclear from the datasheets whether or not the TX ring can be
completely filled (i.e. whether writing the tail value as equal to the
current head value will cause the ring to be treated as completely
full or completely empty). It is very plausible that this edge case
could differ in behaviour between real hardware and the many
implementations of an emulated Intel NIC found in various virtual
machines. Err on the side of caution and always leave at least one
ring entry empty.
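A sketch of the corresponding producer/consumer check, leaving one
descriptor unused so that head==tail can only ever mean "empty"; the
names and ring size are hypothetical:
  #define INTEL_NUM_TX_DESC 16                        /* Illustrative ring size */
  #define INTEL_TX_FILL     ( INTEL_NUM_TX_DESC - 1 ) /* Maximum fill level */

  /* Defer the packet if the ring already holds the maximum number
   * of descriptors that we allow ourselves to post.
   */
  if ( ( intel->tx.prod - intel->tx.cons ) >= INTEL_TX_FILL )
          return -ENOBUFS;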
Signed-off-by: Michael Brown <mcb30@ipxe.org>
[intel] Expose functionality to be shared with intelx driver
The Intel 10 Gigabit NICs have a datapath that is almost
register-compatible with the Intel 1 Gigabit NICs. Expose common
functionality to avoid duplication of code in the new "intelx" driver.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
[intel] Remove hardcoded offsets for descriptor ring registers
The Intel 10 Gigabit NICs use the same simplified (aka "legacy")
descriptor format and the same layout for descriptor register blocks
as the Intel 1 Gigabit NICs. The offsets of the descriptor register
blocks are not the same.
Simplify reuse of the existing code by removing all hardcoded offsets
for registers within descriptor register blocks, and ensuring that all
offsets are calculated using the descriptor register block base
address provided via intel_init_ring().
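The resulting pattern is that each ring records the base address of
its register block and all individual registers are addressed as
small offsets from that base; a sketch with illustrative values
(which happen to match the public 1 Gigabit register map):
  #define INTEL_TD    0x03800UL  /* TX descriptor register block base */
  #define INTEL_RD    0x02800UL  /* RX descriptor register block base */

  #define INTEL_xDBAL 0x00       /* Descriptor base address low */
  #define INTEL_xDBAH 0x04       /* Descriptor base address high */
  #define INTEL_xDLEN 0x08       /* Descriptor ring length */
  #define INTEL_xDH   0x10       /* Descriptor head */
  #define INTEL_xDT   0x18       /* Descriptor tail */

  /* Program ring address and length relative to the register block
   * base passed in via intel_init_ring()
   */
  writel ( ( address & 0xffffffffUL ),
           intel->regs + ring->reg + INTEL_xDBAL );
  writel ( ( ( ( uint64_t ) address ) >> 32 ),
           intel->regs + ring->reg + INTEL_xDBAH );
  writel ( ring->len, intel->regs + ring->reg + INTEL_xDLEN );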
Signed-off-by: Michael Brown <mcb30@ipxe.org>
On the i350, the datasheet contradicts itself by stating that the default
value of RXDCTL.ENABLE for queue zero is both set (according to the
"Receive Initialization" section) and unset (according to the "Receive
Descriptor Control - RXDCTL" section). Empirical evidence suggests
that the default value is unset.
Explicitly enable both transmit and receive queues to avoid any
ambiguity.
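A sketch of the explicit enable, assuming a per-ring register block
base as used elsewhere in the driver and the documented ENABLE bit
(bit 25) of the queue control register; the macro names are
illustrative:
  #define INTEL_xDCTL        0x28          /* Queue control register */
  #define INTEL_xDCTL_ENABLE 0x02000000UL  /* Queue enable */

  /* Enable the queue explicitly rather than relying on the
   * (ambiguously documented) power-on default.
   */
  dctl = readl ( intel->regs + ring->reg + INTEL_xDCTL );
  dctl |= INTEL_xDCTL_ENABLE;
  writel ( dctl, intel->regs + ring->reg + INTEL_xDCTL );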
Signed-off-by: Michael Brown <mcb30@ipxe.org>