I have investigated further and want to share my results with other efusA9 or i.MX6 users as well as the developers at F&S.
Environment:
- Board: efusA9 (Single Core CPU)
- OS: XIPiMX6_C8E_V150_BETA_150609.bin (WEC2013)
- Host-PC NIC: Intel PRO/1000 PT Server Adapter
- Host-OS: Windows 7
- Switches: EKI-2725 for Gigabit, Netgear FS108 for 100 MBit/s
- FTP Tool: Total Commander
- UDP App: Custom
Results:
Network performance is only a problem at 1 GBit/s; at 100 MBit/s everything is fine. I did not try whether the link speed could be forced to 100 MBit/s with a registry setting, I simply used a 100 MBit/s switch. One cannot just set the mode to 100 MBit/s full duplex, because that would result in a duplex mismatch when connected to an auto-negotiating link partner (a partner that gets no negotiation response falls back to half duplex), which in turn causes packet loss. The only way to force the efusA9 to 100 MBit/s would be to set it to 100 MBit/s HALF DUPLEX, which is not desirable, as collisions might occur.
At Gigabit the performance heavily depends on the IEEE 802.3x flow control mechanism. I was only able to reach about 200 kByte/s FTP transfer speed on Gigabit connections, regardless of whether I used a direct connection or a Gigabit switch.
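For readers unfamiliar with the mechanism: an IEEE 802.3x PAUSE frame is an ordinary Ethernet frame sent to the reserved multicast address 01:80:C2:00:00:01 with EtherType 0x8808, MAC Control opcode 0x0001, and a 16-bit pause time measured in units of 512 bit times. A minimal sketch of that layout (the source MAC below is just a placeholder, not taken from my setup):

```python
import struct

PAUSE_DST = bytes.fromhex("0180C2000001")  # reserved MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808             # MAC Control EtherType
PAUSE_OPCODE = 0x0001                      # PAUSE operation

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an IEEE 802.3x PAUSE frame (without the FCS, which the NIC appends).

    pause_quanta: 0..65535; one quantum is 512 bit times
    (512 ns at 1 GBit/s, 5.12 us at 100 MBit/s).
    """
    header = PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE)
    payload = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
    frame = header + payload
    # Pad to the 60-byte minimum Ethernet frame size (FCS not counted).
    return frame + b"\x00" * (60 - len(frame))

frame = build_pause_frame(bytes.fromhex("020000000001"), 0xFFFF)  # placeholder MAC
print(len(frame))  # 60
```

A receiver that honors flow control stops transmitting for the requested number of quanta; a sender that ignores these frames (as my host did with the mislabeled setting) keeps blasting frames at full Gigabit rate.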
PLEASE NOTE: There is obviously a translation error in the German Intel PRO/1000 network driver dialog that allows setting the flow control mode.
German version: "Tx aktiviert: Der Adapter hält die Übertragung an, wenn er einen Flusssteuerungs-Frame von einem Verbindungspartner empfängt." (literally: "Tx enabled: The adapter halts transmission when it receives a flow control frame from a link partner.")
English version: "Tx Enabled : The adapter generates a flow control frame when its receive queue reaches a predefined limit."
The two descriptions contradict each other. Trusting the German text, I used this setting and accidentally disabled reacting to received Flow Control PAUSE frames.
Therefore, using the "Tx Enabled" mode on a direct connection to the efusA9 overruns the i.MX6 Ethernet device with too many frames per unit of time. This shows when sending a UDP datagram larger than one Ethernet frame (e.g. 18088 bytes), which is fragmented into multiple IP fragments/frames. The efusA9 seems to lose at least one fragment and is then unable to reassemble the whole datagram. The interesting point is the following: in this situation (overrunning the efusA9 because the host ignores its Flow Control PAUSE frames) the TCP performance is much better, at about 6500 kByte/s.
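To illustrate why a single lost frame costs the whole datagram: with the standard 1500-byte MTU, an 18088-byte UDP payload plus the 8-byte UDP header is split into 13 IP fragments, and IPv4 reassembly fails if any one of them is dropped. A rough sketch of the arithmetic:

```python
import math

MTU = 1500       # standard Ethernet MTU
IP_HEADER = 20   # IPv4 header without options
UDP_HEADER = 8

def fragment_count(udp_payload: int) -> int:
    """Number of IP fragments needed for a UDP datagram of the given payload size."""
    ip_payload = udp_payload + UDP_HEADER       # 18088 + 8 = 18096 bytes to carry
    per_fragment = (MTU - IP_HEADER) // 8 * 8   # fragment offsets are multiples of 8 -> 1480
    return math.ceil(ip_payload / per_fragment)

print(fragment_count(18088))  # 13
```

So every large datagram is a burst of 13 back-to-back frames, and losing any single one of them silently discards the entire 18088-byte datagram, which matches the observed behavior.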
Conclusion:
The best solution for my needs at the moment (WEC2013 application development) is to use a 100 MBit/s switch. It ensures no packet loss in UDP (our application protocol) and fast TCP communication (needed for deployment and debugging). When the product ships, it will be connected to Gigabit switches and operated via UDP; normally no TCP communication will occur then (which would be very slow...). So I am lucky and can live with the drawbacks for now. But what if I have to implement a TCP application in the future? Or someone else runs a TCP application on a Gigabit switch? The 200 kByte/s limit should be improved.