Marek Vavruša wrote:
> We've also spent some time on other related projects, such as the
> comparison of authoritative name servers, which you can find here:
> https://www.knot-dns.cz/pages/benchmark.html
> The whole effort is open source; you can try it yourself or even
> create new test cases. Any feedback is welcome.
> https://gitlab.labs.nic.cz/labs/dns-benchmarking
Hi,

I wonder if you could publish more technical details about your
benchmark setup, especially the hardware (CPU/NIC models, etc.)? I see
you've tested "Intel 10 GbE" and "Intel 1GbE" network adapters, but
these are large families of different hardware models. Apologies if I
overlooked these details somewhere.
I've recently been testing different models of Intel 1GbE adapters and
found a large variation in the maximum response rate the same DNS
server can deliver depending on the network adapter. To take an extreme
example, I was able to get over 100% more performance from the latest
Intel I350 "server" card than from a very ancient Intel 82572EI
"desktop" card in an otherwise identical system. Even between the Intel
I210 and I217 adapters, which can be found together on a lot of
current-generation single-socket Xeon motherboards from Supermicro, I
still found large differences. And these are all considered "Intel
1GbE" network adapters.
I would also be interested to know about the distribution of IP
addresses and port numbers in your benchmark DNS query traffic. If I
understand things correctly, Intel 1GbE (and probably 10GbE) adapters
that support multiple RX queues and "Receive Side Scaling" are usually
configured to select from the available RX queues based on a hash of
only the IP source and destination addresses, e.g.:
# ethtool -n <INTERFACE> rx-flow-hash udp4
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
That may result in a single RX queue processing the incoming DNS queries
in a benchmark if the queries are all sourced from a single IP address,
which may be detrimental. It may be advantageous to configure the
network adapter to also hash over the source and destination ports, if
supported, e.g.:
# ethtool -N <INTERFACE> rx-flow-hash udp4 sdfn
# ethtool -n <INTERFACE> rx-flow-hash udp4
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]
There are statistical counters available via the "ethtool -S" command
to verify whether packets are being evenly balanced among the RX queues
on a network adapter with multiple RX queues.
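As a sketch, something like the following awk pipeline can summarize the
per-queue distribution. The counter names are driver-specific (the
"rx_queue_N_packets" form here is what Intel's igb/ixgbe drivers use),
and the sample counter values are inlined purely for illustration; in
practice you would pipe "ethtool -S <INTERFACE>" into the awk:

```shell
# Summarize per-RX-queue packet counts from "ethtool -S" output.
# Sample output is inlined so the pipeline can be tried anywhere;
# replace the heredoc with: ethtool -S <INTERFACE> |
cat <<'EOF' |
     rx_queue_0_packets: 1500000
     rx_queue_1_packets: 1479312
     rx_queue_2_packets: 1502844
     rx_queue_3_packets: 12
EOF
awk '/rx_queue_[0-9]+_packets/ {
        gsub(":", "", $1)       # strip trailing colon from counter name
        sum += $2; n++
        print $1, $2
     }
     END { if (n) printf "average per queue: %d\n", sum / n }'
```

A queue sitting far below the average (like queue 3 in the sample data)
is the symptom of all flows hashing to the other queues.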
It may also be advantageous to configure "Transmit Packet Steering" [0].
If I understand things correctly, network adapters with multiple TX
queues will only utilize a single TX queue until XPS is configured.
[0]
http://www.mjmwired.net/kernel/Documentation/networking/scaling.txt#364
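For reference, XPS is configured per TX queue by writing a hex CPU
bitmask into sysfs, as described in the document above. A minimal
sketch, assuming an interface named eth0 and a simple one-CPU-per-queue
layout (both assumptions; run as root and adjust for your system):

```shell
# Sketch: spread the TX queues of one interface across CPUs with XPS.
# eth0 and the one-CPU-per-queue mapping are assumptions, not a
# recommendation; see Documentation/networking/scaling.txt for details.
IFACE=eth0
i=0
for txq in /sys/class/net/"$IFACE"/queues/tx-*; do
    # CPU i gets queue i: mask is a hex bitmask with only bit i set.
    printf '%x' $((1 << i)) > "$txq/xps_cpus"
    i=$((i + 1))
done
```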
--
Robert Edmonds
edmonds@debian.org