Hi
We're noticing that as our list of zones grows (about 480k right now), adding a new zone or deleting an existing one keeps getting slower. We always make our modifications as part of a transaction, and the time appears to be spent in the commit phase.
An example timing:
# time /opt/knot/sbin/knotc ... conf-begin
OK
real 0m0.010s
user 0m0.000s
sys 0m0.010s
# time /opt/knot/sbin/knotc ... conf-unset zone.domain example.com
OK
real 0m0.010s
user 0m0.000s
sys 0m0.010s
# time /opt/knot/sbin/knotc ... conf-commit
OK
real 0m2.330s
user 0m0.000s
sys 0m0.009s
#
As you can see, it took > 2 seconds to commit the transaction that removes just the example.com zone. Similarly, it takes > 2 seconds to commit the transaction that adds the zone back.
Given the time is real time and not sys/user, I presume knotc is waiting on knotd to complete the work. I used perf to record a CPU profile of knotd while the commit was running, but nothing hugely stood out to me.
10.75% knotd libc.so.6 [.] __memcmp_avx2_movbe
6.03% knotd knotd [.] __popcountdi2
5.89% knotd knotd [.] ns_first_leaf
5.25% knotd libc.so.6 [.] pthread_mutex_lock@@GLIBC_2.2.5
3.85% knotd liblmdb.so.0.0.0 [.] 0x0000000000003706
3.72% knotd knotd [.] ns_find_branch.part.0
2.76% knotd knotd [.] trie_get_try
2.63% knotd liblmdb.so.0.0.0 [.] 0x00000000000069d2
2.34% knotd libknot.so.14.0.0 [.] knot_dname_lf
1.92% knotd liblmdb.so.0.0.0 [.] mdb_cursor_get
1.72% knotd knotd [.] create_zonedb
1.68% knotd knotd [.] twigbit.isra.0
1.68% knotd knotd [.] catalogs_generate
1.36% knotd knotd [.] twigoff.isra.0
1.28% knotd knotd [.] hastwig.isra.0
1.28% knotd knotd [.] db_code
1.27% knotd libknot.so.14.0.0 [.] find_item
1.11% knotd libknot.so.14.0.0 [.] knot_dname_size
1.04% knotd knotd [.] zonedb_reload
0.99% knotd libc.so.6 [.] _int_free
0.99% knotd liblmdb.so.0.0.0 [.] 0x0000000000003ce8
0.96% knotd liblmdb.so.0.0.0 [.] memcmp@plt
0.95% knotd liblmdb.so.0.0.0 [.] mdb_cursor_open
0.88% knotd libc.so.6 [.] malloc
0.88% knotd knotd [.] conf_db_get
0.87% knotd knotd [.] ns_next_leaf
0.82% knotd libknot.so.14.0.0 [.] iter_set
0.75% knotd knotd [.] evsched_cancel
0.73% knotd libknot.so.14.0.0 [.] find
...
Our config is pretty simple; conf-export output looks like this:
server:
    rundir: "/local/knot_dns/run/"
    user: "nobody"
    pidfile: "/local/knot_dns/run/knot.pid"
    listen: [ ... ]
log:
  - target: "syslog"
    any: "info"
statistics:
    timer: "10"
    file: "/tmpfs/knot_dns_stats.yaml"
database:
    storage: "/local/knot_dns/data"
mod-stats:
  - id: "default"
    request-protocol: "on"
    server-operation: "on"
    request-bytes: "on"
    response-bytes: "on"
    edns-presence: "on"
    flag-presence: "on"
    response-code: "on"
    request-edns-option: "on"
    response-edns-option: "on"
    reply-nodata: "on"
    query-type: "on"
    query-size: "on"
    reply-size: "on"
template:
  - id: "default"
    global-module: "mod-stats/default"
    storage: "/local/knot_dns/zones/"
zone:
  - domain: "example.com."
    template: "default"
  ... 478,000 more domains all the same ...
Current files on disk are:
# ls -l /local/knot_dns/data/*
/local/knot_dns/data/catalog:
total 0
/local/knot_dns/data/journal:
total 0
/local/knot_dns/data/keys:
total 0
/local/knot_dns/data/timers:
total 75880
-rw-rw---- 1 root root 77697024 Jun 24 09:26 data.mdb
-rw-rw---- 1 root root 2432 Jul 17 01:05 lock.mdb
/local/knot_dns/data/timing:
total 0
This machine is not slow or constrained in any way: it's 24-core, 3.6 GHz, 64 GB RAM, NVMe drives, etc. Load is very low (<1) with plenty of free resources.
So what I'm wondering is:
1. Is this normal? It doesn't feel right that adding or removing a single domain takes > 2 seconds, however large the existing zone database is.
2. Is there any way to improve this? Batching multiple adds/deletes within a single transaction works and we do that where we can (see the sketch below), but there are cases where we can't, and I'd really like to understand why this is as slow as it is.
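For reference, the batched variant we use where we can looks roughly like this (the zone names are placeholders; control arguments elided as above):
# /opt/knot/sbin/knotc ... conf-begin
# /opt/knot/sbin/knotc ... conf-unset zone.domain old1.example
# /opt/knot/sbin/knotc ... conf-unset zone.domain old2.example
# /opt/knot/sbin/knotc ... conf-set zone.domain new1.example
# /opt/knot/sbin/knotc ... conf-set zone.domain new2.example
# time /opt/knot/sbin/knotc ... conf-commit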
Thanks in advance
Rob
Hello,
I'm not sure I'm posting in the right place. Don't hesitate to tell me
if it's not.
I began testing Knot Resolver 6.x for a future project
(deploying a DNS resolver with blocked-domain lists).
I would like to know if it's possible to split config.yaml into
several files (the main config in one file, the acl and views sections in
another, and the data-local section with the RPZ lists and the tags that link
the ACL lists to the blocklists in a third), and if the answer is yes, how can I do it?
Thank you for your help.
Regards,
Stephane
I ultimately found that I needed to use CH in kdig, but I wonder how I/we could update the documentation or add an alias for chaos.
Working Example:
dig +short @<dns server> version.bind chaos txt
dig +short @<dns server> version.bind CH txt
kdig +short @<dns server> version.bind CH txt
Problem Example:
kdig +short @<dns server> version.bind chaos txt
I had to read the source code to find the answer quickly.
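As a possible workaround, kdig also has an explicit class option (-c), which I believe accepts CH, so something like this should also work:
kdig +short -c CH -t TXT @<dns server> version.bind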
Hi,
for benchmarking purposes I need to find out how long it takes to sign zone
files of different sizes on different hardware. What is the best way
to find out the exact time the signing process takes?
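One approach I am considering (assuming knotc's blocking mode and the zone-sign command are usable for this) is to time a forced re-sign, e.g.:
time knotc -b zone-sign example.com
but I am not sure whether that captures only the signing itself.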
Thanks!
BR
Thomas
Hi,
when signing a zone file I receive this error in the log:
"zone event 're-sign' failed (not enough space provided)"
Can you tell me what the limiting factor is here?
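In case it's relevant: the size-related knobs I could find are the journal limits (real Knot options, though whether they are what is running out here is exactly my question), e.g.:
database:
    journal-db-max-size: 20G
zone:
  - domain: example.com
    journal-max-usage: 1G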
Thanks!
BR
Thomas
Hello,
Could you please clarify whether Knot can perform a zone transfer not
from the first master listed, but from the one that sent the NOTIFY? The
masters are configured in the following order:
remote:
  - id: master
    address: [ "192.168.58.151", "192.168.58.134" ]
When a NOTIFY is sent from 192.168.58.134, the zone transfer is still
performed from 192.168.58.151.
Here are the relevant log entries:
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] notify, incoming, remote 192.168.58.134@32888 TCP, serial 2006
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] refresh, remote 192.168.58.151@53, remote serial 2006, zone is outdated
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] IXFR, incoming, remote 192.168.58.151@53 TCP, receiving AXFR-style IXFR
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] AXFR, incoming, remote 192.168.58.151@53 TCP, started
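In case it helps frame the question, one variation I could try (valid syntax as far as I can tell, though whether it changes which master is picked for the transfer is exactly what I'm asking) is to split the addresses into separate remotes and list both under the zone:
remote:
  - id: master1
    address: 192.168.58.151
  - id: master2
    address: 192.168.58.134
zone:
  - domain: chacha.com.
    master: [ master1, master2 ]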
Thank you for your product and for your help!
Best regards,
A.A. Basov
Hi,
this happened to me for the second time: https://dnsviz.net tells me:
| enfer-du-nord.net/CDNSKEY: The CDNSKEY RRset must be signed with a key that is represented in both the
| current DNSKEY and the current DS RRset. See RFC 7344, Sec. 4.1.
| enfer-du-nord.net/CDS: The CDS RRset must be signed with a key that is represented in both the current
| DNSKEY and the current DS RRset. See RFC 7344, Sec. 4.1.
I do not understand what that means.
#) I haven't modified my KSK for some time now
#) I did notify my parent zone about a modified list of nameservers (via registrar's web portal)
I am not absolutely sure whether the latter is the cause of these error messages.
I 'fixed' that issue by re-uploading my unmodified KSK DNSKEY (via registrar's web portal).
Hmm, how can I fix that issue the right way?
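For reference, the raw data behind that check can be pulled with plain dig queries (the server placeholders are mine): the key tag in the RRSIGs covering CDS and CDNSKEY should appear in both the current DNSKEY set and the parent's DS set.
dig +dnssec enfer-du-nord.net CDS @<my nameserver>
dig +dnssec enfer-du-nord.net CDNSKEY @<my nameserver>
dig enfer-du-nord.net DNSKEY @<my nameserver>
dig enfer-du-nord.net DS @<parent nameserver>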
Any hints are highly welcome,
Michael
Hi,
given that an IPv6 /xy block might be delegated to me by my ISP, I began investigating Knot DNS's functionality with regard to ip6.arpa.
In doing so I stumbled over the synthrecord module and do not really understand what it is used for.
From https://www.knot-dns.cz/docs/3.4/singlehtml/index.html#synthrecord-automati…
"Records are synthesized only if the query can't be satisfied from the zone."
Please excuse my ignorance, but why would/should/must one return something other than the following for hosts not in the zone?
kbn> host 2001:dead:beef::1
Host 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.f.e.e.b.d.a.e.d.1.0.0.2.ip6.arpa not found: 3(NXDOMAIN)
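From the docs, my (possibly wrong) understanding is that the module is for the opposite case, where you want generated answers instead of NXDOMAIN, e.g. PTR records for a dynamic address pool. A rough sketch based on the documented options (the id, prefix, origin and network below are made up):
mod-synthrecord:
  - id: dyn
    type: reverse
    prefix: dyn-
    origin: example.com
    network: 2001:db8::/32
zone:
  - domain: 8.b.d.0.1.0.0.2.ip6.arpa
    module: mod-synthrecord/dyn
With that, a PTR query for an address inside 2001:db8::/32 that has no explicit record would get a synthesized answer like dyn-<address>.example.com instead of NXDOMAIN.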
Any feedback is highly appreciated, thanks.
Regards,
Michael
Hi,
This is just a possible feature request. We’re planning on using Knot for user-hosted
domains. To do that we’ll have to add and remove zones dynamically,
so we’ve enabled the config db.
What surprised us is that this means that the config file isn’t used at all anymore
(except you can use it to prime the config db).
As it is, we’ll have to embrace the config db, which makes our Ansible playbook
more complicated. It’s easy to add a config file template in Ansible; it’s more
complicated to issue the `knotc conf-begin; knotc conf-set; knotc conf-commit` logic (see the sketch below).
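For concreteness, the logic I mean is roughly this (a sketch only; the zone and template names are placeholders, and error handling is omitted):
knotc conf-begin
knotc conf-set 'zone[customer1.example]'
knotc conf-set 'zone[customer1.example].template' 'hosted'
knotc conf-commit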
I wish Knot were more like NSD, where you have the config file nsd.conf, but if
you add zones with `nsd-control addzone ….` they get added to a separate zonelist
file, which NSD reads on startup. It means we can have a static config file, but
still be able to add and delete zones dynamically.
NSD doesn’t have automatic DNSSEC key management, and catalog zones in Knot
are really easy to use, which is why we’re going with Knot for this project. I just
wanted to put it out there as an idea for the future :)
.einar