I am definitely interested in examples!
Reading up on groups, is it that 'group A' may represent 'customer A'
and have a specific set of primary/master nameservers, 'group B' ==
'customer B' with a different primary, and so on?
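If I understand RFC 9432's group property, yes - something like this in the
catalog zone (a sketch of my reading of the RFC, untested; all names are
hypothetical):

; hypothetical catalog zone with two customer groups
catalog.invalid.                   0  SOA  invalid. invalid. 1 3600 600 2147483646 0
catalog.invalid.                   0  NS   invalid.
version.catalog.invalid.           0  TXT  "2"
; customer A's member zone, tagged with group "customer-a"
a1b2.zones.catalog.invalid.        0  PTR  zone1.customer-a.example.
group.a1b2.zones.catalog.invalid.  0  TXT  "customer-a"
; customer B's member zone, different group, hence different primaries
c3d4.zones.catalog.invalid.        0  PTR  zone1.customer-b.example.
group.c3d4.zones.catalog.invalid.  0  TXT  "customer-b"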
(also fixed to be plain text - hope this is more legible in the archives)
--Chris
Hello DNS people,
I am exploring migrating from PowerDNS, where we have a hidden primary (ns0)
and two public authoritative servers (ns1/ns2) kept in sync via SQL
replication, to using Knot DNS for ns1/ns2 with catalog zones to update
them. ns0 would remain PowerDNS (frontend, zone edits for customers, etc.).
We are looking at changing due to performance issues - "dns water torture"
or "random subdomain attacks" or whatever we're calling this these days.
Our test environment is more or less set up as described here:
* https://nick.bouwhuis.net/posts/2024-12-31-catalog-zones-powerdns-knot/
This is similar to the architecture described here:
* https://indico.dns-oarc.net/event/47/contributions/1008/attachments/963/185…
(Klaus from nic.at)
For some zones, we are secondary to a customer's zone. In this case the
primary IPs are listed in PowerDNS metadata. I am trying to wrap my head
around how this could work seamlessly, keeping the same workflow: add the
zone to PowerDNS, and it gets replicated via the catalog zone to ns1/ns2
(Knot). Does anyone have this working? Secondary operation is mentioned in
the PDF above, but no details are given.
The issues appear to be at least these two:
1) How do we tell ns1/ns2 (Knot) which IPs are the primaries for these
zones? The only thing I can think of is a separate script that generates a
Knot config file with this info - effectively the same as "back in the day"
with BIND. This completely negates the benefit of catalog zones for zones
where we are secondary. RFC 9432 does address this:
"Catalog zones on secondary name servers would have to be set up manually,
perhaps as static configuration, similar to how ordinary DNS zones are
configured when catalog zones or another automatic configuration mechanism
are not in place. "
The RFC then says you still have to keep the zone in the catalog anyhow -
it's not immediately clear to me how/why - and how it could be configured
per that last sentence (manually in the Knot conf) as well as in the
catalog - wouldn't this be two declarations of the same zone?
"Additionally, the secondary needs to be configured as a catalog consumer
for the catalog zone to enable processing of the member zones in the
catalog, such as automatic synchronization of the member zones for
secondary service"
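Reading the Knot docs, I think the per-customer primaries might be
expressible with per-group catalog templates, where each group points at a
different master. A sketch of what I have in mind (remote addresses, ids,
and template names are mine, and it assumes PowerDNS can set the group
property on member zones):

remote:
  - id: ns0
    address: 198.51.100.10        # hidden PowerDNS primary (hypothetical)
  - id: customer-a-primary
    address: 203.0.113.53         # customer A's own primary (hypothetical)

template:
  - id: catz                      # for the catalog zone itself
    master: ns0
  - id: default                   # members without a group transfer from ns0
    master: ns0
  - id: customer-a                # members in group "customer-a"
    master: customer-a-primary

zone:
  - domain: catalog.invalid.
    template: catz
    catalog-role: interpret
    catalog-template: [ default, customer-a ]

As I read it, the first listed catalog-template is the default and a
member's group property selects the template of the same name - but I would
love confirmation from someone who runs this.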
2) How would NOTIFY work? Our hidden ns0 (PowerDNS) keeps a copy of these
zones, but ns1/ns2 would be notified by the actual primary, and our ns0
would become out of date. Does Knot have something like also-notify to
always notify a given server? This may or may not be a problem, but without
it the zone data on ns0 would become completely stale. Some customers log
into our web portal to view the records of their secondary zones and expect
them to match.
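What I'm hoping for on the Knot side is something like this, if the
per-zone notify list works the way I read the docs (ids and address are
mine):

remote:
  - id: ns0
    address: 198.51.100.10        # the hidden PowerDNS box (hypothetical)

template:
  - id: customer-a                # zones where we are secondary to customer A
    master: customer-a-primary
    notify: [ ns0 ]               # notify ns0 after each incoming transfer

ns0 would of course also have to be configured as a secondary for those
zones so that it actually pulls the new data after the NOTIFY.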
If anyone has operational experience with this or just a big cluebat to hit
me with - let me know.
Cheers,
Chris
Hi
We're noticing that as our list of zones gets larger (about 480k right now), adding a new zone or deleting an existing zone keeps getting slower. We always make our modifications within a transaction, and the time appears to be spent in the commit phase.
An example timing.
# time /opt/knot/sbin/knotc ... conf-begin
OK
real 0m0.010s
user 0m0.000s
sys 0m0.010s
# time /opt/knot/sbin/knotc ... conf-unset zone.domain example.com
OK
real 0m0.010s
user 0m0.000s
sys 0m0.010s
# time /opt/knot/sbin/knotc ... conf-commit
OK
real 0m2.330s
user 0m0.000s
sys 0m0.009s
#
As you can see, it took > 2 seconds to commit the transaction that removes just the example.com zone. Similarly, it takes > 2 seconds to commit the transaction that adds the zone back.
Given the time is real time and not sys/user, I presume knotc is waiting on knotd to complete the work. I used perf to record a CPU profile of knotd while the commit was running, but nothing hugely stuck out at me.
10.75%  knotd  libc.so.6          [.] __memcmp_avx2_movbe
 6.03%  knotd  knotd              [.] __popcountdi2
 5.89%  knotd  knotd              [.] ns_first_leaf
 5.25%  knotd  libc.so.6          [.] pthread_mutex_lock@@GLIBC_2.2.5
 3.85%  knotd  liblmdb.so.0.0.0   [.] 0x0000000000003706
 3.72%  knotd  knotd              [.] ns_find_branch.part.0
 2.76%  knotd  knotd              [.] trie_get_try
 2.63%  knotd  liblmdb.so.0.0.0   [.] 0x00000000000069d2
 2.34%  knotd  libknot.so.14.0.0  [.] knot_dname_lf
 1.92%  knotd  liblmdb.so.0.0.0   [.] mdb_cursor_get
 1.72%  knotd  knotd              [.] create_zonedb
 1.68%  knotd  knotd              [.] twigbit.isra.0
 1.68%  knotd  knotd              [.] catalogs_generate
 1.36%  knotd  knotd              [.] twigoff.isra.0
 1.28%  knotd  knotd              [.] hastwig.isra.0
 1.28%  knotd  knotd              [.] db_code
 1.27%  knotd  libknot.so.14.0.0  [.] find_item
 1.11%  knotd  libknot.so.14.0.0  [.] knot_dname_size
 1.04%  knotd  knotd              [.] zonedb_reload
 0.99%  knotd  libc.so.6          [.] _int_free
 0.99%  knotd  liblmdb.so.0.0.0   [.] 0x0000000000003ce8
 0.96%  knotd  liblmdb.so.0.0.0   [.] memcmp@plt
 0.95%  knotd  liblmdb.so.0.0.0   [.] mdb_cursor_open
 0.88%  knotd  libc.so.6          [.] malloc
 0.88%  knotd  knotd              [.] conf_db_get
 0.87%  knotd  knotd              [.] ns_next_leaf
 0.82%  knotd  libknot.so.14.0.0  [.] iter_set
 0.75%  knotd  knotd              [.] evsched_cancel
 0.73%  knotd  libknot.so.14.0.0  [.] find
...
Our config is pretty simple, conf-export looks like:
server:
    rundir: "/local/knot_dns/run/"
    user: "nobody"
    pidfile: "/local/knot_dns/run/knot.pid"
    listen: [ ... ]

log:
  - target: "syslog"
    any: "info"

statistics:
    timer: "10"
    file: "/tmpfs/knot_dns_stats.yaml"

database:
    storage: "/local/knot_dns/data"

mod-stats:
  - id: "default"
    request-protocol: "on"
    server-operation: "on"
    request-bytes: "on"
    response-bytes: "on"
    edns-presence: "on"
    flag-presence: "on"
    response-code: "on"
    request-edns-option: "on"
    response-edns-option: "on"
    reply-nodata: "on"
    query-type: "on"
    query-size: "on"
    reply-size: "on"

template:
  - id: "default"
    global-module: "mod-stats/default"
    storage: "/local/knot_dns/zones/"

zone:
  - domain: "example.com."
    template: "default"
    ... 478,000 more domains all the same ...
Current files on disk are:
# ls -l /local/knot_dns/data/*
/local/knot_dns/data/catalog:
total 0
/local/knot_dns/data/journal:
total 0
/local/knot_dns/data/keys:
total 0
/local/knot_dns/data/timers:
total 75880
-rw-rw---- 1 root root 77697024 Jun 24 09:26 data.mdb
-rw-rw---- 1 root root 2432 Jul 17 01:05 lock.mdb
/local/knot_dns/data/timing:
total 0
This machine is not slow or constrained in any way: 24 cores, 3.6 GHz, 64 GB RAM, NVMe drives, etc. Load is very low (< 1) with plenty of free resources.
So what I'm wondering is:
1. Is this normal? It doesn't feel right that adding or removing a single domain takes > 2 seconds, regardless of how large the existing zone database is.
2. Is there any way to improve this? Batching multiple adds/deletes within one transaction works, and we do that where we can (see the sketch below), but there are cases where we can't, and I'd really like to understand why a single-zone commit is as slow as it is.
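For reference, the batched form mentioned in point 2 (zone names
hypothetical; socket/config options omitted):

# one transaction, many zone changes, a single commit
knotc conf-begin
knotc conf-set zone.domain new1.example.
knotc conf-set 'zone[new1.example.].template' default
knotc conf-set zone.domain new2.example.
knotc conf-set 'zone[new2.example.].template' default
knotc conf-unset zone.domain old1.example.
knotc conf-commit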
Thanks in advance
Rob
Hello,
I'm not sure I'm posting in the right place. Don't hesitate to tell me
if it's not.
I have begun testing Knot Resolver 6.x for a future project
(deploying a DNS resolver with blocked-domain lists).
I would like to know whether it's possible to split config.yaml into
several files (the main config in one file, the acl and views sections in
another, and the local-data section with the RPZ lists and the tags that
tie ACLs to blocklists in a third) - and if the answer is yes, how do I do it?
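If there is no native include mechanism, the fallback I'm considering is
assembling the file from fragments with a merge tool before
starting/reloading the resolver (mikefarah's yq v4 here; the fragment file
names are mine):

# deep-merge the top-level maps of all input fragments into one config.yaml
yq eval-all '. as $item ireduce ({}; . * $item)' \
    main.yaml acl-views.yaml local-data.yaml \
    > /etc/knot-resolver/config.yaml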
Thank you for your help.
Regards,
Stephane
I ultimately found that I needed to use "CH" in kdig, but I wonder how I/we could update the documentation or add an alias for "chaos".
Working Example:
dig +short @<dns server> version.bind chaos txt
dig +short @<dns server> version.bind CH txt
kdig +short @<dns server> version.bind CH txt
Problem Example:
kdig +short @<dns server> version.bind chaos txt
I had to read the source code to find the answer quickly.
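For the archives: kdig's -c option also sets the query class (it is in the
man page), so this form should be equivalent to the CH token above:
kdig +short @<dns server> -c CH -t TXT version.bind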
Hi,
for benchmarking purposes I need to find out how long it takes to sign
zone files of different sizes on different hardware. What is the best way
to measure the exact time the signing process takes?
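One approach I'm considering, assuming knotc's blocking mode waits for the
whole event (zone name hypothetical):

# -b makes knotc block until the triggered event completes, so `time`
# should cover the full signing run
time knotc -b zone-sign example.com.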
Thanks!
BR
Thomas
Hi,
when signing a zone file I receive this error in the log:
"zone event 're-sign' failed (not enough space provided)"
Can you tell me what is the limiting factor here?
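Is it one of the configurable size limits? These are the two knobs I could
find - whether either is the actual cause here is a guess on my part (zone
name and values hypothetical):

zone:
  - domain: example.com.        # the affected zone
    zone-max-size: 2G           # max size the zone may grow to; signing adds RRSIGs
    journal-max-usage: 512M     # space available for this zone's changesets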
Thanks!
BR
Thomas
Hello,
Could you please clarify whether Knot can perform a zone transfer not
from the first master listed, but from the one that sent the NOTIFY? The
masters are configured in the following order:
remote:
  - id: master
    address: [ "192.168.58.151", "192.168.58.134" ]
When a NOTIFY is sent from 192.168.58.134, the zone transfer is still
performed from 192.168.58.151.
Here are the relevant log entries:
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] notify, incoming, remote 192.168.58.134@32888 TCP, serial 2006
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] refresh, remote 192.168.58.151@53, remote serial 2006, zone is outdated
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] IXFR, incoming, remote 192.168.58.151@53 TCP, receiving AXFR-style IXFR
Apr 25 19:09:25 ubuntu knotd[2065]: info: [chacha.com.] AXFR, incoming, remote 192.168.58.151@53 TCP, started
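Would it behave differently with the primaries defined as two separate
remotes? My (possibly wrong) understanding is that multiple addresses
within a single remote are treated as alternative addresses of the same
server and tried in the configured order, so I am considering this instead:

remote:
  - id: master1
    address: 192.168.58.151
  - id: master2
    address: 192.168.58.134

zone:
  - domain: chacha.com.
    master: [ master1, master2 ]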
Thank you for your product and for your help!
Best regards,
A.A. Basov