debian 12
# uname -a
Linux rip.psg.com 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
# knotc --version
knotc (Knot DNS), version 3.2.6
AXFR of a 750k zone from seattle to lebanon, europe, iceland, southern
africa, ... fails over v4 and v6, i.e. paths with somewhat larger rtt.
the same AXFR seattle to seattle or seattle to ashburn works over both v4 and v6.
seattle to dallas v4 good, v6 fails, but it smells of an HE.net hop.
when it fails, it is always within a hundred or so bytes of the same
place, approximately 10% through the file.
pcap of seattle->beirut at https://archive.psg.com/240323.beirut.pcap
the last payload is in frame 217. at 219, seattle (b) sends a
FIN/PSH/ACK. then at 251, after acking everything, beirut (nabil)
FIN/ACKs seattle's FIN and we're dead.
i am not positive this is the key question as my tcp fu is a bit rusty.
but why did seattle send the FIN at 219, 10% through the file?
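fwiw, to reproduce and capture from another vantage point, something along
these lines should do it (address and zone are placeholders, not the real ones):
# tcpdump -s0 -w axfr.pcap host 192.0.2.1 and port 53 &
# kdig @192.0.2.1 zone.example AXFR > /dev/null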
randy
in
address: [185.91.97.18, 2a05:e380:2:4::2] # nabil.beirutix.net
if the whitespace just before the comment is a tab character, conf-check complains:
# knotc conf-check
error: config, file '/etc/knot/knot.conf', line 324, item 'address', value '2a05:e380:2:4::2' (tabulator character is not allowed)
blanks are ok
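i.e. the same line with spaces instead of the tab before the comment passes conf-check:
address: [185.91.97.18, 2a05:e380:2:4::2]   # nabil.beirutix.net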
# knotc --version
knotc (Knot DNS), version 3.2.6
yes, that is the latest on debian 12
randy
the server runs a tld as primary, but the slds live on hidden primaries which
the server pulls as a secondary, wants to sign bump-in-the-wire, and then
make available to public secondaries. i think the doc says this is
doable, but the instructions are insufficiently explicit for this idjit.
i am fetching the slds with the following config:
policy:
- id: pol-256-256
algorithm: rsasha256 # was ecdsap256sha256 sra uses ecdsap384sha384
manual: on
...
template:
- id: signed
storage: /var/lib/knot/sec-sign
dnssec-signing: on
dnssec-policy: pol-256-256
zonefile-sync: -1
zonefile-load: difference
journal-content: all
serial-policy: unixtime
...
zone:
- domain: sld.tld
file: tld.sld # sorry, i like alpha sort in `ls` :)
master: hidden-fetch
template: signed
acl: [allow-local, secondaries-push]
the policy and template are the ones i use when signing a primary zone, which i
suspect is ill advised.
i did generate keying as i would when signing a primary zone
# keymgr sld.tld generate algorithm=rsasha256 ksk=yes zsk=yes
7a618eaf94ea1d903233cb547faa24bae8cb49a5
# knotc zone-reload sld.tld
OK
# keymgr sld.tld ds
sld.tld. DS 63562 8 2 2d25e465f131900413d7e8a90ad1b96c75ba835de63dfee08610b113a779d41f
sld.tld. DS 63562 8 4 ed9c31c495703ec354f1a1835c9878339224cc06ac3001151c2ebb89524b25190efa424348c999b0c4df940edffa8409
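for completeness, the remote/acl bits those ids point at look roughly like
this (addresses here are placeholders, not the real ones):
remote:
  - id: hidden-fetch
    address: 192.0.2.10        # the sld's hidden primary
acl:
  - id: allow-local
    address: [127.0.0.0/8, ::1]
    action: transfer
  - id: secondaries-push
    address: 192.0.2.20        # public secondaries
    action: transfer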
any kind soul(s) care to whack me with a clue bat?
randy
I got a report of an NSEC error from someone who tried to connect to a
mistyped hostname. I've done a bit of poking, and it looks like we're
seeing a missing wildcard NSEC for domain names that are two subdomains
down from the apex, but not for subdomains of the apex. Though, I admit I
can't see the problem myself. Querying by hand I see what looks like an
identical response, but resolvers and DNSViz report problems with the
deeper name.
For example, nonexistent.dns-oarc.net and nonexistent.sjc.dns-oarc.net
(sjc.dns-oarc.net is a real subdomain with hosts in it, not an ENT)... kdig
output and DNSViz results below.
We're running knot/unknown,now 3.3.5-cznic.1~bullseye from deb.knot-dns.cz,
and this is the relevant policy statement for the zone:
policy:
- id: ecdsa
algorithm: ecdsap256sha256
ksk-lifetime: 365d
ksk-submission: parent_zone_sbm
zsk-lifetime: 30d
rrsig-lifetime: 14d
rrsig-refresh: 7d
We are mid-KSK-roll, waiting on the DS submission check.
Have I misconfigured something here, or is there a signing bug, or is this
something else?
Thanks!
Matt
---
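(For reference, output like the below can be reproduced with a query along
these lines against one of the authoritative servers; the exact flags used
are an assumption, and the second case just swaps in the sjc name:)
$ kdig +dnssec @64.191.0.128 nonexistent.dns-oarc.net A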
nonexistent.dns-oarc.net: DNSviz reports this is fine.
<https://dnsviz.net/d/nonexistent.dns-oarc.net/ZfNH1w/dnssec/>
;; ->>HEADER<<- opcode: QUERY; status: NXDOMAIN; id: 9380
;; Flags: qr aa; QUERY: 1; ANSWER: 0; AUTHORITY: 6; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 1232 B; ext-rcode: NOERROR
;; QUESTION SECTION:
;; nonexistent.dns-oarc.net. IN A
;; AUTHORITY SECTION:
dns-oarc.net. 3600 IN SOA ns1.dns-oarc.net. hostmaster.dns-oarc.net.
2024031400 300 60 604800 3600
nfsen.dns-oarc.net. 3600 IN NSEC ns.dns-oarc.net. A AAAA RRSIG NSEC
dns-oarc.net. 3600 IN NSEC fs1.10g.dns-oarc.net. A NS SOA MX TXT AAAA
RRSIG NSEC DNSKEY CDS CDNSKEY CAA
dns-oarc.net. 3600 IN RRSIG SOA 13 2 14400 20240328021935
20240314004935 6048 dns-oarc.net. [omitted]
nfsen.dns-oarc.net. 3600 IN RRSIG NSEC 13 3 3600 20240326215132
20240312202132 6048 dns-oarc.net. [omitted]
dns-oarc.net. 3600 IN RRSIG NSEC 13 2 3600 20240322045130
20240308032130 6048 dns-oarc.net. [omitted]
;; Received 518 B
;; Time 2024-03-14 18:57:33 UTC
;; From 64.191.0.128@53(UDP) in 0.3 ms
nonexistent.sjc.dns-oarc.net: resolvers and DNSViz report a missing
wildcard NSEC
<https://dnsviz.net/d/nonexistent.sjc.dns-oarc.net/ZfNH6w/dnssec/>
;; ->>HEADER<<- opcode: QUERY; status: NXDOMAIN; id: 660
;; Flags: qr aa; QUERY: 1; ANSWER: 0; AUTHORITY: 6; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 1232 B; ext-rcode: NOERROR
;; QUESTION SECTION:
;; nonexistent.sjc.dns-oarc.net. IN A
;; AUTHORITY SECTION:
dns-oarc.net. 3600 IN SOA ns1.dns-oarc.net. hostmaster.dns-oarc.net.
2024031400 300 60 604800 3600
newmail.sjc.dns-oarc.net. 3600 IN NSEC pdu-7301.sjc.dns-oarc.net. A AAAA
RRSIG NSEC
shin-cubes.dns-oarc.net. 3600 IN NSEC an1.10g.sjc.dns-oarc.net. A AAAA
RRSIG NSEC
dns-oarc.net. 3600 IN RRSIG SOA 13 2 14400 20240328021935
20240314004935 6048 dns-oarc.net. [omitted]
newmail.sjc.dns-oarc.net. 3600 IN RRSIG NSEC 13 4 3600 20240326215132
20240312202132 6048 dns-oarc.net. [omitted]
shin-cubes.dns-oarc.net. 3600 IN RRSIG NSEC 13 3 3600 20240326215132
20240312202132 6048 dns-oarc.net. [omitted]
;; Received 544 B
;; Time 2024-03-14 18:57:33 UTC
;; From 64.191.0.128@53(UDP) in 0.3 ms
Hi folks,
Is it possible to chain multiple upstream catalog zones into one downstream one?
I have the following topology:
Multiple DNS hidden masters <-> DNS signer / DNS master for public-facing slaves <-> public-facing slaves
Can I define catalog zones on the hidden masters and use them on the public-facing signer/master to compose a catalog zone for the slaves?
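To make the question concrete, this is roughly what I am imagining on the
signer (names are placeholders; whether catalog-role/catalog-zone are even
allowed inside a catalog-template is exactly what I am unsure about):
zone:
  # catalog received from one of the hidden masters
  - domain: catalog.hidden-a.example.
    master: hidden-a
    catalog-role: interpret
    catalog-template: member-out
  # catalog generated for the public-facing slaves
  - domain: catalog.public.example.
    catalog-role: generate
template:
  - id: member-out
    catalog-role: member
    catalog-zone: catalog.public.example.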
Best Regards,
Martin Hunek
Freenet Liberec, z.s.
Hello,
today I had a long power outage that stopped my primary server for a
dozen hours.
When I restarted the server, some zones refused to start:
Feb 12 21:55:17 arrakeen knotd[20728]: info: [geekwu.org.] DNSSEC, signing zone
Feb 12 21:55:17 arrakeen knotd[20728]: error: [geekwu.org.] zone event 're-sign' failed (invalid parameter)
the invalid parameter was that there was no active KSK for these zones, as a
keymgr list shows:
b37b6c2[...] 39945 KSK ECDSAP384SHA384 publish=1636966605 active=1637053005
7cc8622[...] 20799 ZSK ECDSAP384SHA384 created=1706695010
After manually setting the publish & active status in keymgr, reloading
these zones succeeded, and knot resumed serving them.
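(For the record, the fix was a command of roughly this shape; the keytag is
the KSK's from the listing above, and the timestamps are placeholders
standing in for "now":)
# keymgr geekwu.org. set 39945 publish=1707770000 active=1707770000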
Do you know how these zones could have failed like this? They have been
running fine with automatic signing for years. Should I closely monitor the
ones that are due for a re-sign tomorrow?
Regards,
--
Bastien
Hi,
after successfully migrating my hidden primary NSD and OpenDNSSEC signer to Knot DNS, I started to migrate my secondary NSDs to Knot DNS as well.
Thanks to the excellent documentation, this migration also went more or less flawlessly.
BUT: I am somewhat puzzled by error messages like the following at my hidden primary:
2024-02-16T10:54:08+0100 debug: [ellael.org.] ACL, allowed, action transfer, remote 10.1.1.201@27919, key primary-secondary.
2024-02-16T10:54:08+0100 info: [ellael.org.] AXFR, outgoing, remote 10.1.1.201@27919 TCP, started, serial 2024021331
2024-02-16T10:54:08+0100 info: [ellael.org.] AXFR, outgoing, remote 10.1.1.201@27919 TCP, finished, 0.00 seconds, 1 messages, 7774 bytes
2024-02-16T10:54:09+0100 debug: [ellael.org.] ACL, allowed, action notify, remote 10.1.1.201@40884, key primary-secondary.
2024-02-16T10:54:09+0100 info: [ellael.org.] notify, incoming, remote 10.1.1.201@40884 TCP, serial 2024021331
>>>! 2024-02-16T10:54:09+0100 error: [ellael.org.] zone event 'refresh' failed (operation not supported)
The log files at both secondaries are identical; here is one example:
2024-02-16T10:54:08+0100 info: [ellael.org.] AXFR, incoming, remote 10.2.2.203@5333 TCP, finished, 0.00 seconds, 1 messages, 7774 bytes
2024-02-16T10:54:08+0100 info: [ellael.org.] refresh, remote 10.2.2.203@5333, zone updated, 0.03 seconds, serial none -> 2024021331,\
expires in 1209600 seconds
2024-02-16T10:54:08+0100 info: [ellael.org.] zone file updated, serial 2024021331
>>>! 2024-02-16T10:54:09+0100 info: [ellael.org.] notify, outgoing, remote 10.2.2.203@5333 TCP, serial 2024021331
FYI: Those errors are only logged when a zone gets updated or when "knotc zone-notify" is run at the secondary site.
Here are my essential config excerpts:
Primary:
acl:
- id: aclTRANSACTIONS
key: primary-secondary
action: [notify, transfer]
remote:
- id: secondaryKBN
key: primary-secondary
address: 10.1.1.201 # KBN secondary
via: 10.2.2.203 # outgoing interface
Secondary:
acl:
- id: aclTRANSACTIONS
key: primary-secondary
action: [notify, transfer]
remote:
- id: primaryMWN
key: primary-secondary
address: 10.2.2.203@5333 # MWN hidden primary
via: 10.2.2.201 # outgoing interface
block-notify-after-transfer: on
FYI: Only adding "block-notify-after-transfer: on" at secondary sites stopped those error messages.
I found https://www.mail-archive.com/knot-dns-users@lists.nic.cz/msg01812.html :
"I recommend not using this option unless you really know what you're doing
and why this option is essential for you."
Questions:
#) I do have to admit, I don't understand what is going on without "block-notify-after-transfer: on".
#) Am I safe in using "block-notify-after-transfer: on"?
#) Or is there another config option?
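(My current guess, which may well be wrong: right after the transfer the
secondary sends a NOTIFY back to the primary, and since the primary has no
master configured for the zone, the refresh triggered by that NOTIFY fails
with "operation not supported". If that is what happens, an alternative to
"block-notify-after-transfer: on" might be to drop notify from the
primary-side ACL, at the cost of those NOTIFYs then showing up as
ACL-denied in the log:)
acl:
  - id: aclTRANSACTIONS
    key: primary-secondary
    action: transfer        # primary no longer accepts incoming notifies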
Thanks in advance and regards,
Michael
Hello fellow Knot users,
We're using Knot on some of our public authoritative servers. We operate a hidden primary configuration where two internal primary servers send notifies to the publicly accessible servers whenever zones change (of course the internal servers allow zone transfers & queries from the Knot servers).
By design, our hidden internal primary servers operate in a hot/cold setup - one of them is answering to zone transfers / sending out notifies and the other one is not (dns server software is not running on the cold server).
This configuration causes knot to log alarming errors:
Feb 1 17:28:45 ns2 knotd[624]: info: [zone.fi.] refresh, remote 10.54.54.1@8054, remote serial 2021083015, zone is up-to-date, expires in 864000 seconds
Feb 1 17:28:45 ns2 knotd[624]: info: [zone.fi.] refresh, remote hidden-ns-2.internal, address 10.54.54.2@8054, failed (connection reset)
Feb 1 17:28:45 ns2 knotd[624]: info: [zone.fi.] refresh, remote hidden-ns-2.internal, address [ipv6_address]@8054, failed (connection reset)
Feb 1 17:28:45 ns2 knotd[624]: warning: [zone.fi.] refresh, remote hidden-ns-2.internal not usable
Feb 1 17:28:45 ns2 knotd[624]: error: [zone.fi.] refresh, failed (no usable master), next retry at 2024-02-01T17:58:45+0200
Feb 1 17:28:45 ns2 knotd[624]: error: [zone.fi.] zone event 'refresh' failed (no usable master)
Specifically, the "(no usable master)" error is worrying - knot is able to reach hidden-ns-1.internal and verify that its copy of the zone is up-to-date. Also, zone updates work normally and the zones don't seem to expire (as they would if no master were reachable for an extended period of time).
This error seems to have appeared in version 3.3.0; 3.2.9 did not log similar errors (except in cases where all primaries really were unreachable). Has there been a design change in 3.3.0 (i.e. is this intentional), or could this be a bug? Or could it be related to our configuration?
Our configuration relies heavily on templates. These should be the important bits from knot's config, with ipv6 addresses hidden:
acl:
- id: "hidden-ns-1.internal"
address: [ 10.54.54.1, ipv6_address ]
action: notify
- id: "hidden-ns-2.internal"
address: [ 10.54.54.2, ipv6_address ]
action: notify
remote:
- id: "hidden-ns-1.internal"
address: [ 10.54.54.1@8054, ipv6_address@8054 ]
via: [ x.x.x.x, ipv6_address ]
- id: "hidden-ns-2.internal"
address: [ 10.54.54.2@8054, ipv6_address@8054 ]
via: [ x.x.x.x, ipv6_address ]
template:
- id: default
storage: /var/lib/knot/zones
master: hidden-ns-1.internal
master: hidden-ns-2.internal
acl: hidden-ns-1.internal
acl: hidden-ns-2.internal
semantic-checks: false
global-module: mod-rrl/default
zone:
- domain: zone.fi
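As a side note, the repeated master:/acl: lines in the template are meant as
two-element lists; written in explicit list form (which knot.conf also
accepts) the same template would look like this, in case the repeated-key
form is not merged the way we assume:
template:
  - id: default
    storage: /var/lib/knot/zones
    master: [hidden-ns-1.internal, hidden-ns-2.internal]
    acl: [hidden-ns-1.internal, hidden-ns-2.internal]
    semantic-checks: false
    global-module: mod-rrl/default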
--
Juha Suhonen
Senior Systems Specialist
CSC - Tieteen tietotekniikan keskus Oy
juha.suhonen(a)csc.fi
Hi,
I wonder whether it would be possible to use arithmetic in knot.conf, such as:
propagation-delay: 5 * dnskey-ttl
I'd like to set a propagation-delay safety net during ZSK rotations depending on the SOA TTL set for any given zone.
As dnskey-ttl defaults to the zone's SOA TTL, that would allow propagation-delay to be defined as a multiple of the SOA TTL.
As I couldn't find this in the documentation, I assume it cannot be done, right?
Are there alternatives at hand that I overlooked?
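(The fallback I can see is to pin dnskey-ttl explicitly in each policy and
precompute the multiple by hand, e.g. for zones whose SOA TTL is 3600
seconds; the policy id is just an example:)
policy:
  - id: soa-ttl-3600
    dnskey-ttl: 1h
    propagation-delay: 5h    # 5 x dnskey-ttl, computed by hand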
Regards,
Michael