Dear all,
I am deploying a DNS system using the Knot DNS software.
I have read through the documentation and did not find anything about DNS
query logging.
So let me ask: can the Knot DNS software collect a DNS query log? If it can,
what is the configuration?
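From the documentation, the dnstap module looks like the closest thing to query logging, so I guess the configuration would be something like the following (the module id and sink path below are just my guesses, and it seems Knot must be built with dnstap support):

```yaml
mod-dnstap:
  - id: capture
    sink: "/var/log/knot/queries.tap"   # binary dnstap file, not plain text
    log-queries: on
    log-responses: off

template:
  - id: default
    global-module: mod-dnstap/capture   # apply to all zones
```

Is this the intended way, or is there a plain-text query log I have missed?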
Best regards,
Chinhlk.
Hi,
I performed a manual key roll over with this command:
$ knotc zone-key-rollover dnssec-test.xxx zsk
The result is two different ZSKs with the same key id:
dnssec-test.xxx. 3600 IN DNSKEY 256 3 8 (
AwEAAc5W.....
) ; ZSK; alg = RSASHA256; key id = 7030
dnssec-test.xxx. 3600 IN DNSKEY 256 3 8 (
AwEAAc7Q5U......
) ; ZSK; alg = RSASHA256; key id = 7030
From the log:
2020-07-03T14:52:59 info: [dnssec-test.xxx.] DNSSEC, key, tag 56464,
algorithm RSASHA256, KSK, public, ready, active+
2020-07-03T14:52:59 info: [dnssec-test.xxx.] DNSSEC, key, tag 7030,
algorithm RSASHA256, public
2020-07-03T14:52:59 info: [dnssec-test.xxx.] DNSSEC, key, tag 7030,
algorithm RSASHA256, public, active
2020-07-03T14:52:59 info: [dnssec-test.xxx.] DNSSEC, signing started
2020-07-03T14:52:59 info: [dnssec-test.xxx.] DNSSEC, zone is up-to-date
Is it the intended behavior to have two ZSKs with the same key id?
Thanks a lot,
Thomas
Hi all,
we'd like to inform you about a recently found bug in Knot DNS 2.9.x.
The bug affects automatic key rollovers when automatic key management
is configured:
https://www.knot-dns.cz/docs/2.9/singlehtml/index.html#automatic-dnssec-sig…
The ZSK, CSK or algorithm roll-over might be finished too early, so that
DNSKEY and RRSIG records in resolvers' caches get out of sync, leading
to temporary DNSSEC validation failure.
Affected versions are Knot DNS 2.9.0 -- 2.9.4.
We will release fixing version 2.9.5 soon.
In the meantime, we recommend applying this workaround: set the
configuration option zone-max-ttl
https://www.knot-dns.cz/docs/2.9/singlehtml/index.html#zone-max-ttl
explicitly to a value greater than or equal to the maximal TTL among all
records in the zone. (Remove the workaround once upgraded to a fixed version.)
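For illustration, the workaround could look like this (example.com and the 86400 value are placeholders; use your own zone name and a value at least as large as the largest TTL in your zone):

```yaml
zone:
  - domain: example.com
    zone-max-ttl: 86400   # must be >= the maximal TTL among all records in the zone
```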
Many thanks to Anand Buddhdev from RIPE NCC for finding this bug.
Caring regards,
Libor Peltan
CZ.NIC
We're consolidating servers, and as a result I need to transfer
some of the IPs, zones, and keys to other servers.
I'm trying to find the least eventful way to accomplish this.
I attempted to transfer (signed) zones and IPs once before, but
the slaves wouldn't accept the zones, and I ultimately had to
purge the journals on all the servers I had control of and
re-key and re-sign the zones to make everything work as intended.
All the zones are written/kept on disk (except changes that haven't
yet been flushed to disk from the DB).
I'm wondering: is it enough to freeze all the zones on their
current servers, then shut down the server(s), transfer the
zones and keys, and merge the configs onto the new servers? Is that
the correct way to do this? If not, please advise.
Thank you for all your time, and consideration.
--Chris
Hi,
I've set up knot to AXFR 24 zones from our MS domain controllers, but
three zones fail mysteriously. After enabling debug logging,
knot adds an additional line with the reason: 'trailing data'.
In the pcap file created with tcpdump I see that our server starts to
send TCP resets in the middle of the transfer.
Each time, the resets are sent after downloading approximately the same
amount of data; this amount differs for each zone (81 kB for one, 49 kB for
the second).
I am able to download the zone with dig or kdig. We also have a
different server with powerdns which is able to download the zones
without problems.
I've also tried setting up a different server (with bind9) serving one of
the zones and having knot download it from there. That worked just fine.
I can send the log file snippet (with zone name and ip addresses) as
well as the pcap file off the record.
Could you, please, help me with solving this problem?
Thank you,
Petr Baloun
You should know that this e-mail and its attachments are confidential. If we are negotiating the conclusion of a transaction, we reserve the right to terminate the negotiations at any time. For fans of legalese: we hereby exclude the provisions of the Civil Code on pre-contractual liability. The rules about who may act for the company, in what way, and what the signing procedures are can be found here: <https://onas.seznam.cz/cz/podpisovy-rad-cz.html>.
Hi,
I noticed that when using e.g. the GeoIP module to return a CNAME, that
CNAME does not get resolved, even if it is within the instance's
authority. That forces the client/recursor to issue another request.
Perusing the code for a while, I noticed that there is a rather simple
way to achieve this: the GeoIP module, if the result is a CNAME, _could_
set `qdata->name` to the CNAME target and return
`KNOT_IN_STATE_FOLLOW` instead of `KNOT_IN_STATE_HIT`.
While that does seem to work, I am not so sure if it might constitute an
abuse of interfaces. Returning `KNOT_IN_STATE_FOLLOW` seems legit, but I
wasn't so sure about modifying `qdata` (specifically, in the context of
a module).
As such, I would be interested if this has ever come up before, what
possible approaches might look like, what you think of the above one,
and if tackling this problem is of interest at all. If I have a better
understanding and there is a good approach to this, I'd be happy to
submit a patch.
Thanks a lot,
Conrad
Hello,
OS: CentOS 7.6
Knot: knot-2.9.3-1.el7.x86_64
I am trying to run Knot DNS using the MaxMind GeoIP module, and it works well.
Now I also want it to return weighted answers.
E.g., if the country is US, it will provide 3 answers, and those will be weighted.
Is that possible?
Thanks
Avinash
Dear KNOT team,
We're now considering an update of our zone signer, and Knot DNS seems to be a good choice.
I looked through your documentation and did not see any performance data about signing big zones.
When a zone has about 10 million entries, how long will it take Knot DNS to sign it completely?
I'd appreciate any feedback you can give me.
Best Regards!
Gao
Hi,
I am trying to use knot 2.9.2 on a Raspberry Pi 2 with 1 GB of RAM under Arch Linux.
When I enable DNSSEC (ecdsap256sha256 algorithm), I get:
mars 01 16:09:05 exegol knotd[3438]: error: [aeris.eu.org.] DNSSEC, failed to
initialize (not enough memory)
mars 01 16:09:05 exegol knotd[3438]: error: [aeris.eu.org.] zone event 'load'
failed (not enough memory)
mars 01 16:09:05 exegol knotd[3438]: error: [xxx.eu.org.] DNSSEC, failed to
initialize (not enough memory)
mars 01 16:09:05 exegol knotd[3438]: error: [xxx.eu.org.] zone event 'load'
failed (not enough memory)
mars 01 16:09:05 exegol knotd[3438]: error: [imirhil.fr.] DNSSEC, failed to
initialize (not enough memory)
mars 01 16:09:05 exegol knotd[3438]: error: [imirhil.fr.] zone event 'load'
failed (not enough memory)
Those 3 zones are quite damned small (286, 19 & 16 lines), 2 containing only
minimalistic entries (1 SOA, 2 NS, 3 CAA, 2 CNAME & 1 TLSA).
I tried to activate the zones one at a time: loading the 16-line zone is OK, but
loading the 19-line zone as well generates the same "not enough memory" error
for both zones. Loading only the 19-line zone is OK.
I don't understand how DNSSEC can require so much memory that it fails to load
such small zones.
When the zones are loaded without DNSSEC, knot's memory usage is under 80 MB.
How can I trace Knot's memory usage to understand this behavior?
Is there any way to restrict Knot's memory so that it can load these zones?
Regards,
--
aeris
Individual crypto-terrorist group self-radicalized on the digital Internet
https://imirhil.fr/
Protect your privacy, encrypt your communications
GPG : EFB74277 ECE4E222
OTR : 5769616D 2D3DAC72
https://café-vie-privée.fr/
Hello,
I have a problem with a new, small IPv6 reverse zone, configured like this:
zone:
- domain: "a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa."
file: "a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa"
notify: "corrin"
acl: "acl_corrin"
dnssec-signing: "off"
module: mod-synthrecord/revovh2
There is data in the journal :
root@arrakeen:/var/lib/knot/external# knotc zone-read
a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa
[a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.]
a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa. 2560 SOA ns.geekwu.org.
hostmaster.a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa. 1580234956 16384
2048 1048576 2560
[a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.]
a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa. 259200 NS ns.geekwu.org.
[a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.]
a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa. 259200 NS ns4.geekwu.org.
[a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.]
0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.
3600 PTR imperium.geekwu.org.
[a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.]
2.2.1.0.0.0.0.0.0.0.0.0.1.0.0.0.a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.
3600 PTR git.geekwu.org.
[a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.]
a.2.a.2.0.0.0.0.0.0.0.0.1.0.0.0.a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.
3600 PTR corrin.geekwu.org.
But if I change anything (like adding another host), on knotc reload I
get this error :
knotd[21892]: error: [a.8.1.a.8.0.0.0.0.d.1.4.1.0.0.2.ip6.arpa.] zone
event 'load' failed (malformed data)
I've purged the zone with zone-purge, replaced the zone file with a copy
of the dump, and got a new record on reload, but further edits fail the
same way.
I'm using knot 2.9.2-1~cz.nic~buster1 from deb.knot-dns.cz/knot-latest.
Do you have an idea what's going on?
Thanks,
--
Bastien Durel
Hi all,
I migrated from bind to knot. Everything works fine, except rrl.
I "translated" the bind config
rate-limit {
responses-per-second 5;
window 5;
};
to
mod-rrl:
- id: default
rate-limit: 5
slip: 2 # Every other response slips
template:
- id: default
storage: "/etc/knot/zones"
timer-db: "/var/lib/knot/timers"
semantic-checks: on
global-module: mod-rrl/default
But the limiting doesn't work. I tested with
for i in {1..20}; \
do dig @ns +short +tries=1 +time=1 mydomain.de a; \
done
And got 20 answers quickly.
Any ideas what's wrong here?
(I'm using version 2.7.6 on Debian 10)
Best regards,
Thomas.
Hello,
I'm new to this list. I have been using NSD + DNSSEC + key rotation for many years.
Now I'd like to check whether and how Knot's automatic key rotation can save me from my ugly script foo...
https://lists.nic.cz/pipermail/knot-dns-users/2019-November/001721.html
JP Mens mentions "I'm rolling the KSK every five minutes for testing".
Instead of me reinventing the wheel, could someone post the relevant settings?
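To be concrete, here is my current guess at the relevant settings for a fast test rollover (all values and ids below are illustrative; 'parent-server' would be a configured remote):

```yaml
submission:
  - id: parent-check
    parent: [ parent-server ]   # remote id of the parent-zone server
    check-interval: 30s

policy:
  - id: quick-roll
    algorithm: ecdsap256sha256
    dnskey-ttl: 60
    propagation-delay: 60
    ksk-lifetime: 5m            # roll the KSK every five minutes, testing only!
    ksk-submission: parent-check
```

Is that roughly it, or am I missing a knob?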
Thanks,
Andreas
After several successful attempts using the exact same configuration and
steps mentioned in an earlier question here
<https://lists.nic.cz/pipermail/knot-dns-users/2020-January/001744.html> or
here <https://gist.github.com/azzamsa/24dca2e201c3bea4f489a09f7a3a8716>
(with a better preview),
I am stuck with "no usable master".
The master side
Jan 12 07:35:12 knot-master-1.novalocal knotd[12007]: info:
[namadomain14.com.] zone file updated, serial 2018070410
Jan 12 07:35:12 knot-master-1.novalocal knotd[12007]: warning:
[namadomain14.com.] notify, outgoing, remote slaveip@53, s...TAUTH'
Jan 12 07:37:18 knot-master-1.novalocal knotd[12007]: info:
[namadomain14.com.] control, received command 'zone-begin'
Jan 12 07:37:32 knot-master-1.novalocal knotd[12007]: info:
[namadomain14.com.] control, received command 'zone-set'
Jan 12 07:37:37 knot-master-1.novalocal knotd[12007]: info:
[namadomain14.com.] control, received command 'zone-commit'
Jan 12 07:37:37 knot-master-1.novalocal knotd[12007]: info:
[namadomain14.com.] zone file updated, serial 2018070410 -> 2018070412
Jan 12 07:37:37 knot-master-1.novalocal knotd[12007]: warning:
[namadomain14.com.] notify, outgoing, remote slaveip@53, s...TAUTH'
Jan 12 07:37:41 knot-master-1.novalocal knotd[12007]: info: control,
received command 'zone-read'
Jan 12 07:38:11 knot-master-1.novalocal knotd[12007]: info:
[namadomain14.com.] control, received command 'zone-notify'
Jan 12 07:38:11 knot-master-1.novalocal knotd[12007]: warning:
[namadomain14.com.] notify, outgoing, remote slaveip@53, s...TAUTH'
The slave side
Jan 12 07:34:54 knot-slave-1 knotd: info: control, received command 'conf-read'
Jan 12 07:35:19 knot-slave-1 systemd-logind: Removed session 2.
Jan 12 07:35:20 knot-slave-1 knotd: info: control, received command 'conf-begin'
Jan 12 07:35:20 knot-slave-1 knotd: notice: control, persistent
configuration database not available
Jan 12 07:35:20 knot-slave-1 knotd: info: control, received command 'conf-set'
Jan 12 07:35:20 knot-slave-1 knotd: info: control, received command 'conf-set'
Jan 12 07:35:20 knot-slave-1 knotd: info: control, received command 'conf-set'
Jan 12 07:35:20 knot-slave-1 knotd: info: control, received command
'conf-commit'
Jan 12 07:35:20 knot-slave-1 knotd: info: [namadomain14.com.] zone
will be loaded
Jan 12 07:35:20 knot-slave-1 knotd: info: [namadomain14.com.] failed
to parse zone file (not exists)
Jan 12 07:35:21 knot-slave-1 knotd: warning: [namadomain14.com.]
refresh, remote master1 not usable
Jan 12 07:35:21 knot-slave-1 knotd: error: [namadomain14.com.]
refresh, failed (no usable master)
Jan 12 07:35:23 knot-slave-1 knotd: warning: [namadomain14.com.]
refresh, remote master1 not usable
Jan 12 07:35:23 knot-slave-1 knotd: error: [namadomain14.com.]
refresh, failed (no usable master)
Jan 12 07:35:27 knot-slave-1 knotd: info: control, received command 'zone-read'
Jan 12 07:35:29 knot-slave-1 knotd: info: control, received command 'zone-read'
Jan 12 07:35:34 knot-slave-1 knotd: warning: [namadomain14.com.]
refresh, remote master1 not usable
Jan 12 07:35:34 knot-slave-1 knotd: error: [namadomain14.com.]
refresh, failed (no usable master)
Jan 12 07:36:31 knot-slave-1 knotd: warning: [namadomain14.com.]
refresh, remote master1 not usable
Jan 12 07:36:31 knot-slave-1 knotd: error: [namadomain14.com.]
refresh, failed (no usable master)
Jan 12 07:37:14 knot-slave-1 knotd: info: [namadomain14.com.] control,
received command 'zone-begin'
Jan 12 07:37:55 knot-slave-1 knotd: info: [namadomain14.com.] control,
received command 'zone-abort'
Jan 12 07:38:19 knot-slave-1 knotd: info: control, received command 'zone-read'
Jan 12 07:39:12 knot-slave-1 knotd: warning: [namadomain14.com.]
refresh, remote master1 not usable
Jan 12 07:39:12 knot-slave-1 knotd: error: [namadomain14.com.]
refresh, failed (no usable master)
Problems
Previously, even though I had 'NOTAUTH' on the master side, the zone on the
slave was still created, and I did not get "no usable master".
But all of a sudden there is no usable master. I've tried:
- sudo service knot restart
- removing everything -> reinstalling everything
Both gave me no luck.
Thank you in advance.
Good day,
is there some tool to migrate configuration from 1.6.5 to the current
version? Keys, configuration, ...
Thanks and best regards
J.Karliak
--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)
policies and implements DMARC. If you have problems sending
email to me, start using the email origin verification methods mentioned
above. Thank you.
Setting up slave zone (slave DNS server)
I've asked a previous question, Setting up slave zone (slave DNS server)
<https://gitlab.labs.nic.cz/knot/knot-dns/issues/667>,
and I've followed Libor Peltan's advice to also configure the zone on the
slave side. But it still didn't work for me.
Config
knot.conf in *master* server
# This is a sample of a minimal configuration file for Knot DNS.
# See knot.conf(5) or refer to the server documentation.
server:
rundir: "/run/knot"
user: knot:knot
listen: [ 127.0.0.1@53, ::1@53 ]
log:
- target: syslog
any: info
database:
storage: "/var/lib/knot"
remote:
- id: slave1
address: 111.11.11.111@53
acl:
- id: slave1_acl
address: 111.11.11.111
action: transfer
template:
- id: default
storage: "/var/lib/knot"
file: "%s.zone"
zone:
# # Master zone
# - domain: example.com
# notify: slave
# acl: acl_slave
# # Slave zone
# - domain: example.net
# master: master
# acl: acl_master
knot.conf in my *slave* server
# This is a sample of a minimal configuration file for Knot DNS.
# See knot.conf(5) or refer to the server documentation.
server:
rundir: "/run/knot"
user: knot:knot
listen: [ 127.0.0.1@53, ::1@53 ]
log:
- target: syslog
any: info
database:
storage: "/var/lib/knot"
remote:
- id: master1
address: 222.22.22.222@53
acl:
- id: master1_acl
address: 222.22.22.2222
action: notify
template:
- id: default
storage: "/var/lib/knot"
file: "%s.zone"
zone:
# # Master zone
# - domain: example.com
# notify: slave
# acl: acl_slave
# # Slave zone
# - domain: example.net
# master: master
# acl: acl_master
conf-read result
conf-read in *master* server
[root@knot-master-1 centos]# knotc conf-read
server.rundir = /run/knot
server.user = knot:knot
server.listen = 127.0.0.1@53 ::1@53
log.target = syslog
log[syslog].any = info
database.storage = /var/lib/knot
acl.id = slave1_acl
acl[slave1_acl].address = 222.22.22.222
acl[slave1_acl].action = transfer
remote.id = slave1
remote[slave1].address = 222.22.22.222@53
template.id = default
template[default].storage = /var/lib/knot
template[default].file = %s.zone
zone.domain = namadomain.com.
zone[namadomain.com.].file = namadomain.com.zone
zone[namadomain.com.].notify = slave1
zone[namadomain.com.].acl = slave1_acl
conf-read in *slave* server
[root@knot-slave-1 centos]# knotc conf-read
server.rundir = /run/knot
server.user = knot:knot
server.listen = 127.0.0.1@53 ::1@53
log.target = syslog
log[syslog].any = info
database.storage = /var/lib/knot
acl.id = master1_acl
acl[master1_acl].address = 111.11.11.111
acl[master1_acl].action = notify
remote.id = master1
remote[master1].address = 111.11.11.111@53
template.id = default
template[default].storage = /var/lib/knot
template[default].file = %s.zone
zone.domain = namadomain.com.
zone[namadomain.com.].master = master1
zone[namadomain.com.].acl = master1_acl
Zone Read
zone-read in *master* server
[root@knot-master-1 centos]# knotc zone-read --
[namadomain.com.] namadomain.com. 86400 TXT "hello"
[namadomain.com.] namadomain.com. 86400 SOA ns1.biz.net.id.
hostmaster.biz.net.id. 2018070411 3600 3600 604800 38400
zone-read in *slave* server
[root@knot-slave-1 centos]# knotc zone-read --
[namadomain.com.] namadomain.com. 86400 SOA ns1.biz.net.id.
hostmaster.biz.net.id. 2018070410 3600 3600 604800 38400
Steps I use to create a zone
in *master* server
knotc conf-begin
knotc conf-set 'zone[namadomain.com]'
knotc conf-set 'zone[namadomain.com].file' 'namadomain.com.zone'
knotc conf-set 'zone[namadomain.com].notify' 'slave1'
knotc conf-set 'zone[namadomain.com].acl' 'slave1_acl'
knotc conf-commit
knotc zone-begin namadomain.com
knotc zone-set namadomain.com. @ 86400 SOA ns1.biz.net.id.
hostmaster.biz.net.id. 2018070410 3600 3600 604800 38400
knotc zone-set namadomain.com. @ 86400 TXT "hello"
knotc zone-commit namadomain.com
in *slave* server
knotc conf-begin
knotc conf-set 'zone[namadomain.com]'
knotc conf-set 'zone[namadomain.com].master' 'master1'
knotc conf-set 'zone[namadomain.com].acl' 'master1_acl'
knotc conf-commit
knotc zone-begin namadomain.com
knotc zone-set namadomain.com. @ 86400 SOA ns1.biz.net.id.
hostmaster.biz.net.id. 2018070410 3600 3600 604800 38400
knotc zone-commit namadomain.com
Problems
If we look closely, I've created the configuration of namadomain.com on
*both* the master and slave servers. I've also created the SOA record of
namadomain.com on *both* servers. But I only created the file config
on the *master* server and the TXT record on the *master* server (to test
whether AXFR zone transfer worked).
Unfortunately, the file config and the TXT record are not created on the
slave, even though I've waited for more than an hour (1 day, actually).
Am I missing something here? (I never put the zone directly in the zone:
section of knot.conf; I always use knotc, since I will use libknot's
control.py to manage zones with our
app <https://github.com/BiznetGIO/RESTKnot>)
Also, is there a way to see whether knot on the master emits the transfer
'signal', and to check whether knot on the slave receives it? That would
make it easier for me to debug.
I've tried triggering knotc zone-notify namadomain.com on the *master* side
and knotc zone-retransfer namadomain.com on the *slave* side. But nothing
changed.
[root@knot-master-1 centos]# knotc zone-notify namadomain.com
OK
[root@knot-master-1 centos]# knotc zone-read --
[namadomain.com.] namadomain.com. 86400 TXT "hello"
[namadomain.com.] namadomain.com. 86400 SOA ns1.biz.net.id.
hostmaster.biz.net.id. 2018070411 3600 3600 604800 38400
[root@knot-slave-1 centos]# knotc zone-retransfer namadomain.com
OK
[root@knot-slave-1 centos]# knotc zone-read --
[namadomain.com.] namadomain.com. 86400 SOA ns1.biz.net.id.
hostmaster.biz.net.id. 2018070410 3600 3600 604800 38400
Machine
# knotc --version
knotc (Knot DNS), version 2.9.1
OS: CentOS 7.5
Thank you in advance.
Hi,
Today I migrated my knot from FreeBSD to Gentoo (because it takes too
much time to stay on a supported release of FreeBSD).
I rsynced my knot.conf (and changed the paths) and /var/db/knot to
/var/lib/knot.
However, the daemon failed to start because it wasn't able to bind to
/var/run/knot/knot.sock, even though the permissions were good. I had to
remove /var/db/knot and rsync only the zones and keys.
I don't see the connection between the files in /var/lib and a permission
denied on /var/run/knot/knot.sock, so I think there is a bug here.
Regards,
--
Alarig
Dear all,
I have 2 servers, and on one of them I installed dnsblast.
I have read your DNS-benchmarking
<https://gitlab.labs.nic.cz/knot/dns-benchmarking> project many times and
got it from gitlab, but I can't send more than 500K packets.
Please help.
Best Regards.
Hello!
After reading and rereading the documentation (release 2.9) section on
automatic KSK management, and rereading it again, I finally understood
the part which says "the user shall propagate [the DS] to the parent".
In particular due to the log entry
info: DS check, outgoing, remote 127.0.0.1@53, KSK submission attempt: negative
and the phrasing of the "submission:" configuration stanza, I thought
Knot would attempt to do so itself via dynamic update. I think I was
injecting too much wishful thinking into the text. :)
Now to my two questions:
Is it envisioned to have Knot launch an executable in order to perform
the submission? I'm thinking along the lines of Knot running at every
`check-interval':
./ds-submitter zone "<cds>" "<cdnskey>"
upon which the ds-submitter program could (e.g. via RFC 2136) add the DS
RRset to the parent zone. Might be nice to have ... (I did see the bit
about journald and using that to trigger DS submission, but using
journald frightens me a bit.)
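To make the idea concrete, here is a rough sketch of what such a ds-submitter could do internally (entirely my own invention, not an existing Knot interface; it just builds an nsupdate(1) batch that replaces the child's DS RRset in the parent via RFC 2136):

```python
# Hypothetical helper for a "ds-submitter" hook: given the parent server,
# the child zone, and the DS rdata strings, build the text to pipe into
# `nsupdate -k <tsig.key>`. All names here are illustrative.

def build_nsupdate_script(parent_server, child_zone, ds_rdatas, ttl=3600):
    owner = child_zone.rstrip(".") + "."
    lines = [f"server {parent_server}"]
    # Replace the whole DS RRset: delete any stale set, then add the new one.
    lines.append(f"update delete {owner} DS")
    for rdata in ds_rdatas:
        lines.append(f"update add {owner} {ttl} DS {rdata}")
    lines.append("send")
    return "\n".join(lines) + "\n"
```

Knot would then only need to exec the wrapper with the zone and the CDS/CDNSKEY values, as sketched above.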
I notice that a dynamic update on 2.9 logs
info: [zone.] DDNS, processing 1 updates
Is there any way to get more details logged (what the update actually
was)? My configuration contains:
log:
- target: syslog
server: debug
control: debug
zone: debug
any: debug
Thank you.
-JP
Hello,
Is sharing the KASP directory between different knot instances possible?
(I have a "public" instance and a "private" one with internal
addresses, using different storage: directories but the same kasp-db:.)
The internal one sometimes returns invalid DNSSEC data for non-existent
names until I restart it. Is this to be expected in this setup?
Thanks,
--
Bastien
Hello.
I found another strange behaviour on my slave: it kept old RRSIGs for
the SOA entries.
I fixed it by running zone-purge, then zone-retransfer.
; <<>> DiG 9.11.5-P1-1~bpo9+1-Debian <<>> +dnssec @87.98.180.13 caa
www.geekwu.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12546
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 19, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 1232
;; QUESTION SECTION:
;www.geekwu.org. IN CAA
;; AUTHORITY SECTION:
geekwu.org. 86400 IN SOA ns.geekwu.org.
hostmaster.geekwu.org. 2019101730 3600 1200 2419200 10000
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054714 20181104041714 54076 geekwu.org.
8b6lzlyI1fZUxwtCt9GHsRu1Ist1CtDFm+NifSTTESoMK3XAnW8gyZzp
UIEmNiIRKdYnze+FsW+oEw4hm9pkW/lwgIls3tBvLKCwRseUwrj5jIbh rPK9fYuWM0RP1HBj
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054719 20181104041719 54076 geekwu.org.
uoRZZ4jV2jaubsH7W3VIpIdu9iVKYnz9q+GpNHSnEDv4Mt5JcVnLChLk
/eeRKSK9U7h8yFSOmxdcTBIyITlbGGBeeVbZFdQYsSshpN1Fa7YME9JU ttr5tISHoPnHlM2y
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054723 20181104041723 54076 geekwu.org.
/2EY2DNqeX0GqK0eDcg3dFIqgH0346fP1XuIHM3oswRMnFMtCEDuD325
jpqByKlOkl4Edlv/GyJlJXohrdlWyTzE2xCM6ad2ordYHu0eCO8npfEi OOPc2HhtBYQUM+TU
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054727 20181104041727 54076 geekwu.org.
OwfeorMdvR3Q4XvkSfNxruYJfcPRPIhsnDrrYk41L1gIYMRwEfapO/Te
tza5M07CGx0rQPvFejXHRr7vzlatxvI9k9Vqrs12CR1q7ak+NVhxcM53 syCJVN/z8iKu5uYV
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054731 20181104041731 54076 geekwu.org.
DpyZ6MCWuEA025ghGRJBISd0Lp7qXWb3R27+R/+FWnIytoLrIIzgRLI5
TEojx/k6hlbwFszH024T5CP5vQ1WXjIVyF9ZwNjseJ+skZgPx5ySzqrI R8WXgdpKYi4+SrYs
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054738 20181104041738 54076 geekwu.org.
nNr2vNwLQSlDE7UXQWxGCiTTKb3wyx5VSzhoB8W8ovkSD9EHbcAr3QuP
cR3ICASDhnWySB4NEB7i+qncT90+JEccuxvT8fBCoMHJtMy0YjNqsTLd tOOSitLogl4h8the
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054741 20181104041741 54076 geekwu.org.
/28OcE9l7MXN9wKl2vpI5fxNpu6Ia1i4ku3V61Pix2JPKKJ4YhYG1BDw
aBnpncgso71CiJyVz1Rsp1X50V6tGdibtP/ckRdqvl9f+W+KeCvcrcaS 8u4a5BjeqDFokrfY
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054743 20181104041743 54076 geekwu.org.
GA4eFm1u4jxa+Jhu4vWa+UoucY7OES/8bKxkaIMte0AsC7bPovWdS8Qx
6zrrS9u1PEXS4dDy+PeCxPQFSefaPfiEY44J8JILTaf4OfiL+ij7RI2m MrN1e428veGFFfrY
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054746 20181104041746 54076 geekwu.org.
MnF6nqK/UV+Q68Lf6mV80E4FcClEARc9gclqH4jP0gaxco/DpqGhWEgG
yKM/E8ePIPOaUyRqdoiRKfWS2xcQExhqbeS+orcUjKx04BxDbp8rpCFG lRhR+QLEO4SpTrKo
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181118054749 20181104041749 54076 geekwu.org.
MygQqAnclUn//WxI3gi04Fjkcim/7C4rusz0pj0RXOwl5yAQZMj5gFl7
v5WtUdxbysOhGiiJeHHpWWepD46E2DIlpAp/vKC8hw/DGNRSk6gT6cWl UqylhEIgKrekqVb4
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181125063021 20181111050021 54076 geekwu.org.
h1vHar/cdzpfTbVPhP6UW11aW6pB+1sfagKMBIPDif8mDjvLOP1KsTvI
LuAUUNHaQgcYaoYwc9MrSQ+/se6CweK80B+O7pk+GFSWfqygyHhEvHZt YOHX183eKtYyjFix
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181125063028 20181111050028 54076 geekwu.org.
hDTZuOOKCCWEYlWApA/g+RdSOKxfEGmttto8/BwUNYGBs/2dHziAIQlD
y4SQh2ST1crUeHOTL7d8o+naqbt2lT7pMNqrUQy6XXYdVZ4gQ6aIjVkG Yi7kRgdG8Nksm31N
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20181213181717 20181129164717 63974 geekwu.org.
6BspV1velFjzJQqmGejOSyV6A9NOg3n486/mAx4llB4d+S1KF3j7dzzX
TmMalQN2QufP7NDCVaV4kmtoOgUwS/XcfAQeDJ/b5/bYNa1ERf/tV6bJ 3Z4by5PlXdqoPte5
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20190113191717 20181230174717 21289 geekwu.org.
2+WLMBHmrc2hygRoGtZbtgxikv6aHRv5xDg07nKldKKk/oR9l2GtjpUt
yaMtgu+x/5Uk02eCwea+Vs41wq7BfzcZlL5SNqud9vFqCWAH3izikfLL mngKO3HHBvzxmqgy
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20190213201717 20190130184717 63065 geekwu.org.
8xL2eX4atI6Z+CxwZaF4cwGmNKXY389Qsnz9qZZkXdGcnmzYct5UVBl1
LM0bR/e3D5ZCAhdFR8NynhUKPwCKaSgivjzKNmLunqDFUJgM/4fZFAP8 pXce50jpYXVMBmhm
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20190316211717 20190302194717 19071 geekwu.org.
FdHJKayUYfLsXu9ct9Mfvo1nVRei8xExluWbOfD9wbe/ZDqFQKyBvWKr
hMrdUvSgCQ52iaTWl88cdPtho8YgGXRC+qse5rzqK+9toKbFryg/jFAX 18xJSYrntilYJm1u
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20190416221717 20190402204717 14519 geekwu.org.
kIBe1/zzEbX09gKk0R6MbAicxfoGjL1ojWvR/8oQBdld1yVtyMtLWKic
NhkellgCaXIb7cluj/SAUQC2lr9tJxZp31oG+DhQBAG2arQAVtphxJ0p yUwD7klvUweJM49Y
geekwu.org. 86400 IN RRSIG SOA 14 2 86400
20191109090028 20191026073028 47945 geekwu.org.
9tmNQt48ZT3aTV/WkcfPpLQ3/3rpKXaDlasT9+EnRyniLrLuuHgQtRfw
uxvJpillbGVn15uy9aoMDADV9K5DfdaNOgskK1v3QYgMpGtao+soydi/ Y9MyfDFUQNmSUIKD
Therefore, Let's Encrypt's checks failed when querying it, with
"detail": "DNS problem: SERVFAIL looking up CAA for arrakeen.geekwu.org";
I guess their validating resolver did not pick the correct RRSIG
and failed to validate.
If you want to investigate, I have another zone which has the same
problem, and a bit more time before certificate expiration, for which I
can provide details, journal, etc.
Regards,
--
Bastien Durel
Hello,
I am using knot version 2.7 with automated key rollover for DNSSEC.
I am trying to understand the rollover process so that I can predict
exactly when a rollover is in progress.
So far I have understood this:
KSK (1) [created and published] ---(propagation delay + DNSKEY TTL)-->
KSK (1) [ready]
---(submitted in parent zone)-->
---((KSK-TTL) - (DNSKEY TTL + propagation delay + DS parent TTL + x))->
KSK (2) [created and published]
KSK (2) [ready] ---(submitted in parent zone)-->
KSK (1) [retire-active], KSK (2) [active] ---(propagation delay + DNSKEY
TTL)->
KSK (1) [removed]
My observations during the rollover:
Knot configuration:
DNSKEY-TTL: 12h
Propagation-delay: 2h
Zone TTL default: 1h
KSK-lifetime: 3d
ZSK lifetime: 0
Parent configuration:
DS-TTL on parent: 1h
KSK (1) [created and published] ---(propagation delay + DNSKEY TTL)-->
KSK (1) [ready]
---(submitted in parent zone)-->
---((3d) - (12h + 2h + 1h + x))->
KSK (2) [created and published]
KSK (2) [ready] ---(submitted in parent zone)-->
KSK (1) [retire-active], KSK (2) [active] ---(2h + 12h)->
KSK (1) [removed]
The new key was created 19 hours before the KSK lifetime ends. When I
subtract the DNSKEY TTL, propagation delay, and parent DS TTL, it's still 4
hours too early. Does somebody know why?
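To spell out the arithmetic I am doing (in hours):

```python
# Observed vs. expected lead time for creating the successor KSK,
# using the values from my configuration above.
dnskey_ttl = 12        # DNSKEY-TTL
propagation_delay = 2  # Propagation-delay
ds_ttl = 1             # DS TTL on the parent

# Naive expectation: DNSKEY TTL + propagation delay + parent DS TTL
expected_lead = dnskey_ttl + propagation_delay + ds_ttl   # 15 hours
observed_lead = 19                                        # hours before ksk-lifetime end

unexplained = observed_lead - expected_lead               # 4 hours unaccounted for
```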
References:
https://www.knot-dns.cz/docs/2.7/html/reference.html
https://tools.ietf.org/html/rfc6781.html#section-4.1.2
Thanks and Cheers,
Sakirnth
So the problem is solved for me. I tried starting with just one zone, which didn't help, but removing the journal did. Many thanks to Daniel Salzman for his kind help; maybe he will post some other details after analyzing our journal records.
--
Lukas Kocourek
> On 2019-10-15, at 12:27, knot-dns-users-request@lists.nic.cz wrote:
>
> Message: 5
> Date: Tue, 15 Oct 2019 11:56:26 +0200
> From: Lukáš Kocourek <kocour@direcat.net>
> To: knot-dns-users@lists.nic.cz
> Subject: [knot-dns-users] 2.9.0 fails to start after upgrade
> Message-ID: <110451DC-3CC1-419D-9066-EF784FDDAC00@direcat.net>
> Content-Type: text/plain; charset=utf-8
>
> Hi,
>
> I've upgraded from 2.8.4 to 2.9.0 (Ubuntu 18.04, using ppa:cz.nic-labs/knot-dns-latest) and now knotd fails to start.
> After issuing the "service knot start" command, the knotd process immediately goes to 100% CPU, and after 3 minutes it's killed by systemd because "Start operation timed out".
>
> Here log from service start...
>
> Oct 15 11:37:04 ns1-u18 knotc[4972]: Configuration is valid
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: Knot DNS 2.9.0 starting
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: loaded configuration file '/etc/knot/knot.conf'
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: using reuseport for UDP
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: binding to interface X.X.X.X@53
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: binding to interface ::@53
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: loading 199 zones
>
> [cut] - info that all zones will be loaded
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: [xxx.yyy.] zone will be loaded
> [/cut]
>
> Oct 15 11:37:04 ns1-u18 knotd[4984]: info: starting server
> Oct 15 11:38:34 ns1-u18 systemd[1]: knot.service: Start operation timed out. Terminating.
> Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: State 'stop-sigterm' timed out. Killing.
> Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: Killing process 4984 (knotd) with signal SIGKILL.
> Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: Main process exited, code=killed, status=9/KILL
> Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: Failed with result 'timeout'.
>
> Any advice on what to do?
>
> Thanks very much
>
> --
> Lukas Kocourek
>
>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 15 Oct 2019 12:27:34 +0200
> From: Daniel Salzman <daniel.salzman(a)nic.cz>
> To: knot-dns-users(a)lists.nic.cz
> Subject: Re: [knot-dns-users] 2.9.0 fails to start after upgrade
> Message-ID: <99813b76-c350-442f-35c7-2447cb07acce(a)nic.cz>
> Content-Type: text/plain; charset=utf-8
>
> Hi Lukáš,
>
> Could you try to temporarily disable some zones? We have no idea at the moment :-(
>
> Would it be possible to show me a snippet of your configuration?
>
> Daniel
>
> On 10/15/19 11:56 AM, Lukáš Kocourek wrote:
>> [cut] - original message quoted in full above
>> [/cut]
Hi,
I’ve upgraded from 2.8.4 to 2.9.0 (Ubuntu 18.04, using ppa:cz.nic-labs/knot-dns-latest) and now knotd fails to start.
After issuing the "service knot start" command, the knotd process immediately goes to 100% CPU and after 3 minutes it’s killed by systemd because "Start operation timed out".
Here log from service start...
Oct 15 11:37:04 ns1-u18 knotc[4972]: Configuration is valid
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: Knot DNS 2.9.0 starting
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: loaded configuration file '/etc/knot/knot.conf'
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: using reuseport for UDP
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: binding to interface X.X.X.X@53
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: binding to interface ::@53
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: loading 199 zones
[cut] - info that all zones will be loaded
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: [xxx.yyy.] zone will be loaded
[/cut]
Oct 15 11:37:04 ns1-u18 knotd[4984]: info: starting server
Oct 15 11:38:34 ns1-u18 systemd[1]: knot.service: Start operation timed out. Terminating.
Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: State 'stop-sigterm' timed out. Killing.
Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: Killing process 4984 (knotd) with signal SIGKILL.
Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: Main process exited, code=killed, status=9/KILL
Oct 15 11:40:04 ns1-u18 systemd[1]: knot.service: Failed with result 'timeout'.
Any advice on what to do?
Thanks very much
--
Lukas Kocourek
Hi. Firstly, a big thank-you for knotd.
I've been writing some code for automatic configuration of DNS
delegations, including dnssec.
It's been very straightforward with knot. The code is very clean, and
the documentation extensive.
Now my request:
Your control programs seem to assume that the user is running knotc or
keymgr and generating commands interactively or via a script.
Would it be possible to expose all of the knotc and the keymgr API
functions into a separate library, like libknotd-ctrl, plus a set of
header files?
At the moment I'm forking off external shell scripts from my program,
and that works fine, but if I could call and link these control
functions directly from my own code, it would be great. It's probably
possible already with some automake magic, but it's not easy for someone
at my limited skill level.
The code already exists, it'd just be some additional build steps AFAICS.
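In the meantime, it may help that the Knot DNS source tree already ships a small Python wrapper around the knotd control socket (python/libknot/control.py), which avoids forking knotc for control operations. A minimal sketch, assuming that module is importable on your system and that knotd runs with the socket path shown (both are assumptions, adjust for your build):

```python
# Sketch only: libknot.control is the Python control-socket wrapper shipped
# with Knot DNS; the socket path below is the usual default and may differ
# on your installation.
def merge_blocks(blocks):
    """Illustrative helper (not part of libknot): flatten response blocks."""
    merged = {}
    for block in blocks:
        merged.update(block)
    return merged

def query_status(socket_path="/run/knot/knot.sock"):
    """Ask a running knotd for its status via the control socket."""
    import libknot.control  # assumption: installed alongside Knot DNS

    ctl = libknot.control.KnotCtl()
    ctl.connect(socket_path)
    try:
        ctl.send_block(cmd="status")
        return ctl.receive_block()
    finally:
        ctl.send(libknot.control.KnotCtlType.END)
        ctl.close()
```

This is not the C-level libknotd-ctrl library you are asking for, but it covers the knotc side of scripted control without shelling out.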
--
regards,
RayH
Hiya,
trying to get our Knot DNS server (FreeBSD pkg, 2.8.1) to export
Prometheus statistics.
I found https://github.com/ghedo/knot_exporter, which looks promising
(and I found that knot-resolver has this all built in, but this doesn't
help me :-/ ).
So, I've installed the required python modules, and when I start
the knot_exporter (with the right paths and everything) it complains
to me:
$ /tmp/knot_exporter --web-listen-port 4041 --knot-library-path ...
Traceback (most recent call last):
File "/tmp/knot_exporter", line 489, in <module>
args.knot_socket_timeout,
File "/usr/local/lib/python3.6/site-packages/prometheus_client/registry.py", line 24, in register
names = self._get_names(collector)
File "/usr/local/lib/python3.6/site-packages/prometheus_client/registry.py", line 64, in _get_names
for metric in desc_func():
File "/tmp/knot_exporter", line 434, in collect
for zone, zone_data in zone_stats["zone"].items():
KeyError: 'zone'
so, I read the manual, and it says that this should be sufficient:
template:
- id: default
storage: /usr/local/etc/dnsmgmt/data/CA/knot
disable-any: true
acl: [ acl_transfer, acl_notify ]
global-module: mod-stats
with or without
mod-stats:
- id: default
request-protocol: on
server-operation: on
edns-presence: on
flag-presence: on
response-code: on
reply-nodata: on
query-type: on
... but it does not seem to make a difference.
So - if one of you has a working knot-server knot.conf that exports data
to prometheus, can you share it?
thanks,
Gert Doering
-- NetMaster
--
have you enabled IPv6 on something today...?
SpaceNet AG Vorstand: Sebastian v. Bomhard, Michael Emmer
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (0)89/32356-444 USt-IdNr.: DE813185279
I am trying to add wildcard static hints to catch all local domains, like
below, but it does not seem to work.
hints['nextcloud.local'] = '127.0.0.1' # This works fine
hints['*.local'] = '127.0.0.1'
hints['.local'] = '127.0.0.1'
DNSMasq supports this like https://stackoverflow.com/a/22551303
Is there a way to do this in Knot?
Thanks,
Bala
Hi,
we're using knot as a bump-in-the-wire DNSSEC Signer. The setup is as
follows:
BIND9(unsigned) -> AXFR -> knot(signing) -> AXFR -> BIND9(signed)
The zone starts out with a low serial like 10 or 11. knot has a
serial-policy: unixtime for the zones.
The problem is that whenever an update is pushed, the serial number is
decreased from the unixtime value back to the original serial, which
prevents the zone from propagating to the slaves.
Example (test zone):
template:
- id: slave-dnssec-ecdsap256
storage: "/var/lib/knot/slave"
file: "%s.zone"
zonefile-load: difference
dnssec-signing: on
dnssec-policy: ecdsap256
master: ns1_signer
notify: ns1
acl: acl_ns1
zone:
- domain: xn--78jubwhb.xn--q9jyb4c
template: slave-dnssec-ecdsap256
serial-policy: unixtime
Here is an example where first a manual "zone-sign" is done to update
the serial to current unixtime (12 -> 1559298292) and after that the
zone is transferred in again which results in a serial decrease
(1559298292 -> 13).
[xn--78jubwhb.xn--q9jyb4c.] control, received command 'zone-sign'
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, dropping previous signatures, re-signing zone
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, key, tag 49852, algorithm ECDSAP256SHA256, KSK, public, active
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, key, tag 55142, algorithm ECDSAP256SHA256, public, active
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, signing started
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, successfully signed
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, next signing at 2019-06-07T12:24:52
[xn--78jubwhb.xn--q9jyb4c.] zone file updated, serial 12 -> 1559298292
[xn--78jubwhb.xn--q9jyb4c.] notify, outgoing, remote 176.9.75.248@53, serial 1559298292
[xn--78jubwhb.xn--q9jyb4c.] AXFR, outgoing, remote 176.9.75.248@60025, started, serial 1559298292
[xn--78jubwhb.xn--q9jyb4c.] AXFR, outgoing, remote 176.9.75.248@60025, finished, 0.00 seconds, 1 messages, 1819 bytes
[xn--78jubwhb.xn--q9jyb4c.] notify, incoming, remote 176.9.75.248@9104, received, serial 13
[xn--78jubwhb.xn--q9jyb4c.] refresh, remote 176.9.75.248@53, remote serial 13, zone is outdated
[xn--78jubwhb.xn--q9jyb4c.] IXFR, incoming, remote 176.9.75.248@53, receiving AXFR-style IXFR
[xn--78jubwhb.xn--q9jyb4c.] AXFR, incoming, remote 176.9.75.248@53, starting
[xn--78jubwhb.xn--q9jyb4c.] AXFR, incoming, remote 176.9.75.248@53, finished, 0.00 seconds, 1 messages, 321 bytes
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, key, tag 49852, algorithm ECDSAP256SHA256, KSK, public, active
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, key, tag 55142, algorithm ECDSAP256SHA256, public, active
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, signing started
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, successfully signed
[xn--78jubwhb.xn--q9jyb4c.] DNSSEC, next signing at 2019-06-07T12:25:21
[xn--78jubwhb.xn--q9jyb4c.] refresh, remote 176.9.75.248@53, zone updated, 0.10 seconds, serial 12 -> 13
[xn--78jubwhb.xn--q9jyb4c.] zone file updated, serial 1559298292 -> 13
[xn--78jubwhb.xn--q9jyb4c.] notify, outgoing, remote 176.9.75.248@53, serial 13
How can we prevent this? We want Knot to always use the current unixtime
for the zone's serial.
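For background on why the slaves refuse to pick up the change: SOA serials are compared with RFC 1982 serial-space arithmetic, under which 13 counts as older than 1559298292, so notifies advertising serial 13 are ignored. A plain-Python illustration (not Knot code):

```python
def serial_gt(s1, s2):
    """RFC 1982 serial comparison: True if serial s1 is newer than s2."""
    return ((s1 < s2 and s2 - s1 > 2**31) or
            (s1 > s2 and s1 - s2 < 2**31))

# A secondary only transfers when the advertised serial is newer:
print(serial_gt(13, 1559298292))              # False: 13 is "older", ignored
print(serial_gt(1559298292 + 1, 1559298292))  # True: normal increment
print(serial_gt(0, 2**32 - 1))                # True: wrap-around case
```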
Best Regards
Sebastian
--
GPG Key: 0x58A2D94A93A0B9CE (F4F6 B1A3 866B 26E9 450A 9D82 58A2 D94A 93A0 B9CE)
'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE SCYTHE.
-- Terry Pratchett, The Fifth Elephant
Hello,
I'm trying to get Knot DNS up and running, but I've hit a problem: knotd
is not listening on the UDP port. The tool http://dnsviz.net pointed this
out to me. Can anyone please advise me what might be wrong and how to fix it?
I suspect the *noudp* module, but I've been searching in vain for more
detailed information describing how modules work in general and how to
work with them. I installed Knot from the repository
https://deb.knot-dns.cz/knot-latest.
knotd (Knot DNS), version 2.8.1; Debian 9
Thank you.
--
Regards,
Ondřej Budín
Hi there,
while trying to understand the algorithm employed in the
`find_best_view` function in the geoip module, I started wondering
whether this line is in there intentionally:
https://gitlab.labs.nic.cz/knot/knot-dns/blob/4015475b0d3e11c0bd6fcac8aceb6…
I am still trying to understand how this works with the actual geo data,
but here is a test case using the subnet mode that yields slightly
surprising results:
Using a geoip config like this for zone example.com:
bar.example.com:
- net: 127.0.0.0/8
A: 9.9.9.9
- net: 192.0.0.0/8
A: 1.1.1.1
- net: 192.168.0.0/16
A: 4.4.4.4
- net: 192.168.1.0/24
A: 8.8.8.8
If I query bar.example.com from 192.168.1.X, I get 4.4.4.4, which is
surprising because it is neither the most nor the least specific item.
The binary search returns the most specific one (8.8.8.8), which is sort
of what I would expect. However, the above line immediately takes the `prev`
item without checking for `view_strictly_in_view`. Without the above
line, the whole function returns the most specific item, as I would expect.
Please note that this is mostly an intuition atm, as I have not yet had
the time to set up a similar test case for real geo data (which uses the
same algorithm). But I figured that someone more familiar with the code
might have enough context to tell whether this is correct or not or what
a suitable fix might look like.
Thanks a bunch,
Conrad
Hi,
I have a quick update about the upstream package repositories in
Open Build Service (OBS) for Knot DNS.
New repositories
----------------
- CentOS_7_EPEL - aarch64
- Fedora_30 - x86_64, armv7l, aarch64
- xUbuntu_19.04 - x86_64
- Debian_Next - x86_64 (Debian unstable rolling release)
New repositories for Debian 10 and CentOS 8 should be available shortly
after these distros are released, depending on their buildroot
availability in OBS.
Deprecated repositories
-----------------------
- Arch - x86_64
Due to many issues with Arch packaging in OBS (invalid package size,
incorrect signatures) and the fast pace of Arch updates, please consider
this repository deprecated in favor of the knot package in Arch
Community repo [1]. The Arch OBS repository will most likely be removed
in the future.
Also, please note I'll be periodically deleting repositories for distros
that reach their official end of life. In the coming months, this
concerns Ubuntu 18.10 and Fedora 28.
[1] - https://www.archlinux.org/packages/community/x86_64/knot/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello Knot team
I have just noticed that there are no more Debian packages for Debian 8 (Jessie) at https://deb.knot-dns.cz/knot/dists/. Debian 8 is in LTS until June 30, 2020 (see https://wiki.debian.org/LTS). Is the dropping of packages for Jessie intended or not? (I plan to migrate this summer, but for now it would be good to still get security fixes for Knot.)
Might be related to the PGP revocation?
Danilo
I have to integrate Knot into a framework that will test for the
presence of a running instance of the Knot daemon by means of 'knotc
status'. This works fine once knotd has loaded all the zones. However,
when launching it there will be a window of opportunity during which
'knotc status' will fail, despite the fact that knotd is already running
but still loading its zones.
Two questions:
1. Is this a correct description, or am I missing something here?
2. If it is right, is there a way, with the concourse of KNOT tools, to
find out whether knotd is already running or still loading zones?
I guess that, if it is just a matter of verifying that knotd is running,
using the 'ps' command would be simpler. I wonder, however, whether it can
be done with Knot tools alone.
Hi there!
At work, we make use of something often referred to as "weighted records",
a feature offered by many managed DNS vendors. We use it to implement
e.g. a canary environment, where we test changes on a small portion of
production traffic.
I played around with knot-dns for a bit (quite impressed), and figured
it might be nice to implement such a feature for it. Turns out, the code
is so amazing that if you abuse the infrastructure of the geoip module,
it only takes an hour (very impressed).
If you are interested in trying it, read on further below, but just to
state my intention: I was wondering if such a feature would be of
interest at all?
I realize the geoip module may not be the appropriate place for such an
implementation; consider this merely a demo of sorts.
So here goes nothing:
1. apply attached patch
2. config file snippet:
mod-geoip:
- id: test
config-file: /etc/knot/test.conf
ttl: 600
mode: weighted
zone:
- domain: example.com.
file: "/var/lib/knot/example.com.zone"
module: mod-geoip/test
3. /etc/knot/test.conf:
lb.example.com:
- weight: 10
CNAME: www1.example.com.
- weight: 5
CNAME: www2.example.com.
Results in this:
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig
@192.168.1.242 A lb.example.com +short; done | sort | uniq -c
68 www1.example.com.
32 www2.example.com.
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig
@192.168.1.242 A lb.example.com +short; done | sort | uniq -c
72 www1.example.com.
28 www2.example.com.
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig
@192.168.1.242 A lb.example.com +short; done | sort | uniq -c
66 www1.example.com.
34 www2.example.com.
You get the idea. Anyways, any thoughts and feedback would be greatly
appreciated.
Thanks a lot,
Conrad
Hi @all
I was trying to migrate knot to a new server with Ubuntu 18.04 and knot
2.8.0. I managed this by just copying the content of /var/lib/knot/*
from the old to the new server.
One of my domains threw an error when starting knot:
knotd[5186]: error: [<zone>] zone event 'load' failed (malformed data)
keymgr throws the same error. I dumped the data out of the 2.7.6
installation with mdb_dump and imported it into the 2.8.0 installation,
which did not help.
I then uninstalled 2.8.0 on the new server and installed 2.7.6, migrated
data from the old server with the same procedure and it worked, no errors.
Just out of curiosity, I then upgraded the new server from 2.7.6 to 2.8.0
to see what would happen: it was the exact same behaviour, the zone
couldn't load.
How can I proceed here? There is obviously something wrong if the
update breaks the loading of the zone. Is this on my end or a bug in Knot?
Thanks
Simon
I'm performing benchmarks with Knot DNS as an authoritative server and
dnsperf as the workload client. The Knot server has 32 cores. Interrupts
from the 10Gb network card are spread across all 32 cores. Knot is
configured with 64 udp-workers, and each Knot thread is assigned to one
core, so there are at least two Knot threads per core. Then I start
dnsperf with the command
./dnsperf -s 10.0.0.4 -d out -n 20 -c 103 -T 64 -t 500 -S 1 -q 1000 -D
htop on the Knot server shows 3-4 cores completely unused. When I restart
dnsperf, the set of unused cores changes.
What is the reason for the unused cores?
When we load the root zone file to verify the UDP and TCP performance of
Knot, the resulting TCP performance is greater than the UDP performance.
--
Best Regards!!
champion_xie
Hi!
We've been experimenting with backups and disaster recovery in our knot
test setup and have been running into a weird issue.
Basically our backup strategy right now is to perform incremental
backups of the /var/lib/knot and the /etc/knot directories via rsync.
When we try to restore these backups knot starts successfully, but logs
the following messages for each of the zones that are currently in a
signed template:
2019-03-08T11:43:05 info: [example.com.] DNSSEC, signing zone
2019-03-08T11:43:05 error: [example.com.] zone event 'DNSSEC re-sign'
failed (invalid parameter)
When we try to query information about these zones via dig we receive a
SERVFAIL rcode for them.
All of the zones that are not processed through the DNSSEC mechanism are
unaffected by this.
We also experienced the same behavior when we were experimenting with
adding new zones that are signed immediately.
To work around this problem we currently add the zone in an unsigned
state (aka default template) to knot and after that we switch the
template of the zone to "signed".
This works like a charm for new zones and can also be used to recover
each of the broken zones after restoring the backup, but we'd rather not
use this workaround during disaster recovery as it would impose the
danger of breaking the zones if it is not performed correctly.
The templates and policies in our knot.conf look like this right now:
policy:
- id: shared
algorithm: RSASHA256
ksk-size: 2048
zsk-size: 1024
zsk-lifetime: 1d
ksk-lifetime: 2d
ksk-shared: true
ksk-submission: resolver
nsec3: true
cds-cdnskey-publish: always
template:
- id: default
storage: "/var/lib/knot"
semantic-checks: on
global-module: mod-stats
master: primary
notify: secondaries
acl: [primary, secondaries]
serial-policy: unixtime
dnssec-signing: off
- id: signed
dnssec-signing: on
dnssec-policy: shared
master: primary
notify: secondaries
acl: [primary, secondaries]
serial-policy: unixtime
zone:
- domain: example.com
template: signed
Thanks,
Thomas
Hi,
we are testing knot at the moment and are struggling with the concept of
automatic KSK rollovers. We are planning to fetch the current CDS for
further automatic processing and try to figure out how to achieve a
policy that performs KSK rollovers in a short period of time for testing
purposes. Until now we have not seen any KSK rollovers and we are not
sure why this is not happening. Can someone show me an example of a
policy that schedules a KSK rollover within a very short period, let's
say once a day? What policy parameters must be set? And what must be the
zone's parameters in terms of TTL and SOA values (expire, refresh, etc)?
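For experimentation, a policy along these lines might work. This is only a sketch built from the Knot 2.x documented option names; the values are illustrative, and `parent-resolver` is a hypothetical remote you would need to define yourself. Note that a KSK rollover cannot finish until Knot sees the new DS in the parent zone, so the submission check matters as much as the lifetimes and TTLs:

```yaml
policy:
  - id: fast-ksk-test
    algorithm: ECDSAP256SHA256
    ksk-lifetime: 1d          # start a KSK rollover roughly once a day
    propagation-delay: 1h     # how long zone propagation to all slaves may take
    dnskey-ttl: 300           # short TTLs keep the rollover states short
    zone-max-ttl: 300
    ksk-submission: check-ds

submission:
  - id: check-ds
    parent: parent-resolver   # hypothetical remote serving the parent zone
    check-interval: 10m
```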
Thanks a lot,
Thomas
Ahoj,
I'm tinkering with running knot-resolver for DNS-over-TLS (only). My kresd@1
is listening on the public interface by using a drop-in override file,
following kresd.systemd(7) (kresd-tls.socket.d/override.conf).
Thing is, I would like knot to not listen on port 53 on any interface, not
even localhost. But this is precisely what it does.
I naively tried to stop it from doing so, first with a
kresd.socket.d/override.conf with:
[Socket]
ListenStream=
But that failed with journalctl -u kresd.socket containing `kresd.socket:
Unit has no Listen setting (ListenStream=, ListenDatagram=, ListenFIFO=,
...). Refusing.`
And also by trying to disable that "socket-unit", with a
kresd(a)1.service.d/override.conf containing:
[Service]
Sockets=
Sockets=kresd-tls.socket
Sockets=kresd-control(a)%i.socket
But that did nothing.
Finally, using `systemctl mask kresd.socket` I got it to stop listening on
(localhost) port 53. But then systemd finds itself in "degraded"
mode...
Any tip on how to accomplish this cleanly?
Hello Knot DNS users.
Our deb package signing key may have been compromised and has been
revoked. Open Build Service repositories are not affected.
The affected systems (https://deb.knot-dns.cz/knot-latest and
https://deb.knot-dns.cz/knot) now contain a new Debian repository. The
packages were re-built and re-signed with a new GPG key
rsa4096/0x8A0EFB02C84B1E9B.
We thank to security researcher calling himself "p3n73st3r" who notified
us earlier today.
Regards
Your Knot DNS team
Hello community,
can I somehow convert the stored keys for a signed zone to BIND format?
My use case is to change the topology used for the authoritative servers.
I manage existing zones in Knot; now I would like to transfer them to BIND
and use the existing keys for signing on BIND, because of the DS records
in the parent zones. Knot will be reconfigured as a slave.
Is it possible to achieve this?
Thanks.
--
Smil Milan Jeskyňka Kazatel
Hello, community,
could someone describe in more detail the on-slave signing on both sides,
slave and master, in the case where the master server runs BIND and the
slave is Knot DNS?
I would like to achieve signing for "hidden master" configuration.
I found in Knot DNS documentation:
***
It is possible to enable automatic DNSSEC zone signing even on a slave
server. If enabled, the zone is signed after every AXFR/IXFR transfer from
master, so that the slave always serves a signed up-to-date version of the
zone.
It is strongly recommended to block any outside access to the master server,
so that only the slave’s signed version of the zone is served.
Enabled on-slave signing introduces events when the slave zone changes while
the master zone remains unchanged, such as a key rollover or refreshing of
RRSIG records, which cause inequality of zone SOA serial between master and
slave. The slave server handles this by saving the master’s SOA serial in a
special variable inside the KASP DB and appropriately modifying AXFR/IXFR
queries/answers to keep the communication with master consistent while
applying the changes with a different serial.
It is recommended to use UNIX time serial policy on master and incremental
serial policy on slave so that their SOA serials are equal most of the time.
***
Thanks for any advice.
Regards,
kaza
Hello all,
we want to migrate to a state-of-the-art nameserver software.
Our starting point is djbdns/tinydns.
Our first step is to use zone transfer from tinydns to knot-dns
(2.7.6).
I configured knot-dns as a slave:
# knotc conf-read
…
acl.id = master
acl[master].address = 5.28.40.220
acl[master].action = notify
remote.id = master
remote[master].address = 5.28.40.220
…
zone.domain = vtnx.net.
zone[vtnx.net.].master = master
zone[vtnx.net.].acl = master
and add an initial SOA RR for that domain:
vtnx.net. 0 SOA ns1.vtnx.net. hostmaster.vtnx.net. 1 16384 2048 1048576 2560
This is the exact SOA of the running vtnx.net domain, but with a different serial.
After triggering the notification from the master, I got this log output:
Jan 24 07:53:19 ns1-neu knotd[26299]: info: [vtnx.net.] notify, incoming, 5.28.40.220@38668: received, serial none
Jan 24 07:53:19 ns1-neu knotd[26299]: info: [vtnx.net.] refresh, outgoing, 5.28.40.220@53: remote serial 1548307450, zone is outdated
Jan 24 07:53:19 ns1-neu knotd[26299]: warning: [vtnx.net.] IXFR, incoming, 5.28.40.220@53: malformed response SOA
Jan 24 07:53:19 ns1-neu knotd[26299]: warning: [vtnx.net.] refresh, remote master not usable
Jan 24 07:53:19 ns1-neu knotd[26299]: error: [vtnx.net.] refresh, failed (no usable master)
What about the "malformed response SOA"?
Why is this an IXFR and not an AXFR?
- Frank
--
Frank Matthieß Mail: frank.matthiess(a)virtion.de
GnuPG: 9F81 BD57 C898 6059 86AA 0E9B 6B23 DE93 01BB 63D1
virtion GmbH Stapenhorster Straße 42b, DE 33615 Bielefeld
Geschäftsführer: Michael Kutzner
Handelsregister HRB 40374, Amtsgericht Bielefeld, USt-IdNr.: DE278312983
Hi,
I want to use Ansible to deploy zone files to my Knot signer (hidden
master). The zone files should be generated from the Ansible playbook
data and will not contain any DNSSEC related information, just SOA, NS,
A, AAAA, TXT and MX records. I'd like to use Knot DNSSEC auto-signing. I
can stop the Knot process before deploying new zone files. I use
zonefile-load: difference in this case, as of the DNSKEY / CDNSKEY / CDS
data should not be replaced with something new. Should this work for me,
or is there anything I miss or is there even a better option?
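For reference, the setup described might be expressed roughly like this. This is a sketch only, not a verified configuration; the storage path, template id, and the `secondaries` remote/ACL names are made up:

```yaml
template:
  - id: hidden-master
    storage: "/var/lib/knot/zones"   # where Ansible deploys the zone files
    zonefile-load: difference        # merge zone-file edits, keep DNSSEC records
    dnssec-signing: on
    dnssec-policy: default
    notify: secondaries              # hypothetical remote
    acl: secondaries                 # hypothetical ACL
```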
Kind regards,
Volker
Hi,
I'm working on a registry for +31 ENUM, using Knot DNS 2.6.8. The
intention is to trigger the Python API from PostgreSQL database views.
The postgres user, though added to the knot group and granted rw- on all
of /var/db/knot/* and the knotd socket, cannot do things like conf-read
through the Python API or knotc.
This is where root and the postgres user diverge:
84630 knotc CALL
open(0x7fffffffe100,0x100022<O_RDWR|O_EXLOCK|O_CLOEXEC>)
84630 knotc NAMI "/tmp/SEMDMDBrXFzK!_#un)"
84630 knotc RET open 6
84630 knotc CALL fstat(0x6,0x7fffffffe068)
84630 knotc STRU struct stat {dev=4261341516, ino=125942,
mode=0100660, nlink=1, uid=0, gid=0, rdev=4294967295,
atime=1545172776.416579000, mtime=1545182138.348328000,
ctime=1545182138.348328000, birthtime=1545172776.416478000, size=16,
blksize=4096, blocks=2, flags=0x800 }
That's root: uid=0 and gid=0 for the /tmp/SEMDMDB... file. But now:
84649 knotc CALL
open(0x7fffffffe0f0,0x100022<O_RDWR|O_EXLOCK|O_CLOEXEC>)
84649 knotc NAMI "/tmp/SEMDMDBrXFzK!_#un)"
84649 knotc RET open -1 errno 13 Permission denied
That's user postgres, even though it is in the knot group. It seems to
see the file but have no access, probably due to uid=0, gid=0.
Note that matching name.
--> What is this file it is trying to open, and is it always mapped to
uid=0,gid=0, even if the user running "knotc conf-read" is not root?
Could this be a FreeBSD thing, or a jail thing?
Any advice is welcome!
Thanks!
-Rick
I don't think the Makefile is wrong.
If you call knotc with an explicit control socket specification, no configuration file is loaded because it's not needed.
So there is an issue with the configuration access. But I don't understand why knotd is not affected?
You could also test using configuration database.
Daniel
On 12/21/18 2:44 PM, Rick van Rein wrote:
> Hi Daniel,
>
>> I guess it relates to a temporary confdb, which is created for storing configuration loaded from
>> a config file and removed upon. Could you try calling knotc with explicit socket parameter (knotc -s ...)?
>
> Yes, that solved it.
>
> The default socket path is @run_dir@/knot.sock, and the ports tree
> configures --with-rundir=/var/run/knot but, according to
> https://github.com/freebsd/freebsd-ports/blob/master/dns/knot2/Makefile#L32…
> it is under defined variables. I will ask Leo if this might need
> correction.
>
> Thanks!
> -Rick
>
Hi all,
one of my zones made a ZSK rollover yesterday. I had an some recursive
resolvers validation errors at different times. This is the log output
from knot of the rollover:
Dec 6 17:16:48 a knotd[9924]: info: [voja.de.] DNSSEC, signing zone
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, ZSK rollover
started
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 53800,
algorithm RSASHA256, KSK, public, ready, active
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 15820,
algorithm RSASHA256, public
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 38188,
algorithm RSASHA256, public, active
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, signing started
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, successfully
signed
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, next signing at
2018-12-06T18:16:49
Dec 6 17:16:49 a knotd[9924]: info: [voja.de.] zone file updated,
serial 1543943808 -> 1544113009
Dec 6 17:16:50 a knotd[9924]: info: [voja.de.] notify, outgoing,
10.10.10.10@53: serial 1544113009
Dec 6 17:16:50 a knotd[9924]: info: [voja.de.] IXFR, outgoing,
10.10.10.10@45727: started, serial 1543943808 -> 1544113009
Dec 6 17:16:50 a knotd[9924]: info: [voja.de.] IXFR, outgoing,
10.10.10.10@45727: finished, 0.00 seconds, 1 messages, 1329 bytes
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, signing zone
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 53800,
algorithm RSASHA256, KSK, public, ready, active
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 38188,
algorithm RSASHA256, public
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 15820,
algorithm RSASHA256, public, active
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, signing started
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, successfully
signed
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, next signing at
2018-12-06T19:16:49
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] zone file updated,
serial 1544113009 -> 1544116609
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] notify, outgoing,
10.10.10.10@53: serial 1544116609
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] IXFR, outgoing,
10.10.10.10@53131: started, serial 1544113009 -> 1544116609
Dec 6 18:16:49 a knotd[9924]: info: [voja.de.] IXFR, outgoing,
10.10.10.10@53131: finished, 0.00 seconds, 1 messages, 43889 bytes
Dec 6 18:16:50 a knotd[9924]: info: [voja.de.] AXFR, outgoing,
10.10.10.10@59417: started, serial 1544116609
Dec 6 18:16:50 a knotd[9924]: info: [voja.de.] AXFR, outgoing,
10.10.10.10@59417: finished, 0.00 seconds, 1 messages, 28054 bytes
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, signing zone
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 53800,
algorithm RSASHA256, KSK, public, ready, active
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 15820,
algorithm RSASHA256, public, active
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, signing started
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, successfully
signed
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] DNSSEC, next signing at
2018-12-07T15:16:48
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] zone file updated,
serial 1544116609 -> 1544120209
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] notify, outgoing,
10.10.10.10@53: serial 1544120209
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] IXFR, outgoing,
10.10.10.10@55161: started, serial 1544116609 -> 1544120209
Dec 6 19:16:49 a knotd[9924]: info: [voja.de.] IXFR, outgoing,
10.10.10.10@55161: finished, 0.00 seconds, 1 messages, 1329 bytes
10.10.10.10 is the (anonymized) IP of the distribution server, which is
a BIND server. The actual authoritative nameservers get the zone from BIND
with IXFR or AXFR. AXFR is used for distribution to an anycast nameserver
pair.
When looking at the ZSK rollover timing, I notice that after two hours
Knot stopped signing with the old ZSK. Does this make sense? The last
event before the rollover was this re-signing:
Dec 4 18:16:48 a knotd[9924]: info: [voja.de.] DNSSEC, signing zone
Dec 4 18:16:48 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 53800,
algorithm RSASHA256, KSK, public, ready, active
Dec 4 18:16:48 a knotd[9924]: info: [voja.de.] DNSSEC, key, tag 38188,
algorithm RSASHA256, public, active
Dec 4 18:16:48 a knotd[9924]: info: [voja.de.] DNSSEC, signing started
Dec 4 18:16:48 a knotd[9924]: info: [voja.de.] DNSSEC, successfully
signed
Dec 4 18:16:48 a knotd[9924]: info: [voja.de.] DNSSEC, next signing at
2018-12-06T17:16:48
Is it possible that this is an issue with a propagation-delay that is
too low (the default value applies)?
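For reference, the delay can be raised explicitly in the signing policy; a minimal sketch (the policy id and the 1-day value are placeholders, not taken from the actual config):

```yaml
policy:
  - id: default
    # Time to wait for changes to propagate to all secondaries
    # before advancing a key rollover step.
    propagation-delay: 1d
```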
Regards
Volker
OK, thanks. Just making sure I was not missing something obvious.
-----Original Message-----
From: "libor.peltan" [libor.peltan(a)nic.cz]
Date: 11/29/2018 04:33 AM
To: knot-dns-users(a)lists.nic.cz
Subject: Re: [knot-dns-users] Key sizes with ECDSA
Hi Full Name,
indeed, this is not possible. The ECDSA and EdDSA algorithm families always
stick to one key size per algorithm, and you can't have your KSK and ZSK
use different algorithms.
On the other hand, this is no big deal. Those algorithms are considered
safe enough even with small keys, so you can choose just e.g. ECDSAP256SHA256
and profit from having small signatures. You can also think of using the
single-type signing scheme.
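For example, a policy along those lines could be sketched as follows (assuming Knot 2.x policy syntax; the id is a placeholder):

```yaml
policy:
  - id: ecc
    # One ECDSA P-256 key signs the whole zone (Single-Type Signing).
    algorithm: ECDSAP256SHA256
    single-type-signing: on
```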
BR,
Libor
Dne 28.11.18 v 22:58 Full Name napsal(a):
> A policy section in knot.conf would contain (among other things) an algorithm specification and (optionally) the KSK and ZSK keys sizes. This works fine for RSA. Now imagine that I want to establish a policy with ECC keys for both KSKs and ZSKs. However, I might want for the KSKs to be 384-bit keys, and for the ZSKs to be 256-bit keys. Can a policy be created in Knot to do so? It would seem that, given that the algorithm specification for NIST elliptic curves includes both the curve and digest data, the key size specifications do not apply here - i.e. both KSKs and ZSKs will necessarily use the same curve, and therefore the same key size. Is this correct?
--
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-dns-users
Hi Christian,
I am glad to hear that. Let us know if you have any other issues.
Best regards,
Mark
On 2018-11-27 14:04, Christian Petrasch wrote:
> Hello Mark,
>
> I was able to sign with Knot and SoftHSM. I switched to a current
> version of GnuTLS and recompiled Knot.
>
> Thank you very much for your good support
>
> best regards
> --
> Christian Petrasch
> Product Owner
> Zone Creation & Signing
> IT-Services
>
> DENIC eG
> Kaiserstraße 75-77
> 60329 Frankfurt am Main
> GERMANY
>
> E-Mail: petrasch(a)denic.de
> http://www.denic.de
>
> PGP-KeyID: 549BE0AE, Fingerprint: 0E0B 6CBE 5D8C B82B 0B49 DE61 870E
> 8841 549B E0AE
>
> Angaben nach § 25a Absatz 1 GenG: DENIC eG (Sitz: Frankfurt am Main)
> Vorstand: Helga Krüger, Martin Küchenthal, Andreas Musielak, Dr.
> Jörg Schweiger
> Vorsitzender des Aufsichtsrats: Thomas Keller
> Eingetragen unter Nr. 770 im Genossenschaftsregister, Amtsgericht
> Frankfurt am Main
>
> Von: "Mark Karpilovskij" <mark.karpilovskij(a)nic.cz>
> An: "Christian Petrasch" <petrasch(a)denic.de>
> Datum: 26.11.2018 16:22
> Betreff: Re: [knot-dns-users] Problem to import key material of
> softhsm into knot
>
> -------------------------
>
> Hi Christian,
>
> also, did you build Knot manually or did you use a package? What is
> your current GnuTLS version? It should be at least 3.4.6 for the HSM
> to work, at least according to our documentation.
>
> BR,
>
> Mark
>
> On 26. 11. 18 16:09, Mark Karpilovskij wrote:
>
> Hi Christian,
>
> I have verified that it is indeed necessary for Knot to use full
> length key IDs with PKCS #11, so make sure you do that. Other than
> that, I am quite puzzled by the "FAILED TO INITIALIZE KASP (NOT
> IMPLEMENTED)" error that you are getting and so far I have been unable
> to reproduce it. I will spend some more time on it. Which version of
> CentOS are you using? Meanwhile, see if both setting correct policy
> configuration and using full length key IDs will help you.
>
> Best regards,
>
> Mark
>
> On 26. 11. 18 12:31, Christian Petrasch wrote:
> Hi Mark,
>
> thanks a lot for your help.
>
> I added the keystore to my config, but I'm getting another error
> now:
>
> # See knot.conf(5) manual page for documentation.
>
> server:
> listen: [ 127.0.0.1@53, ::1@53 ]
>
> keystore:
>
> # KSK
> - id: a1a1
> backend: pkcs11
> config: "pkcs11:token=testKSK_1;pin-value=5678
> /usr/local/lib/softhsm/libsofthsm2.so"
>
> # ZSK
> - id: a1b1
> backend: pkcs11
> config: "pkcs11:token=testKSK_1;pin-value=5678
> /usr/local/lib/softhsm/libsofthsm2.so"
>
> policy:
> - id: manual
> manual: on
> keystore: a1b1
> nsec3: on
> nsec3-iterations: 16
> nsec3-opt-out: on
> nsec3-salt-length: 8
>
> zone:
> - domain: example.com
> dnssec-signing: on
> dnssec-policy: manual
> zonefile-load: difference
> file: example.com.zone
> storage: /etc/knot/
>
> log:
> - target: syslog
> any: debug
>
> ###
>
> [root@centos-test2 ~]# keymgr -c /etc/knot/knot.conf example.com.
> import-pkcs11 a1b1 algorithm=RSASHA256 size=2048 ksk=no
> created=20181126090000 publish=20181126090000 retire=+10mo remove=+1y
> Failed to initialize KASP (not implemented)
>
> I tried with the -d parameter as well, but I got:
>
> keymgr -d /var/lib/knot/keys/ example.com. import-pkcs11 a1b1
> algorithm=RSASHA256 size=2048 ksk=no created=20181126090000
> publish=20181126090000 retire=+10mo remove=+1y
> Error (not exists)
>
> I read in former Knot versions about the "keymgr init" command, but
> it is not implemented anymore.
> 
> Do you have another idea what's going wrong?
>
> Thanks a lot for your great help :)
>
> best regards
>
>
> Von: "Mark Karpilovskij" <mark.karpilovskij(a)nic.cz>
> An: "Christian Petrasch" <petrasch(a)denic.de>
> Kopie: knot-dns-users(a)lists.nic.cz
> Datum: 26.11.2018 11:56
> Betreff: Re: [knot-dns-users] Problem to import key material of
> softhsm into knot
>
> -------------------------
>
> Hi Christian,
>
> I have checked out your Knot configuration, and I suspect that the
> issue might be a missing keystore option in the policy section of your
> configuration. Try specifying the ID of the PKCS11 keystore in the
> policy section as follows:
>
> keystore:
> - id: a1a1
> backend: pkcs11
> config: "pkcs11:token=testKSK_1;pin-value=5678
> /usr/local/lib/softhsm/libsofthsm2.so"
>
> - id: a1b1
> backend: pkcs11
> config: "pkcs11:token=testKSK_1;pin-value=5678
> /usr/local/lib/softhsm/libsofthsm2.so"
>
> policy:
> - id: manual
> manual: on
> keystore: a1a1
> nsec3: on
> nsec3-iterations: 16
> nsec3-opt-out: on
> nsec3-salt-length: 8
>
> Let us know if this helps.
>
> Best regards,
>
> Mark
>
> On 26. 11. 18 9:49, Christian Petrasch wrote:
> Hi @ all,
>
> we are testing with SoftHSM 2.5 and Knot 2.7.4.
>
> I am trying to import the keys inside SoftHSM into keymgr in order to
> sign an example zone with them.
>
> The key material is shown via pkcs11-tool:
>
> [root@centos-test2 ~]# pkcs11-tool --login --list-objects --module
> /usr/local/lib/softhsm/libsofthsm2.so
>
> Using slot 0 with a present token (0x285d1c08)
> Logging in to "testKSK_1".
> Please enter User PIN:
> Private Key Object; RSA
> label: testKSK_1
> ID: a1a1
> Usage: decrypt, sign, unwrap
> Public Key Object; RSA 1024 bits
> label: testZSK_1
> ID: a1b1
> Usage: encrypt, verify, wrap
> Private Key Object; RSA
> label: testZSK_1
> ID: a1b1
> Usage: decrypt, sign, unwrap
> Public Key Object; RSA 2048 bits
> label: testKSK_1
> ID: a1a1
> Usage: encrypt, verify, wrap
>
> ######
>
> The Knot config is:
>
> [root@centos-test2 ~]# cat /etc/knot/knot.conf
> # See knot.conf(5) manual page for documentation.
>
> server:
> listen: [ 127.0.0.1@53, ::1@53 ]
>
> keystore:
> - id: a1a1
> backend: pkcs11
> config: "pkcs11:token=testKSK_1;pin-value=5678
> /usr/local/lib/softhsm/libsofthsm2.so"
>
> - id: a1b1
> backend: pkcs11
> config: "pkcs11:token=testKSK_1;pin-value=5678
> /usr/local/lib/softhsm/libsofthsm2.so"
>
> policy:
> - id: manual
> manual: on
> nsec3: on
> nsec3-iterations: 16
> nsec3-opt-out: on
> nsec3-salt-length: 8
>
> zone:
> - domain: example.com
> dnssec-signing: on
> dnssec-policy: manual
> zonefile-load: difference
> file: example.com.zone
> storage: /etc/knot/
>
> log:
> - target: syslog
> any: debug
>
> ###################
>
> And when I try to import the key into keymgr, I run the command:
>
> [root@centos-test2 ~]# keymgr -c /etc/knot/knot.conf example.com.
> import-pkcs11 a1a1 algorithm=RSASHA256 size=2048 ksk=yes
> created=20181126090000 publish=20181126090000 retire=+10mo remove=+1y
> Error (not exists)
>
> ###
>
> I don't know how I can fix this; maybe somebody can help me? The
> documentation of Knot is very good, but at this point it is a little
> bit insufficient. Does anybody have examples for this?
>
> Thanks a lot in advance for the help..
>
> best regards
>
Hi @ all,
we are testing with SoftHSM 2.5 and Knot 2.7.4.
I am trying to import the keys inside SoftHSM into keymgr in order to sign
an example zone with them.
The key material is shown via pkcs11-tool:
[root@centos-test2 ~]# pkcs11-tool --login --list-objects --module
/usr/local/lib/softhsm/libsofthsm2.so
Using slot 0 with a present token (0x285d1c08)
Logging in to "testKSK_1".
Please enter User PIN:
Private Key Object; RSA
label: testKSK_1
ID: a1a1
Usage: decrypt, sign, unwrap
Public Key Object; RSA 1024 bits
label: testZSK_1
ID: a1b1
Usage: encrypt, verify, wrap
Private Key Object; RSA
label: testZSK_1
ID: a1b1
Usage: decrypt, sign, unwrap
Public Key Object; RSA 2048 bits
label: testKSK_1
ID: a1a1
Usage: encrypt, verify, wrap
######
The Knot config is:
[root@centos-test2 ~]# cat /etc/knot/knot.conf
# See knot.conf(5) manual page for documentation.
server:
listen: [ 127.0.0.1@53, ::1@53 ]
keystore:
- id: a1a1
backend: pkcs11
config: "pkcs11:token=testKSK_1;pin-value=5678
/usr/local/lib/softhsm/libsofthsm2.so"
- id: a1b1
backend: pkcs11
config: "pkcs11:token=testKSK_1;pin-value=5678
/usr/local/lib/softhsm/libsofthsm2.so"
policy:
- id: manual
manual: on
nsec3: on
nsec3-iterations: 16
nsec3-opt-out: on
nsec3-salt-length: 8
zone:
- domain: example.com
dnssec-signing: on
dnssec-policy: manual
zonefile-load: difference
file: example.com.zone
storage: /etc/knot/
log:
- target: syslog
any: debug
###################
And when I try to import the key into keymgr, I run the command:
[root@centos-test2 ~]# keymgr -c /etc/knot/knot.conf example.com.
import-pkcs11 a1a1 algorithm=RSASHA256 size=2048 ksk=yes
created=20181126090000 publish=20181126090000 retire=+10mo remove=+1y
Error (not exists)
###
I don't know how I can fix this; maybe somebody can help me? The
documentation of Knot is very good, but at this point it is a little bit
insufficient. Does anybody have examples for this?
Thanks a lot in advance for the help..
best regards
--
Christian Petrasch
DENIC eG
Hello,
first of all I would like to express many thanks to the CZ.NIC DNS team
for an amazing piece of software which the KnotDNS in my view surely is.
Well, to my question. I run two instances of Knot 2.6.9 in a
master-slave configuration which serve a couple of zones. The zones are
DNSSEC-signed at the master with automated key management. This works
excellently, even with the KSK rotation (I am under the .cz TLD). However, I
also have a subdomain (i.e., a 3rd-level domain) with synthesized records.
The only way I was able to find to enable DNSSEC for it is:
- using mod-onlinesign on both the master and slave,
- generating a key externally (with bind-utils) and importing it into
KASP on both servers,
- configuring manual key policy,
- adding the appropriate DS record into the parent zone.
This seems to work fine, all the validation tests pass.
The question is: Is there a better way to achieve the goal (especially
with new features like automated key rotation in online signing in
version 2.7 in mind), or what is the recommended practice in such a
situation?
Thanks in advance for any suggestion or advice,
Have a nice day,
Oto
Hi,
is there any Knot DNS configuration parameter to define the salt string
for NSEC3?
I have :
nsec3: BOOL
nsec3-iterations: INT
nsec3-opt-out: BOOL
nsec3-salt-length: INT
but nothing to configure the string itself.
Does anybody have an idea?
Any help would be really appreciated..
thanks a lot
best regards
--
Christian Petrasch
DENIC eG
Feedback much appreciated. Thanks.
-----Original Message-----
From: "" [daniel.salzman(a)nic.cz]
Date: 11/08/2018 03:43 AM
To: "Full Name" <nuncestbibendum(a)excite.com>
CC: knot-dns-users(a)lists.nic.cz
Subject: Re: [knot-dns-users] Dealing with limited capacity key storage
On 2018-11-08 01:04, Full Name wrote:
> By default, Knot will use the local file system as its key storage. I
> believe that, when using the SoftHSM backend, the same is true. For
> most practical purposes, the implication is that the key storage has
> an unlimited capacity for keys. Now when using an actual HSM, that is
> not true - most HSMs will, in general, have a relatively modest key
> storage capacity, especially when compared to that of a local
> filesystem.
>
Yes, that is correct.
> Does Knot have capabilities to deal with such situations? If
> I need to have 150 keys in my key storage, but my key storage can't
> hold more than 100, how does Knot deal with this? Conceptually, one
> only has to wrap the keys in the HSM appropriately and dump them to
> disk - where they will remain inaccessible to anybody but the HSM.
> After this, one can generate (or unwrap) more keys, and use them as
> necessary. Is this something that Knot can already do?
The only solution with Knot DNS is using shared keys
https://www.knot-dns.cz/docs/2.7/singlehtml/index.html#ksk-shared.
Also Single-Type Signing Scheme could help to reduce the number of keys
https://www.knot-dns.cz/docs/2.7/singlehtml/index.html#single-type-signing.
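Combined, such a policy might be sketched as follows (option names as in the linked 2.7 documentation; the id is a placeholder):

```yaml
policy:
  - id: hsm-friendly
    # Reuse one KSK across all zones referencing this policy.
    ksk-shared: on
    # One key per zone instead of a KSK/ZSK pair.
    single-type-signing: on
```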
Daniel
Hi,
I am working on a python library based on python/libknot/control.py.
I'm at the point of adding/removing resource records in a configured zone.
I have tried a lot, but got no positive result:
"invalid parameter (data: COMMAND = b'zone-set', ERROR = b'invalid parameter', ZONE = b'domain.tld.', OWNER = b'www.domain.tld.', TYPE = b'A', DATA = b'127.0.0.1')"
The full test script debug output:
DEBUG: _openSocket
DEBUG: _zone-begin
DEBUG: _cmd: {'cmd': 'zone-begin'}
DEBUG: _cmd: {'cmd': 'zone-set', 'zone': 'domain.tld.', 'owner': 'www.domain.tld.', 'rtype': 'A', 'data': '127.0.0.1'}
DEBUG: _zone-abort
DEBUG: _cmd: {'cmd': 'zone-abort'}
invalid parameter (data: COMMAND = b'zone-set', ERROR = b'invalid parameter', ZONE = b'domain.tld.', OWNER = b'www.domain.tld.', TYPE = b'A', DATA = b'127.0.0.1')
DEBUG: _zone-begin
DEBUG: _cmd: {'cmd': 'zone-begin'}
DEBUG: _cmd: {'cmd': 'zone-get', 'zone': 'domain.tld.', 'owner': 'ns1.domain.tld.'}
DEBUG: _zone-commit
DEBUG: _cmd: {'cmd': 'zone-commit'}
resp: {
"domain.tld.": {
"ns1.domain.tld.": {
"A": {
"ttl": "86400",
"data": [
"192.168.120.211"
]
}
}
}
}
DEBUG: _closeSocket
The "_cmd" call uses libknot.control.send_block[1] internally; the following
"{…}" holds the function parameters.
[1] https://gitlab.labs.nic.cz/knot/knot-dns/blob/master/python/libknot/control…
It took some time to discover "owner" as the query filter for "zone-get", which
wasn't that obvious.
Now I'm wondering how to set the parameters for adding a resource record to a
zone.
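In case it helps others, here is a sketch of how the zone-set arguments could be assembled. The helper below is hypothetical (not part of libknot), and the guess — not confirmed on the list — is that a TTL field is also required when adding a brand-new RRset, mirroring `knotc zone-set <zone> <owner> [<ttl>] <type> <rdata>`:

```python
def zone_set_args(zone, owner, ttl, rtype, data):
    """Build keyword arguments for libknot's KnotCtl.send_block().

    All values are passed as strings; 'ttl' is included because adding
    a new RRset without one may be what triggers "invalid parameter".
    """
    return {
        "cmd": "zone-set",
        "zone": zone,
        "owner": owner,
        "ttl": str(ttl),
        "rtype": rtype,
        "data": data,
    }

args = zone_set_args("domain.tld.", "www.domain.tld.", 3600, "A", "127.0.0.1")

# Against a live knotd it would then be used roughly like this
# (hypothetical socket path; not runnable without a daemon):
#
#   import libknot.control
#   ctl = libknot.control.KnotCtl()
#   ctl.connect("/var/run/knot/knot.sock")
#   ctl.send_block(cmd="zone-begin", zone="domain.tld.")
#   ctl.receive_block()
#   ctl.send_block(**args)
#   ctl.receive_block()
#   ctl.send_block(cmd="zone-commit", zone="domain.tld.")
#   ctl.receive_block()
```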
- frank
--
Frank Matthieß Stapenhorster Straße 42b, DE 33615 Bielefeld
Mail: frank.matthiess(a)virtion.de
GnuPG: 9F81 BD57 C898 6059 86AA 0E9B 6B23 DE93 01BB 63D1
By default, Knot will use the local file system as its key storage. I believe that, when using the SoftHSM backend, the same is true. For most practical purposes, the implication is that the key storage has an unlimited capacity for keys. Now when using an actual HSM, that is not true - most HSMs will, in general, have a relatively modest key storage capacity, especially when compared to that of a local filesystem.
Does Knot have capabilities to deal with such situations? If I need to have 150 keys in my key storage, but my key storage can't hold more than 100, how does Knot deal with this? Conceptually, one only has to wrap the keys in the HSM appropriately and dump them to disk - where they will remain inaccessible to anybody but the HSM. After this, one can generate (or unwrap) more keys, and use them as necessary. Is this something that Knot can already do?
I found out what the heck it was, at last. First, SoftHSM was correctly initialized - no problem there. The issue was that the knot.conf file was the following (the relevant portions alone):
keystore:
- id: SoftHSM
backend: pkcs11
config: "pkcs11:model=SoftHSM%20v2;manufacturer=SoftHSM%20project;serial=1eb6899f1f278686;token=SoftHSMToken;pin-value=SoftHSMPIN /usr/lib/softhsm/libsofthsm2.so"
policy:
- id: manual
manual: on
The policy section was missing a keystore entry, which caused Knot to fall back to the default key store. After I added
keystore: SoftHSM
everything worked as expected.
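Putting it together, the working sections end up as follows (same values as above, with the keystore reference added to the policy):

```yaml
keystore:
  - id: SoftHSM
    backend: pkcs11
    config: "pkcs11:model=SoftHSM%20v2;manufacturer=SoftHSM%20project;serial=1eb6899f1f278686;token=SoftHSMToken;pin-value=SoftHSMPIN /usr/lib/softhsm/libsofthsm2.so"

policy:
  - id: manual
    manual: on
    # Without this reference, Knot falls back to the default PEM keystore.
    keystore: SoftHSM
```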
-----Original Message-----
From: "" [daniel.salzman(a)nic.cz]
Date: 11/07/2018 08:43 AM
To: "Full Name" <nuncestbibendum(a)excite.com>
CC: knot-dns-users(a)lists.nic.cz
Subject: Re: [knot-dns-users] Knot refusing to use the PKCS #11 interface
On 2018-11-05 17:47, Full Name wrote:
> I am sorry; I made a mistake when pasting the knot.conf contents here
> - I am using the module path all right, and it makes no difference. In
> fact, the issue seems to be with the knot.conf parser - be it because
> I am doing things incorrectly myself, or because it is broken. I have
> noticed the same in Knot 2.6.9 and 2.7.3.
>
So what is your exact configuration of the keystore?
> Can anyone throw some light on this? What else has one got to do to
> get Knot to use the PKCS #11 interface for the key store? I have the
> necessary library (softHSM) plus the correct data in knot.conf. But
> the keymgr function is not using the PKCS #11 interface. What am I
> missing?
>
I think you have to learn how to initialize a PKCS #11 token first.
Try something like:
export SOFTHSM2_CONF=/path/to/softhsm.conf
softhsm2-util --init-token --slot=0 --label="knot" --pin=1234
--so-pin=12345 --module=/usr/lib/softhsm/libsofthsm2.so
and follow https://www.knot-dns.cz/docs/2.7/singlehtml/index.html#config
...
> I provided some debugging traces in a separate message to
> illustrate the issue. I'll be happy to furnish more data, if somebody
> knowledgeable on the Knot internals lets me know what traces to
> provide. I really need to be able to get Knot to use the PKCS #11
> interface.
>
>
>
> -----Original Message-----
> From: "" [daniel.salzman(a)nic.cz]
> Date: 11/02/2018 05:39 AM
> To: "Full Name" <nuncestbibendum(a)excite.com>
> CC: knot-dns-users(a)lists.nic.cz
> Subject: Re: [knot-dns-users] Knot refusing to use the PKCS #11
> interface
>
> Hello Full Name,
>
> The pkcs11 keystore configuration should have the form of
> "<pkcs11-url> <module-path>". I will improve the documentation.
>
> Daniel
>
> On 2018-11-01 18:04, Full Name wrote:
>> I have a knot.conf file with the following keystore section:
>>
>> keystore:
>> - id: TheBackend
>> backend: pkcs11
>> config:
>> "pkcs11:model=p11-kit-trust;manufacturer=PKCS%2311%20Kit;serial=1;token=System
>> Trust"
>>
>> where the value assigned to the config keyword is obtained from the
>> output from the GnuTLS p11tool command:
>>
>> $ p11tool --list-tokens
>> Token 0:
>> URL:
>> pkcs11:model=p11-kit-trust;manufacturer=PKCS%2311%20Kit;serial=1;token=System%20Trust
>> Label: System Trust
>> Type: Trust module
>> Flags: uPIN uninitialized
>> Manufacturer: PKCS#11 Kit
>> Model: p11-kit-trust
>> Serial: 1
>> Module: p11-kit-trust.so
>>
>> Also in knot.conf I have
>>
>> policy:
>> - id: manual
>> manual: on
>>
>> zone:
>> - domain: example.com
>> storage: /var/lib/knot/zones/
>> file: example.com.zone
>> dnssec-signing: on
>> dnssec-policy: manual
>>
>> With all this in place, I launched the following from the CLI:
>>
>> # keymgr example.com. generate algorithm=ECDSAP256SHA256
>>
>> This does not seem to be using the PKCS #11 library, as instructed in
>> knot.conf. I debugged the command above and noticed that, at some point
>> before the signing operation itself is addressed, the keystore_load
>> function from the Knot code base is invoked. This function takes
>> several arguments, the second of which is a backend identifier.
>> According to the keystore entry in knot.conf, this should be the PKCS
>> #11 identifier KEYSTORE_BACKEND_PKCS11. However, what I see with the
>> debugger is that the backend argument is, in fact,
>> KEYSTORE_BACKEND_PEM.
>>
>> Even more intriguing (to somebody unfamiliar with the internal
>> workings of Knot, at least) is that, before keystore_load is invoked,
>> the check_keystore function is invoked and it evaluates the following
>> conditional:
>>
>> if (conf_opt(&backend) == KEYSTORE_BACKEND_PKCS11 &&
>> conf_str(&config) == NULL)
>>
>> This conditional clearly succeeds - i.e. at that point the backend has
>> been correctly identified as PKCS #11. But, like I said above, when
>> keystore_load gets called later on, such is not the case any longer.
>>
>> Any idea as to what is going on here? Why is PKCS #11 not being used?
>> In the config string above in knot.conf I tried replacing %23 and %20
>> with # and the space character, respectively. It made no difference.
>> This all is happening with Knot 2.7.3.
I am sorry; I made a mistake when pasting the knot.conf contents here - I am using the module path all right, and it makes no difference. In fact, the issue seems to be with the knot.conf parser - be it because I am doing things incorrectly myself, or because it is broken. I have noticed the same in Knot 2.6.9 and 2.7.3.
Can anyone throw some light on this? What else has one got to do to get Knot to use the PKCS #11 interface for the key store? I have the necessary library (softHSM) plus the correct data in knot.conf. But the keymgr function is not using the PKCS #11 interface. What am I missing?
I provided some debugging traces in a separate message to illustrate the issue. I'll be happy to furnish more data, if somebody knowledgeable on the Knot internals lets me know what traces to provide. I really need to be able to get Knot to use the PKCS #11 interface.
-----Original Message-----
From: "" [daniel.salzman(a)nic.cz]
Date: 11/02/2018 05:39 AM
To: "Full Name" <nuncestbibendum(a)excite.com>
CC: knot-dns-users(a)lists.nic.cz
Subject: Re: [knot-dns-users] Knot refusing to use the PKCS #11 interface
Hello Full Name,
The pkcs11 keystore configuration should have the form of
"<pkcs11-url> <module-path>". I will improve the documentation.
Daniel
On 2018-11-01 18:04, Full Name wrote:
> I have a knot.conf file with the following keystore section:
>
> keystore:
> - id: TheBackend
> backend: pkcs11
> config:
> "pkcs11:model=p11-kit-trust;manufacturer=PKCS%2311%20Kit;serial=1;token=System
> Trust"
>
> where the value assigned to the config keyword is obtained from the
> output from the GnuTLS p11tool command:
>
> $ p11tool --list-tokens
> Token 0:
> URL:
> pkcs11:model=p11-kit-trust;manufacturer=PKCS%2311%20Kit;serial=1;token=System%20Trust
> Label: System Trust
> Type: Trust module
> Flags: uPIN uninitialized
> Manufacturer: PKCS#11 Kit
> Model: p11-kit-trust
> Serial: 1
> Module: p11-kit-trust.so
>
> Also in knot.conf I have
>
> policy:
> - id: manual
> manual: on
>
> zone:
> - domain: example.com
> storage: /var/lib/knot/zones/
> file: example.com.zone
> dnssec-signing: on
> dnssec-policy: manual
>
> With all this in place, I launched the following from the CLI:
>
> # keymgr example.com. generate algorithm=ECDSAP256SHA256
>
> This does not seem to be using the PKCS #11 library, as instructed in
> knot.conf. I debugged the command above and noticed that, at some point
> before the signing operation itself is addressed, the keystore_load
> function from the Knot code base is invoked. This function takes
> several arguments, the second of which is a backend identifier.
> According to the keystore entry in knot.conf, this should be the PKCS
> #11 identifier KEYSTORE_BACKEND_PKCS11. However, what I see with the
> debugger is that the backend argument is, in fact,
> KEYSTORE_BACKEND_PEM.
>
> Even more intriguing (to somebody unfamiliar with the internal
> workings of Knot, at least) is that, before keystore_load is invoked,
> the check_keystore function is invoked and it evaluates the following
> conditional:
>
> if (conf_opt(&backend) == KEYSTORE_BACKEND_PKCS11 &&
> conf_str(&config) == NULL)
>
> This conditional clearly succeeds - i.e. at that point the backend has
> been correctly identified as PKCS #11. But, like I said above, when
> keystore_load gets called later on, such is not the case any longer.
>
> Any idea as to what is going on here? Why is PKCS #11 not being used?
> In the config string above in knot.conf I tried replacing %23 and %20
> with # and the space character, respectively. It made no difference.
> This all is happening with Knot 2.7.3.
Hi,
I have the issue that the example won't work, because there is no "libknot.so"
on my system:
root@f-dns4:~# ldconfig -v 2>/dev/null | grep libknot
libknot.so.8 -> libknot.so.8.0.0
root@f-dns4:~# lsb_release -a; dpkg-query -W knot
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
root@f-dns4:~# dpkg-query -W knot
knot 2.7.3-1~ubuntu18.04.1ppa2
I have made a patch for that, adding a search order if no path is given.
Hopefully this helps to solve the issue.
Greetings
Frank
--
Frank Matthieß Stapenhorster Straße 42b, DE 33615 Bielefeld
Mail: frank.matthiess(a)virtion.de
GnuPG: 9F81 BD57 C898 6059 86AA 0E9B 6B23 DE93 01BB 63D1
Hi,
I'm trying to spin a nice transaction model around libknot, intended for
better automation, as attached. I'm running into problems.
First and foremost, a lack of understanding of what should go into which
fields when doing commands other than those in the "stats" demos. Are there
more advanced explanations of what goes into which fields? I tried
looking into knotc, but it is rather long. Perhaps there is a log
somewhere with the stuff going into knotd?
Secondly, the model by which transactions abort is not always logical.
Like a commit() that fails but does not turn into an abort() when
semantic errors are found. I will run zone-check beforehand, to ensure
that this does not happen.
Thirdly, I found that the library freezes when supplied with unknown
commands or apparently funny data...
>>> import knotcontrol
>>> kc = knotcontrol.KnotControl()
>>> kc = knotcontrol.KnotControl ()
>>> kc.knot (cmd='stats')
KnotControl send {'cmd': 'stats'}
KnotControl recv {'server': {'zone-count': ['1']}}
{'server': {'zone-count': ['1']}}
>>> kc.knot (cmd='zone-stats',item='vanrein.org')
KnotControl send {'item': 'vanrein.org', 'cmd': 'zone-stats'}
KnotControl recv {}
{}
>>> kc.knot (cmd='zone-recv',item='vanrein.org')
KnotControl send {'item': 'vanrein.org', 'cmd': 'zone-recv'}
...and at this point it freezes, even to ^C -- before and after this
sequence, "kdig @localhost vanrein.org soa" worked to query the zone in
Knot DNS.
Sorry to be testing this early :-S but I'm eager to use it this way.
-Rick
Hello,
could you please send me a short guide on how to delegate a part of a reverse
zone (a classless subnet) to a different NS.
I have a zone, i.e. 0.10.in-addr.arpa.zone,
where I would like to delegate a part of the zone, 10.0.0.104/29
(255.255.255.248), to a different NS. It includes 8 IP addresses
(10.0.0.104, 10.0.0.105, 10.0.0.106 ... 10.0.0.111).
Then I create a record in the mentioned zone like
104/29.0.0.10.in-addr.arpa. 300 NS my.test.server.
I tried to understand the syntax in RFC 2317 (https://tools.ietf.org/html/
rfc2317), but it seems I have implemented it incorrectly.
When I try to "dig" the records, I have to query the address in a different
form: not like dig -x 10.0.0.105 but like dig -x 10.0.0.104/29.105
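For what it's worth, RFC 2317 also requires CNAME records in the parent zone that map each ordinary reverse name into the delegated subzone; a sketch, assuming the /24 reverse zone is 0.0.10.in-addr.arpa (the PTR target is a placeholder):

```
; in the parent zone 0.0.10.in-addr.arpa.
104/29          IN NS    my.test.server.
104             IN CNAME 104.104/29.0.0.10.in-addr.arpa.
105             IN CNAME 105.104/29.0.0.10.in-addr.arpa.
; ... one CNAME per address, up to 111

; in the delegated zone 104/29.0.0.10.in-addr.arpa. on my.test.server.
105             IN PTR   host105.example.net.
```

With the CNAMEs in place, a plain dig -x 10.0.0.105 follows the chain into the delegated zone automatically.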
Many thanks for the clarification,
best regards
Milan.
Hi,
I am following Daniel's advice
> Consider controlling the server directly using our
> python wrapper:
in trying to get more transactional rigour through the Python library
from Git. The library certainly is easier to integrate!
But I think I hit upon a bug. The code
def __init__ (self, socketpath='/var/run/knot/knot.sock'):
"""Connect to knotd
"""
self.ctl = None
self.txn_conf = False
self.txn_zone = set ()
self.txn_success = True
self.ctl = libknot.control.KnotCtl ()
self.ctl.connect (socketpath)
self.ctl.set_timeout (3600)
seems to not set any timeout; it always cuts off after 5s sleep, and
never after 4s sleep. This is true for the first command as well as
later ones. Same when setting the timeout to 7200, so it's not a ms/s
issue.
This code is almost literally the same as in StatsServer, except that
demo runs too fast to detect timeout problems. I ran into it when
working interactively.
Ideally, I would want to disable connection timeouts altogether, so it
is possible to abort any open transactions upon whatever cause for
termination there might be; this should benefit stability.
-Rick
Hello,
I want to switch to KnotDNS for my private zone (bt909.de). I've installed the knot2 port as a binary package within a FreeBSD jail. I configured my zones and zone transfer works fine, but KnotDNS doesn't answer any queries. I have an ACL for the zone transfer; is there anything else I need to do so that Knot answers my queries? Knot is running and the port is open, but I found nothing in the logs; Knot just does nothing and my queries run into timeouts.
I tried NSD in the same jail, which works fine, but I want to use KnotDNS.
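One common cause (an assumption here, since the configuration is not quoted) is a missing listen directive in knot.conf: Knot only answers queries on addresses it is explicitly told to bind, for example:

```
server:
    listen: [ 0.0.0.0@53, ::@53 ]
```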
Regards, Thomas
--
Thomas Belian; https://bt909.de
I have a knot.conf file with the following keystore section:
keystore:
- id: TheBackend
backend: pkcs11
config: "pkcs11:model=p11-kit-trust;manufacturer=PKCS%2311%20Kit;serial=1;token=System Trust"
where the value assigned to the config keyword is obtained from the output of the GnuTLS p11tool command:
$ p11tool --list-tokens
Token 0:
URL: pkcs11:model=p11-kit-trust;manufacturer=PKCS%2311%20Kit;serial=1;token=System%20Trust
Label: System Trust
Type: Trust module
Flags: uPIN uninitialized
Manufacturer: PKCS#11 Kit
Model: p11-kit-trust
Serial: 1
Module: p11-kit-trust.so
Also in knot.conf I have
policy:
- id: manual
manual: on
zone:
- domain: example.com
storage: /var/lib/knot/zones/
file: example.com.zone
dnssec-signing: on
dnssec-policy: manual
With all this in place, I launched the following from the CLI:
# keymgr example.com. generate algorithm=ECDSAP256SHA256
This does not seem to be using the PKCS #11 library, as instructed in knot.conf. I debugged the command above and noticed that, at some point before the signing operation itself is addressed, the keystore_load function from the Knot code base is invoked. This function takes several arguments, the second of which is a backend identifier. According to the keystore entry in knot.conf, this should be the PKCS #11 identifier KEYSTORE_BACKEND_PKCS11. However, what I see with the debugger is that the backend argument is, in fact, KEYSTORE_BACKEND_PEM.
Even more intriguing (to somebody unfamiliar with the internal workings of Knot, at least) is that, before keystore_load is invoked, the check_keystore function is invoked and it evaluates the following conditional:
if (conf_opt(&backend) == KEYSTORE_BACKEND_PKCS11 && conf_str(&config) == NULL)
This conditional clearly succeeds - i.e. at that point the backend has been correctly identified as PKCS #11. But, like I said above, when keystore_load gets called later on, such is not the case any longer.
Any idea as to what is going on here? Why is PKCS #11 not being used? In the config string above in knot.conf I tried replacing %23 and %20 with # and the space character, respectively. It made no difference. This all is happening with Knot 2.7.3.
More info on this:
The keymgr command invokes first the init_confile function. This results in the following backtrace:
#0 conf_db_get (conf=0x6bfcc0, txn=0x7fffffffe1b0,
key0=0x47b76a "\bkeystore", key1=0x47b761 "\abackend",
id=0x6c6648 "TheBackend", id_len=11, data=0x7fffffffdf70)
at knot/conf/confdb.c:704
#1 0x0000000000410018 in conf_rawid_get_txn (conf=0x6bfcc0,
txn=0x7fffffffe1b0, key0_name=0x47b76a "\bkeystore",
key1_name=0x47b761 "\abackend", id=0x6c6648 "TheBackend", id_len=11)
at knot/conf/conf.c:78
#2 0x0000000000417f4a in check_keystore (args=0x7fffffffe090)
at knot/conf/tools.c:268
#3 0x000000000041777d in conf_exec_callbacks (args=0x7fffffffe090)
at knot/conf/tools.c:62
#4 0x000000000040ebaa in finalize_previous_section (conf=0x6bfcc0,
txn=0x7fffffffe1b0, parser=0x6ed560, ctx=0x6c6620) at knot/conf/base.c:506
#5 0x000000000040ee8d in conf_parse (conf=0x6bfcc0, txn=0x7fffffffe1b0,
input=0x4783c7 "/etc/knot/knot.conf", is_file=true)
at knot/conf/base.c:592
#6 0x000000000040f128 in conf_import (conf=0x6bfcc0,
input=0x4783c7 "/etc/knot/knot.conf", is_file=true)
at knot/conf/base.c:669
#7 0x000000000040b677 in init_confile (
confile=0x4783c7 "/etc/knot/knot.conf")
at utils/keymgr/main.c:240
#8 0x000000000040bb70 in main (argc=4, argv=0x7fffffffe488)
at utils/keymgr/main.c:349
A few lines later, keymgr invokes key_command. I get the following backtrace:
#0 conf_db_get (conf=0x6bfcc0, txn=0x6bfce0, key0=0x47c042 "\bkeystore",
key1=0x47c04c "\abackend", id=0x0, id_len=0, data=0x7fffffffdf90)
at knot/conf/confdb.c:705
#1 0x00000000004101a3 in conf_id_get_txn (conf=0x6bfcc0, txn=0x6bfce0,
key0_name=0x47c042 "\bkeystore", key1_name=0x47c04c "\abackend",
id=0x7fffffffe090) at knot/conf/conf.c:109
#2 0x0000000000419117 in conf_id_get (conf=0x6bfcc0,
key0_name=0x47c042 "\bkeystore", key1_name=0x47c04c "\abackend",
id=0x7fffffffe090) at ./knot/conf/conf.h:138
#3 0x000000000041a516 in kdnssec_ctx_init (conf=0x6bfcc0, ctx=0x7fffffffe180,
zone_name=0x6a81d0 "\aexample\003com", from_module=0x0)
at knot/dnssec/context.c:169
#4 0x000000000040aee4 in key_command (argc=3, argv=0x7fffffffe490, optind=1)
at utils/keymgr/main.c:113
#5 0x000000000040bba9 in main (argc=4, argv=0x7fffffffe488)
at utils/keymgr/main.c:359
In both cases conf_db_get is eventually invoked. With a big difference: the first time, the information concerning the keystore id is present, as the value of id - namely, "TheBackend". With this value, conf_db_get returns KNOT_EOK, and the backend id is retrieved as KEYSTORE_BACKEND_PKCS11.
The second time, however, the value of this argument is just the NULL pointer. This causes conf_db_get to return KNOT_ENOENT, and the value of the backend is therefore eventually set to the default - KEYSTORE_BACKEND_PEM.
What am I doing wrong? What is causing the second invocation of conf_db_get to receive an id argument set to NULL? Is this the way it is supposed to work, or is it a bug in the Knot 2.7.3 code? If it is not, what do I have to do in order to get Knot to use the PKCS #11 interface? By the way, I forgot to mention in my previous email that Knot was built with PKCS #11 support all right.
Hi Sebastian,
1) That's OK. You don't need to worry about that warning unless you edit
the zonefile on the signer manually. You can also consider a zonefile-less
signer, either completely headless (needs AXFR after each daemon start)
or with the zone stored in the journal (needs some thought regarding
journal capacity policies). Check the "zonefile-load", "journal-content",
"max-journal-db-size" and "max-journal-usage" options in the config.
2) No, "discontinuity in changes history" is not expected. Could you
please describe what you did before the warning appeared, with
longer snippets of the log? In any case, there is no need to be scared
of the journal getting full once you read the documentation ;)
https://www.knot-dns.cz/docs/2.7/singlehtml/index.html#journal-behaviour
BR,
Libor
Dne 29.10.18 v 13:07 Sebastian Wiesinger napsal(a):
> Right now I have two zone types for my knot test setup, one where knot
> is doing DNSSEC signing as a slave (AXFR in -> sign -> AXFR out) and
> one where the knot is the master for the zone and zone data is coming
> out of a git repository and gets signed.
>
> Reading older threads on this ML and browsing the configuration has
> led me to the following configuration and I wanted to make sure this
> is actually supported or if there is a best practice that is
> different.
>
>
> 1) Inline DNSSEC signing for slave zone.
>
> zone:
> - domain: example.com
> serial-policy: unixtime
> storage: "/var/lib/knot/slave"
> file: "%s.zone"
> zonefile-load: difference
> dnssec-signing: on
> dnssec-policy: rsa-de
> master: ns1_signer
> notify: ns1
> acl: acl_ns1
>
> policy:
> - id: rsa-de
> algorithm: RSASHA256
> ksk-size: 2048
> zsk-size: 1024
> ksk-submission: tld_de
>
>
> This seems to work fine, zone gets transferred from master (with low
> serial), signed and with a new unixtime serial transferred out again.
> I'm not sure if "zonefile-load: difference " makes any difference for
> a slave zone but without it I get warnings about possibly malformed
> IXFRs...
>
> 2) Inline DNSSEC for master zone from git:
>
> zone:
> - domain: dnssec-test.intern
> serial-policy: unixtime
> storage: "/var/lib/knot/master"
> file: "%s.zone"
> zonefile-sync: -1
> dnssec-signing: on
> dnssec-policy: rsa
> acl: acl_ns1
> zonefile-load: difference-no-serial
>
> policy:
> - id: rsa
> algorithm: RSASHA256
> ksk-size: 2048
> zsk-size: 1024
>
>
> This also works but I get warnings like this:
>
> [dnssec-test.intern.] journal, discontinuity in changes history
> (1540307365 -> 28), dropping older changesets
>
> Is this expected? Also I read in older threads that this might fill
> up the journal. Is that still the case?
>
>
> Best Regards
>
> Sebastian
>
Hi,
You/Daniel pointed me to the Python control library, but I cannot find
it in the 2.7.3 packages -- is that forgotten, or am I missing it?
Thanks,
-Rick
Hi,
I have a question about DNSSEC KSK rollover: how to obtain the information
about the new key that needs to be submitted to the parent zone.
I'm currently using these steps:
- running "keymgr example.org list"
- manually identifying the new key using the parameters "ksk=yes" and having a
look at the created, publish, ready and active parameters. The new key always
seems to be the one with active=0 and I also check the dates of the other
parameters for plausibility. I then note the tag of the identified key.
- using "keymgr example.org dnskey <keytag>" or "keymgr example.org ds
<keytag>" to get the corresponding information for submission to the parent
zone.
Is there an easier way of achieving this, especially without the manual key
identification step? Ideal would be a single command that I can run for
the zone of interest and that outputs the dnskey and/or ds information of
the new key.
Thanks,
Maxi
Right now I have two zone types for my knot test setup, one where knot
is doing DNSSEC signing as a slave (AXFR in -> sign -> AXFR out) and
one where the knot is the master for the zone and zone data is coming
out of a git repository and gets signed.
Reading older threads on this ML and browsing the configuration has
led me to the following configuration and I wanted to make sure this
is actually supported or if there is a best practice that is
different.
1) Inline DNSSEC signing for slave zone.
zone:
- domain: example.com
serial-policy: unixtime
storage: "/var/lib/knot/slave"
file: "%s.zone"
zonefile-load: difference
dnssec-signing: on
dnssec-policy: rsa-de
master: ns1_signer
notify: ns1
acl: acl_ns1
policy:
- id: rsa-de
algorithm: RSASHA256
ksk-size: 2048
zsk-size: 1024
ksk-submission: tld_de
This seems to work fine, zone gets transferred from master (with low
serial), signed and with a new unixtime serial transferred out again.
I'm not sure if "zonefile-load: difference " makes any difference for
a slave zone but without it I get warnings about possibly malformed
IXFRs...
2) Inline DNSSEC for master zone from git:
zone:
- domain: dnssec-test.intern
serial-policy: unixtime
storage: "/var/lib/knot/master"
file: "%s.zone"
zonefile-sync: -1
dnssec-signing: on
dnssec-policy: rsa
acl: acl_ns1
zonefile-load: difference-no-serial
policy:
- id: rsa
algorithm: RSASHA256
ksk-size: 2048
zsk-size: 1024
This also works but I get warnings like this:
[dnssec-test.intern.] journal, discontinuity in changes history (1540307365 -> 28), dropping older changesets
Is this expected? Also I read in older threads that this might fill
up the journal. Is that still the case?
Best Regards
Sebastian
--
GPG Key: 0x58A2D94A93A0B9CE (F4F6 B1A3 866B 26E9 450A 9D82 58A2 D94A 93A0 B9CE)
'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE SCYTHE.
-- Terry Pratchett, The Fifth Elephant
Hi,
I'm currently testing a KSK algorithm rollover with my zone. I changed
the signature scheme from RSA to ECDSA. Knot started adding new RRSIGs
and new keys and now waits for the new DS to be published at the
parent zone. One thing strikes me as odd though:
http://dnsviz.net/d/6v6.de/W9AmtA/dnssec/
Looking at the graph the new KSK (54879) is not signing anything right
now. Shouldn't it sign the DNSKEY records of the ZSKs so that the
chain stays intact when the DS record changed at the parent zone?
Kind Regards
Sebastian
--
GPG Key: 0x58A2D94A93A0B9CE (F4F6 B1A3 866B 26E9 450A 9D82 58A2 D94A 93A0 B9CE)
'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE SCYTHE.
-- Terry Pratchett, The Fifth Elephant
Dear Volker,
it is true that the method for creating a CSK is not explicitly
mentioned in the documentation; we shall fix that. You can create a CSK
using our keymgr utility by specifying both the 'ksk=yes' and 'zsk=yes'
parameters of the 'generate' command. E.g.
$ keymgr -c /path/to/knot.conf example.com. generate ksk=yes zsk=yes remove=+1y
creates an immediately active CSK with a 1 year lifetime. You can check
out all possible timestamp settings in the docs
<https://www.knot-dns.cz/docs/2.7/singlehtml/index.html#generate-arguments>.
Please note that the geoip module is currently not very well integrated
into Knot's DNSSEC workflow. For instance, the only way to refresh
RRSIGs precomputed by the module is to reload it (knotc zone-reload).
One approach for now could be to create a CSK with a strong algorithm
(e.g. the default ECDSAP256SHA256) and a long lifetime, e.g. 1 year, and
to set the same lifetime for the RRSIGs. The policy configuration could
look like this:
policy:
- id: manual
manual: on
algorithm: ECDSAP256SHA256
rrsig-lifetime: 365d
zone:
- domain: example.com.
dnssec-signing: on
dnssec-policy: manual
Then you would only have to perform a manual key rollover while
reloading the zone so that the module computes the new signatures. We
will update the documentation to include this information. In addition,
you can sync the key from one server to others by copying the KASP lmdb
database e.g. using the mdb_dump and mdb_load tools. If you have any
further questions, let us know!
Best regards,
Mark
On 19.10.2018 10:13, Volker Janzen wrote:
> Hi all,
>
> I'd like to test the geoip module with a signed zone. The
> documentation recommends using manual mode for signing. As far as I
> know, the geoip information is not transferred via AXFR. That would
> mean, that I've to transfer the signing key to the secondary servers
> along with the geoip (and zone) configuration. To reduce caveats with
> ZSKs, I'd like to use CSK. As a result I just need to sync one key per
> zone to secondary servers. I checked the Knot documentation on how to
> use a CSK for a zone, but the CSK is only mentioned twice in the
> documentation with no example on how to actually use it. Can someone
> point me to a configuration example for setting up a CSK?
>
>
> Kind regards,
> Volker
Hi all,
I'd like to test the geoip module with a signed zone. The documentation
recommends using manual mode for signing. As far as I know, the geoip
information is not transferred via AXFR. That would mean, that I've to
transfer the signing key to the secondary servers along with the geoip
(and zone) configuration. To reduce caveats with ZSKs, I'd like to use
CSK. As a result I just need to sync one key per zone to secondary
servers. I checked the Knot documentation on how to use a CSK for a
zone, but the CSK is only mentioned twice in the documentation with no
example on how to actually use it. Can someone point me to a
configuration example for setting up a CSK?
Kind regards,
Volker
Hi,
I'm using a zone with DNSSEC signing enabled that is updated using DDNS.
The update procedure is very simple and looks like this:
==> test_ddns.sh <==
#! /bin/sh
ZONE="example.org."
cat << EOF | nsupdate
server localhost
zone ${ZONE}
update delete ${ZONE} A
update add ${ZONE} 60 IN A 127.0.0.1
send
quit
EOF
And the corresponding output in the knot log is this:
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DDNS, processing 1 updates
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DNSSEC, zone is up-to-date
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DNSSEC, next signing at 1970-01-01T01:00:00
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DDNS, finished, no changes to the zone were made
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DDNS, processing 1 updates
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DNSSEC, successfully signed
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DNSSEC, next signing at 2018-10-24T22:58:46
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DDNS, update finished, serial 1539809849 -> 1539809926, 0.02 seconds
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DDNS, processing 1 updates
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DNSSEC, zone is up-to-date
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DNSSEC, next signing at 1970-01-01T01:00:00
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] DDNS, finished, no changes to the zone were made
Okt 17 22:58:46 backroad knotd[14134]: info: [example.org.] zone file updated, serial 1539809849 -> 1539809926
I'm wondering if the "next signing at 1970-01-01T01:00:00" output is correct,
as it seems suspicious to me.
Running "knotc zone-status example.org" gives the following output:
[example.org.] role: master | serial: 1539809926 | transaction: none | freeze: no | refresh: not scheduled | update: not scheduled | expiration: not scheduled | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: not scheduled | NSEC3 resalt: +29D23h53m28s | parent DS query: not scheduled
Is this expected behavior or a bug in knot?
I can give more information or create a proper bugreport if needed.
I also recently had a problem where knot didn't respond to DDNS updates until
it was restarted. I don't know if this is related or a different problem;
however, I currently don't have more information about it.
Thanks,
Maxi
Hi Oliver,
> Ah, the mistake was that changing the dnssec-policy *and* dnssec-signing
> in one go does not insert the delete-CDS/CDNSKEY records since knot
> immediately stops all dnssec related actions. Thanks!
You at least want to have the special CDNSKEY record -signed- anyway ;)
> Am I right that, unlike the signing process (KSK submission attempts),
> there is no built-in functionality in knot, that takes care about the
> right time to remove the key material from the zone?
Yes. We didn't care much about this use case, sorry. I guess it's not so
difficult to achieve manually. We only needed to automate those
processes that start automatically (e.g. KSK rollover).
> So, basically I should wait
> [propagation-delay] + [max TTL seen in zone/knot_soa_minimum]
> seconds until I manually remove the material.
No, you first need to check when your parent zone removed the DS record.
Afterwards wait for its TTL + propagation_delay.
BR,
Libor
Hi,
I am experimenting with the latest knot and its wonderful dnssec autosigner
functionality. It works pretty nicely, but I am a bit lost in the unsigning
process. My zone looks basically like this:
zone:
- domain: "domain.tld."
storage: "/home/oliver/knot/zones"
file: "sign.local"
zonefile-load: "difference"
dnssec-signing: "on"
dnssec-policy: "dnssec-policy"
serial-policy: "unixtime"
policy:
- id: "dnssec-policy"
zsk-lifetime: "2592000"
ksk-lifetime: "31536000"
propagation-delay: "0"
nsec3: "off"
ksk-submission: "local"
cds-cdnskey-publish: "always"
What is the safe way to turn off dnssec once the DS has been seen by
the resolver/knot?
I tried dnssec-signing: "off" but that did not change anything;
I also created a second policy called "unsign-policy" where I switched
cds-cdnskey-publish to "delete-dnssec".
I expected the CDNSKEY/CDS to immediately turn into "0 3 0 AA==" and so on,
since my propagation-delay is 0 (for faster test results...).
Thanks for any hints!
--
Oliver PETER oliver(a)gfuzz.de 0x456D688F
Hello,
Is there a way to force AXFR for certain masters in the
configuration? I have a situation with one of the master servers where IXFR
fails - the received response is "Format error". Knot does not fall back to
AXFR in this case and the zone is going to expire. Using zone-retransfer
can fix it. Transferring this zone using kdig is also OK.
I can see this with Knot 2.7.2 and Knot 2.7.3 as well.
BR
Ales
Hi,
Is it possible to restrict DNS64 module only for specific IPv6 subnets
in Knot Resolver? The reasoning behind this is that this would make it
possible to run DNS64 resolver on the same instance with the "normal"
resolver in a way that fake AAAA records are returned only to IPv6-only
clients whereas normal dual-stack or IPv4-only clients are served with
unmodified A records.
I found an issue [1] that seems to be related to the very same thing,
but I was left a little bit uncertain what the current situation is and
how this should/could be configured.
[1] https://gitlab.labs.nic.cz/knot/knot-resolver/issues/368
Cheers,
Antti
Hi folks,
sorry for the spam.. now with the right subject..
Maybe anybody can help me..
Is there any possibility to sign with more than one core? The
"background-workers" parameter didn't help...
KnotDNS is using only one core for signing...
thanks a lot
best regards
--
Christian Petrasch
Senior System Engineer
DNS/Infrastructure
IT-Services
DENIC eG
Kaiserstraße 75-77
60329 Frankfurt am Main
GERMANY
E-Mail: petrasch(a)denic.de
Fon: +49 69 27235-429
Fax: +49 69 27235-239
http://www.denic.de
PGP-KeyID: 549BE0AE, Fingerprint: 0E0B 6CBE 5D8C B82B 0B49 DE61 870E 8841
549B E0AE
Angaben nach § 25a Absatz 1 GenG: DENIC eG (Sitz: Frankfurt am Main)
Vorstand: Helga Krüger, Martin Küchenthal, Andreas Musielak, Dr. Jörg
Schweiger
Vorsitzender des Aufsichtsrats: Thomas Keller
Eingetragen unter Nr. 770 im Genossenschaftsregister, Amtsgericht
Frankfurt am Main
Hi,
Every time we switch DNSSEC on for a single zone, Knot iterates over all
zones (and logs something trivial about each). This does not seem very
efficient to us. Is there a reason for it? Following the
documentation, we had not expected this behaviour:
https://www.knot-dns.cz/docs/2.6/html/configuration.html#zone-signing
We are running Knot DNS with ~3500 zones, so you can imagine this has a
bit of an impact.
Thanks,
-Rick
Hi,
Roland and I ran into a crashing condition for knotd 2.6.[689],
presumably caused by a race condition in the threaded use of PKCS #11
sessions. We use a commercial, replicated, networked HSM and not SoftHSM2.
WORKAROUND:
We do have a work-around with "conf-set server.background-workers 1", so
this is not a blocking condition for us, but for handling our ~1700 zones
we would like to add concurrency back later.
PROBLEM DESCRIPTION:
Without this work-around, we see crashes quite reliably, on a load that
does a number of zone-set/-unset commands, fired by sequentialised knotc
processes to a knotd that continues to fire zone signing concurrently.
The commands are generated with the knot-aware option -k from ldns-zonediff,
https://github.com/SURFnet/ldns-zonediff
ANALYSIS:
Our HSM reports errors that look like a session handle is reused and
then repeatedly logged into, but not always, so it looks like a race
condition on a session variable,
27.08.2018 11:48:59 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AEE] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AED] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:49:02 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:03 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:55:50 | [0000744C:0000744E] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
These errors stopped being reported with the work-around configured.
Until that time, we have crashes, of which the following dumps one:
Thread 4 "knotd" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffcd1bd700 (LWP 27375)]
0x00007ffff6967428 in __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6967428 in __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007ffff696902a in __GI_abort () at abort.c:89
#2 0x00007ffff69a97ea in __libc_message (do_abort=do_abort@entry=2,
fmt=fmt@entry=0x7ffff6ac2ed8 "*** Error in `%s': %s: 0x%s ***\n") at
../sysdeps/posix/libc_fatal.c:175
#3 0x00007ffff69b237a in malloc_printerr (ar_ptr=<optimized out>,
ptr=<optimized out>,
str=0x7ffff6ac2fe8 "double free or corruption (out)", action=3) at
malloc.c:5006
#4 _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at
malloc.c:3867
#5 0x00007ffff69b653c in __GI___libc_free (mem=<optimized out>) at
malloc.c:2968
#6 0x0000555555597ed3 in ?? ()
#7 0x00005555555987c2 in ?? ()
#8 0x000055555559ba01 in ?? ()
#9 0x00007ffff7120338 in ?? () from /usr/lib/x86_64-linux-gnu/liburcu.so.4
#10 0x00007ffff6d036ba in start_thread (arg=0x7fffcd1bd700) at
pthread_create.c:333
#11 0x00007ffff6a3941d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:109
DEBUGGING HINTS:
Our suspicion is that you may not have set the mutex callbacks when
invoking C_Initialize() on PKCS #11, possibly because the intermediate
layers of abstraction hide this from view; this kind of mistake happens
quite often. Then again, the double free might pose another hint.
This is on our soon-to-go-live platform, so I'm afraid it'll be very
difficult to do much more testing, I hope this suffices for your debugging!
I hope this helps Knot DNS to move forward!
-Rick
Hi folks,
maybe anybody can help me..
Is there any possibility to sign with more than one core ? The
"background-workers" parameter didn't help...
KnotDNS is using only one core for signing..
thanks a lot
best regards
--
Christian Petrasch
Senior System Engineer
DNS/Infrastructure
IT-Services
DENIC eG
Kaiserstraße 75-77
60329 Frankfurt am Main
GERMANY
E-Mail: petrasch(a)denic.de
http://www.denic.de
PGP-KeyID: 549BE0AE, Fingerprint: 0E0B 6CBE 5D8C B82B 0B49 DE61 870E 8841
549B E0AE
Angaben nach § 25a Absatz 1 GenG: DENIC eG (Sitz: Frankfurt am Main)
Vorstand: Helga Krüger, Martin Küchenthal, Andreas Musielak, Dr. Jörg
Schweiger
Vorsitzender des Aufsichtsrats: Thomas Keller
Eingetragen unter Nr. 770 im Genossenschaftsregister, Amtsgericht
Frankfurt am Main
Hi admin,
I find your knot-dns really amazing software, but I have an issue with
master & slave configuration. When I configure a domain zone on the
master, the zone details do not propagate to the slave unless I manually
specify the domain name in the slave's zone configuration.
Is there any way to configure this?
I hope for your reply soon after you receive this email. I'm doing this for my
personal use and a demo.
Regard
Innus Ali
From: India
Hello,
I have an issue with a zone for which KNOT is the slave server. I am not able
to transfer the zone: refresh, failed (no usable master). BIND is able to
transfer this zone, and AXFR with the host command works as well. There are
more domains on this master and the others are working. The thing is
that I can see in Wireshark that the AXFR starts and zone transfer
begins, but for some reason, after the first ACK to the AXFR response,
KNOT terminates the TCP connection with RST, resulting in AXFR failure.
The AXFR response is spread over several TCP segments.
I can provide traces privately.
KNOT 2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52
Thanks for help.
BR
Ales Rygl
Dear all,
I use knot 2.7.1 with automatic DNSSEC signing and key management.
For some zones I have used "cds-cdnskey-publish: none".
As .CH/.LI is about to support CDS/CDNSKEY (RFC 8078, RFC 7344), I thought
I should enable publishing the CDS/CDNSKEY RRs for all my zones. However,
the zones which are already secure (trust anchor in the parent zone) do not
publish the CDS/CDNSKEY record when the setting is changed to
"cds-cdnskey-publish: always".
I have not been able to reproduce this error on new zones or new zones
signed and secured with a trust anchor in the parent zone for which I
then change the cds-cdnskey-publish setting from "none" to "always".
This indicates that there seems to be some state error for my existing
zones only.
I tried but w/o success:
knotc zone-sign <zone>
knotc -f zone-purge +journal <zone>
; publish an inactive KSK
keymgr <zone> generate ... ; knotc zone-sign <zone>
Completely removing the zone (and all keys) and restarting fixes the
problem obviously. However, I cannot do this for all my zones as I would
have to remove the DS record in the parent zone prior to this...
Any idea?
Daniel
Hi all,
I would like to kindly ask you to check the state of the Debian repository. It
looks like it is a bit outdated... The latest version available is
2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52 while 2.7.0 has
already been released.
Thanks
BR
Ales Rygl
Hey,
We're scripting around Knot, and for that we pipe sequences of commands
to knotc. We're running into a few wishes for improved rigour that look
like they are generic:
1. WAITING FOR TRANSACTION LOCKS
This would make our scripts more reliable, especially when we need to do
manual operations on the command line as well. There is no hurry
to detect lock-freeing operations immediately, so retries with
exponential backoff would be quite alright for us.
Deadlocks are an issue when these are nested, so this would at best be
an option to knotc, but many applications call for a single level and
these could benefit from the added sureness of holding the lock.
2. FAILING ON PARTIAL OPERATIONS
When we script a *-begin, act1, act2, *-commit sequence and pipe it into knotc,
it is not possible to see intermediate results. This could be solved
if any failure (including for a non-locking *-begin) would *-abort and
return a suitable exit code. Only success in *-commit would exit(0), and
that would allow us to detect overall success.
We've considered making a wrapper around knotc, but that might actually
reduce its quality and stability, so instead we now propose these features.
Just let me know if you'd like to see the above as a patch (and a repo
to use for it).
Cheers,
-Rick
Hello,
I am seeing segfault crashes from knot + libknot7 version 2.6.8-1~ubuntu
for amd64 during a zone commit cycle. The transaction is empty, by the
way, but in general we use a utility to compare the actual state ("Ist")
with the desired state ("Soll").
This came up while editing a zone that hasn't been configured yet, so we
are obviously doing something strange. (The reason is I'm trying to
switch DNSSEC on/off in a manner orthogonal to the zone data transport,
which is quite clearly not what Knot DNS was designed for. I will post
a feature request that could really help with orthogonality.)
I'll attach two flows, occurring virtually at the same time on our two
machines while doing the same thing locally; so the bug looks
reproducible. If you need more information, I'll try to see what I can do.
Cheers,
-Rick
Jul 24 14:22:59 menezes knotd[17733]: info: [example.com.] control,
received command 'zone-commit'
Jul 24 14:22:59 menezes kernel: [1800163.196199] knotd[17733]: segfault
at 0 ip 00007f375a659410 sp 00007ffde37d46d8 error 4 in
libknot.so.7.0.0[7f375a64b000+2d000]
Jul 24 14:22:59 menezes systemd[1]: knot.service: Main process exited,
code=killed, status=11/SEGV
Jul 24 14:22:59 menezes systemd[1]: knot.service: Unit entered failed state.
Jul 24 14:22:59 menezes systemd[1]: knot.service: Failed with result
'signal'.
Jul 24 14:22:59 vanstone knotd[6473]: info: [example.com.] control,
received command 'zone-commit'
Jul 24 14:22:59 vanstone kernel: [3451862.795573] knotd[6473]: segfault
at 0 ip 00007ffb6e817410 sp 00007ffd2b6e1d58 error 4 in
libknot.so.7.0.0[7ffb6e809000+2d000]
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Main process exited,
code=killed, status=11/SEGV
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Unit entered failed
state.
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Failed with result
'signal'.
Hi,
after updating from 2.6.8 to 2.7.0 none of my zones gets loaded:
failed to load persistent timers (invalid parameter)
error: [nord-west.org.] zone cannot be created
How can I fix this?
Kind Regards
Bjoern
Hi all,
I would kindly ask for help. After a tiny zone record modification I am
receiving the following error(s) when trying to access zone data (zone-read):
Aug 02 15:09:34 idunn knotd[779]: warning: [xxxxxxxx.] failed to update
zone file (not enough space provided)
Aug 02 15:09:34 idunn knotd[779]: error: [xxxxxxx.] zone event 'journal
flush' failed (not enough space provided)
There is plenty of space on the server; I suppose it is related to the
journal and its database.
Many thanks in advance, it is quite important zone.
KNOT 2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52
BR
Ales Rygl
Hi,
I would like to ask about the implementation of resource records in an
RRSet in Knot DNS.
I have a domain with three TXT records, all with class IN, for the same
label ('@'), but with different TTLs. In the nsd and BIND DNS servers
everything seems fine, but in Knot DNS I get this warning and error:
knotd[551]: warning: [xxxxxxx.xxx.] zone loader, RRSet TTLs mismatched,
node 'xxxxxxx.xxx.' (record type TXT)
knotd[551]: error: [xxxxxxx.xxx.] zone loader, failed to load zone, file
'/etc/knot/files/master.gen/xxxxxxx.xxx' (TTL mismatch)
knotd[551]: error: [xxxxxxx.xxx.] failed to parse zonefile (failed)
knotd[551]: error: [xxxxxxx.xxx.] zone event 'load' failed (failed)
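For context, RFC 2181 (section 5.2) requires all records of a single RRSet to share one TTL, which is presumably what the zone loader enforces here. The check itself is simple to sketch (an illustrative helper, not Knot's actual code):

```python
from collections import defaultdict

def ttl_mismatches(records):
    """records: iterable of (owner, ttl, rtype) tuples. Return the
    RRSets -- keyed by (owner, rtype) -- whose members carry more than
    one distinct TTL, violating RFC 2181 section 5.2."""
    seen = defaultdict(set)
    for owner, ttl, rtype in records:
        seen[(owner, rtype)].add(ttl)
    return {rrset: ttls for rrset, ttls in seen.items() if len(ttls) > 1}
```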
Is this correct behavior that the other DNS servers simply don't check,
or is it a bug in Knot DNS?
Thank you for your reply.
Cheers,
--
Zdenek
Hi,
I am trying to make DNSSEC signing orthogonal to zone data transport in
the DNSSEC signer solution for SURFnet. This translates directly to an
intuitive user interface, where domain owners can toggle DNSSEC on and
off with a flick of a switch.
Interestingly, keymgr can work orthogonally to zone data; keys can be
added and removed regardless of whether a zone has been set up in Knot DNS.
Where the orthogonality is broken, is that I need to explicitly set
dnssec-signing: to on or off. This means that I need to create a zone,
just to be able to tell Knot DNS about the keys. Of course there are
complaints when configuring Knot DNS without a zone data file present.
The most elegant approach would be to make dnssec-signing an
opportunistic option, meaning "precisely when there are keys
available in keymgr for this zone". Such a setting could then end
up in the policy for any such zone, and that can be done when the zone
data is first sent, without regard to what we are trying to make an
orthogonal dimension.
I have no idea if this is difficult to make. I do think it may be a use
case that wasn't considered before, which is why I'm posting it here.
If this is easy and doable, please let me know; otherwise I will have to
work around Knot DNS (ignoring errors, overruling previously set content
just to be sure it is set, and so on) to achieve the desired orthogonality.
Cheers,
-Rick
Hello,
We're building a replicated Signer machine, based on Knot DNS. We have
a PKCS #11 backend for keys, and replication working for it.
On one machine we run
one# keymgr orvelte.nep generate ...
and then use the key hash on the other machine in
two# keymgr orvelte.nep share ...
This, however, leads to a report that the identified key could not be
found. Clearly, there is more to the backing store than just the key
material in PKCS #11.
What is the thing I need to share across the two machines, and how can I
do this?
Thanks,
-Rick
We're experiencing occasional failures with Knot crashing while running as a slave. The behavior is as follows: the slave will run for 2 months or so and then segfault. Our system automatically restarts the process, but after 15 minutes or less, the segfault happens again. This repeats until we remove the /var/lib/knot/journal and /var/lib/knot/timers directories. This seems to fix it up for a while: a newly started process will run fine for another couple of months.
More details on our setup: These systems serve a little less than a hundred zones, some of which change at a rapid rate. We have configured the servers to not flush the zone data to regular files. The server software is 2.5.7, but with the changes from the "ecs-patch" branch applied.
A while back, I tried a release from the newer branch (I'm pretty sure it was 2.6.4), but I had a problem there where some servers were falling behind the master, as evidenced by their SOA serial number. Diagnosing this on a more recent branch probably makes more sense, but I'd be a little leery of dealing with two problems, not just one.
I can provide various data: the (gigantic) seemingly "corrupt" journal/timer files and the segfault messages from the syslog. I don't have any coredumps, but I'll turn those on today. Given the nature of the problem, it might take a while for it to manifest.
Chuck
Hello
How can I dump a zone stored in Knot DNS to a file?
DNSSEC-signed zones are overwritten, apparently using a zone dump functionality; this is noticeable by the comment ";; Zone dump (Knot DNS 2.6.3)".
Regards
Hi, just getting up to speed on Knot DNS and trying to get dynamically
added secondaries working via bootstrapping.
My understanding is that when the server receives a NOTIFY from an
authorized master, if the zone is not already configured it will add it
and AXFR it, right?
In my conf:
acl:
  - id: "acl_master"
    address: "64.68.198.83"
    address: "64.68.198.91"
    action: "notify"
remote:
  - id: "master"
    address: "64.68.198.83@53"
    address: "64.68.198.91@53"
But whenever I send a NOTIFY from either of those masters, nothing happens
on the Knot DNS side. I have my logging set as:
log:
  - target: "syslog"
    any: "debug"
Thx
- mark
Hello,
I'm trying to use Knot 2.6.7 in a configuration where zone files are
preserved (including comments, ordering and formatting), yet at the same
time Knot performs DNSSEC signing – something similar to the
inline-signing feature of BIND. My config file looks like this:
policy:
  - id: ecdsa_fast
    nsec3: on
    ksk-shared: on
    zsk-lifetime: 1h
    ksk-lifetime: 5h
    propagation-delay: 10s
    rrsig-lifetime: 2h
    rrsig-refresh: 1h
template:
  - id: mastersign
    file: "/etc/knot/%s.zone"
    zonefile-sync: -1
    zonefile-load: difference
    journal-content: all
    dnssec-signing: on
    dnssec-policy: ecdsa_fast
    serial-policy: unixtime
    acl: acl_slave
zone:
  - domain: "example.com."
    template: mastersign
It seems to work well for the first run; I can see that the zone got
signed properly:
>
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 1
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> ;; Changes between zone versions: 1 -> 1529578258
> ;;Removed
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1529578258 3600 900 1814400 60
> example.com. 0 CDNSKEY 257 3 13
> …lots of DNSSEC data.
However, if I try to update the unsigned zone file, strange things
happen. If I just add something to a zone and increase the serial, I get
these errors in the log:
>
> Jun 21 13:00:08 localhost knotd[2412]: warning: [example.com.] zone file changed, but SOA serial decreased
> Jun 21 13:00:08 localhost knotd[2412]: error: [example.com.] zone event 'load' failed (value is out of range)
If I set the serial to be higher than the serial of last signed zone, I
get a slightly different error:
>
> Jun 21 13:22:36 localhost knotd[3096]: warning: [example.com.] journal, discontinuity in changes history (1529580085 -> 1529580084), dropping older changesets
> Jun 21 13:22:36 localhost knotd[3096]: error: [example.com.] zone event 'load' failed (value is out of range)
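For background, the "SOA serial decreased" check presumably follows RFC 1982 serial number arithmetic, where comparison wraps modulo 2^32. A minimal sketch of that comparison (illustrative, not Knot's actual code):

```python
def serial_gt(a, b, bits=32):
    """RFC 1982: serial a is 'greater than' b iff they differ and the
    wrapped distance from b to a is less than half the number space."""
    return a != b and ((a - b) % (1 << bits)) < (1 << (bits - 1))
```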
In either case, when I look into the journal after the reload of the
zone, I see just the unsigned zone:
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 2
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 2 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> second.example.com. 60 TXT "second"
Yet the server keeps serving the previous signed zone no matter what I
try. The only thing that helps is a cold restart of Knot, after which the
zone gets signed again.
So this approach is obviously not working as expected. If I comment out
the option `zonefile-load: difference`, I get a somewhat working solution
where the zone is completely re-signed during each reload, and I get this
warning in the log:
> Jun 21 13:27:38 localhost knotd[3156]: warning: [example.com.] with automatic DNSSEC signing and outgoing transfers enabled, 'zonefile-load: difference' should be set to avoid malformed IXFR after manual zone file update
I guess this should not bother me much as long as I keep the serial
numbers of unsigned zones significantly different from the signed ones.
The only problem is that this completely kills IXFR transfers, as well as
signing only the differences.
So far the only solution I see is to run two instances of Knot: one
reading the zone file from disk without signing and transferring it to
another instance, which would do the signing in slave mode.
Is there anything I'm missing here?
Sorry for such a long e-mail and thank you for reading all the way here.
Best regards,
Ondřej Caletka
Hi!
One of our customers uses Knot 2.6.7 as a hidden master which sends
NOTIFYs to our slave service. He reported that Knot cannot deliver the
NOTIFYs, i.e.:
knotd[10808]: warning: [example.com.] notify, outgoing,
2a02:850:8::6@53: failed (connection reset)
It seems that Knot sometimes tries to send the NOTIFY over TCP (I also
see NOTIFYs via UDP). Unfortunately our NOTIFY receiver only supports UDP.
This is the first time I have seen a name server sending NOTIFYs over
TCP. Is this typical behavior in Knot? Can I force Knot to always send
NOTIFYs over UDP?
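For reference, NOTIFY is identified purely by opcode 4 in the DNS header (RFC 1996) and is the same message over UDP or TCP. A minimal sketch of building and inspecting such a header (illustrative only; the question section with the zone's SOA is omitted for brevity):

```python
import struct

DNS_OPCODE_NOTIFY = 4

def make_notify_header(qid):
    """12-byte DNS header with opcode NOTIFY and the AA bit set, as
    NOTIFY requests from a master commonly carry it; all section
    counts left at 0 here, though a real NOTIFY includes the zone's
    SOA question."""
    flags = (DNS_OPCODE_NOTIFY << 11) | (1 << 10)  # opcode | AA
    return struct.pack("!6H", qid, flags, 0, 0, 0, 0)

def opcode(header):
    """Extract the 4-bit opcode from a wire-format DNS header."""
    flags = struct.unpack("!6H", header)[1]
    return (flags >> 11) & 0xF
```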
Thanks
Klaus
Hello
I am using ecdsap256sha256 as the algorithm. Why does the DS record for the KSK DNSKEY (flags = 257) use digest type SHA-1 (= 1) and not SHA-256 (= 2)?
For example:
> dig DNSKEY nic.cz | grep 257
nic.cz. 871 IN DNSKEY 257 3 13 LM4zvjUgZi2XZKsYooDE0HFYGfWp242fKB+O8sLsuox8S6MJTowY8lBD jZD7JKbmaNot3+1H8zU9TrDzWmmHwQ==
> dig DNSKEY nic.cz | grep 257 > dnspub.key
> jdnssec-dstool dnspub.key
nic.cz. 868 IN DS 61281 13 1 091CECC4D2AADB7AC8C4DF413DDF9C5B0B61E5B6
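Worth noting: the digest type is a property of the DS record chosen by the generating tool (here producing SHA-1), not of the DNSKEY itself. The DS digest construction from RFC 4034 / RFC 4509 can be sketched as follows (with made-up input bytes; not a real nic.cz key):

```python
import hashlib

def ds_digest(owner_wire, dnskey_rdata, digest_type):
    """DS digest = hash(owner name in wire format || DNSKEY RDATA);
    type 1 is SHA-1 (40 hex chars), type 2 is SHA-256 (64 hex chars)."""
    algo = {1: hashlib.sha1, 2: hashlib.sha256}[digest_type]
    return algo(owner_wire + dnskey_rdata).hexdigest().upper()
```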
Regards
dp
Hello all,
my Knot DNS is now in production and I would like to set up some
backup tasks for the configuration and of course the keys. Are there any
recommendations regarding backup? And restore? I can see that it is very
easy to dump the current config, but I am not sure how to back up the
keys. What do you recommend? Saving the content of /var/lib/knot on an
hourly/daily basis? I am not using shared keys.
Thanks
With regards
Ales
Hi
I have a question about commit “01b00cc47efe”, which replaced select()
with poll().
The performance of epoll is better than select/poll when monitoring large
numbers of file descriptors.
Could you please let me know why you chose poll()?
Thanks for your reply!
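As an aside, the readiness APIs under discussion are easy to observe through Python's selectors module, which picks the best mechanism available on the platform; this only illustrates the API, not Knot's internals:

```python
import selectors
import socket

# DefaultSelector chooses epoll on Linux, kqueue on BSD,
# otherwise falls back to poll/select.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")                   # make `b` readable
events = sel.select(timeout=1.0)  # returns the ready descriptors
ready = [key.fileobj for key, mask in events]
```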
Dear all,
While trying to migrate our DNS to Knot I have noticed that a slave
server with 2 GB RAM is facing memory exhaustion. I am running
2.6.5-1+0~20180216080324.14+stretch~1.gbp257446. There are 141 zones
totaling around 1 MB. Knot is acting as a pure slave server with
minimal configuration.
There is nearly 1.7GB of memory consumed by Knot on a freshly rebooted
server:
root@eira:/proc/397# cat status
Name: knotd
Umask: 0007
State: S (sleeping)
Tgid: 397
Ngid: 0
Pid: 397
PPid: 1
TracerPid: 0
Uid: 108 108 108 108
Gid: 112 112 112 112
FDSize: 64
Groups: 112
NStgid: 397
NSpid: 397
NSpgid: 397
NSsid: 397
VmPeak: 24817520 kB
VmSize: 24687160 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 1743400 kB
VmRSS: 1743272 kB
RssAnon: 1737088 kB
RssFile: 6184 kB
RssShmem: 0 kB
VmData: 1781668 kB
VmStk: 132 kB
VmExe: 516 kB
VmLib: 11488 kB
VmPTE: 3708 kB
VmPMD: 32 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 21
SigQ: 0/7929
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffe7bfbbefc
SigIgn: 0000000000000000
SigCgt: 0000000180007003
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: f
Cpus_allowed_list: 0-3
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 260
nonvoluntary_ctxt_switches: 316
root@eira:/proc/397#
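To watch growth in fields like VmRSS and VmHWM over time, they can be scraped from /proc with a small helper (hypothetical, not part of Knot):

```python
def proc_status_kb(status_text, field):
    """Return the value in kB of a field such as 'VmRSS' or 'VmHWM'
    from the text of /proc/<pid>/status, or None if absent."""
    for line in status_text.splitlines():
        if line.startswith(field + ":"):
            return int(line.split()[1])
    return None
```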
Config:
server:
    listen: 0.0.0.0@53
    listen: ::@53
    user: knot:knot
log:
  - target: syslog
    any: info
mod-rrl:
  - id: rrl-10
    rate-limit: 10   # Allow 200 resp/s for each flow
    slip: 2          # Every other response slips
mod-stats:
  - id: custom
    edns-presence: on
    query-type: on
    request-protocol: on
    server-operation: on
    request-bytes: on
    response-bytes: on
    edns-presence: on
    flag-presence: on
    response-code: on
    reply-nodata: on
    query-type: on
    query-size: on
    reply-size: on
template:
  - id: default
    storage: "/var/lib/knot"
    module: mod-rrl/rrl-10
    module: mod-stats/custom
    acl: [allowed_transfer]
    disable-any: on
    master: idunn
I was pretty sure that a VM with 2GB RAM is enough for my setup :-)
BR
Ales
Hello all,
I noticed that Knot (2.6.5) creates an RRSIG for the CDS/CDNSKEY RRset
with the ZSK/CSK only.
I was wondering if this is acceptable behavior, as RFC 7344, section 4.1
(CDS and CDNSKEY Processing Rules) states:
o Signer: MUST be signed with a key that is represented in both the
current DNSKEY and DS RRsets, unless the Parent uses the CDS or
CDNSKEY RRset for initial enrollment; in that case, the Parent
validates the CDS/CDNSKEY through some other means (see
Section 6.1 and the Security Considerations).
Specifically, I read "represented in both the current DNSKEY and DS
RRsets" to mean that the CDS/CDNSKEY RRset must be signed with a KSK/CSK
and not only with a ZSK plus a trust chain from the DS to the KSK.
I tested both BIND 9.12.1 and PowerDNS Auth 4.0.5 as well. PowerDNS Auth
behaves the same as Knot 2.6.5 but BIND 9.12.1 always signs the
CDS/CDNSKEY RRset with at least the KSK.
Do I read the RFC rule too strictly? To be honest, I see nothing wrong
with the CDS/CDNSKEY RRset being signed only by the ZSK, but BIND's
behavior and the not-so-clear RFC statement keep me wondering.
Thanks,
Daniel
I'm getting started with Knot Resolver and am a bit unclear as to how this
config should be structured.
The result I'm looking for is to forward queries to resolver A if the
source is subnet A, unless the query is for the local domain, in which
case the local DNS should be queried.
I've been working with the config below to accomplish this. However, I'm
finding that this config only forwards when the request matches the local
todname, uses root hints otherwise, and never uses the FORWARD server.
Ultimately, this server will resolve DNS for several subnets and will
forward queries to different servers based on the source subnet.
Would someone mind pointing me in the right direction on this, please?
for name, addr_list in pairs(net.interfaces()) do
    net.listen(addr_list)
end

-- drop root
user('knot', 'knot')

-- Auto-maintain root TA
modules = {
    'policy',  -- Block queries to local zones/bad sites
    'view',    -- view filters
    'hints',   -- Load /etc/hosts and allow custom root hints
    'stats',
}

-- 4GB local cache for record storage
cache.size = 4 * GB

-- If the request is from eng subnet
if (view:addr('192.168.168.0/24')) then
    if (todname('localnet.mydomain.com')) then
        policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
            {todname('localnet.mydomain.com')}))
    else
        view:addr('192.168.168.0/24', policy.FORWARD('68.111.106.68'))
    end
end
--
855.ONTRAPORT
ontraport.com
zone-refresh [<zone>...]     Force slave zone refresh.
zone-retransfer [<zone>...]  Force slave zone retransfer (no serial check).
I would expect that retransfer does a complete AXFR. But instead it
sometimes just does a refresh:
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] AXFR, incoming, 2a02:111:9::5@53: starting
Seen with 2.6.3-1+ubuntu14.04.1+deb.sury.org+1
regards
Klaus
Hello,
Knot DNS looks awesome, thanks for that!
The benchmarks show a clear picture (for hosting) that the size of zones
doesn't matter, but DNSSEC does. I'm intrigued by the differences with NSD.
What is less clear is which form of DNSSEC was used -- online signing,
or signing only on policy refreshes and updates, or signing before the
zone gets to knotd? This distinction seems important, as it might explain
the structural difference with NSD.
Also, the documentation speaks of "DNSSEC signing for static zones" but
leaves some doubt whether this includes editing of the records using zone
transactions, or whether it relates to rosedb, or something else.
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#automatic-dnssec-sig…
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#rosedb-static-resour…
Other than this uncertainty (and confusion over the meaning of the
master: parameter), the documentation is a real treat. Thanks for a job
well done!
Best wishes,
-Rick
Hi,
After upgrading our fleet of slave servers from 2.5.4 to 2.6.4, I
noticed that, on a few slaves, a large zone that changes rapidly is
now consistently behind the master to a larger degree than we consider
normal. By "behind", I mean that the serial number reported by the
slave in the SOA record is less than that reported by the master server.
Normally we expect small differences between the serial on the master
and the slaves because our zones change rapidly. These differences are
often transient. However, after the upgrade, a subset of the slaves (always
the same ones) have a much larger difference. Fortunately, the difference
does not increase without bound.
The hosts in question seem powerful enough: one has eight 2 GHz Xeons and
32 GB RAM, which is less powerful than some of the hosts that are
keeping up, so it may be more a matter of their connectivity. Two of the
affected slaves are in the same location.
For now, I've downgraded these slaves back to 2.5.4, and they are able
to keep up again.
Is there a change that would be an obvious culprit for this, or is there
something that we could tune? One final piece of information: We
always apply the change contained in the ecs-patch branch (which
returns ECS data if the client requests it). I don't know if the
effect of this processing is significant. We do need it as part of
some ongoing research we're conducting.
Chuck
Hello,
I plan to use Docker to deploy Knot DNS.
I am going to copy all the zone configurations into the Docker image.
Then I will start two containers with two different IP addresses.
In this case, is it necessary to configure the acl and remote sections
related to master/slave replication?
I don't think so, because both IPs will reply with exactly the same zone
configuration, but please give me your opinion.
Regards,
Gael
--
Cordialement, Regards,
Gaël GIRAUD
ATLANTIC SYSTÈMES
Mobile : +33603677500
Dear Knot Resolver users,
please note that Knot Resolver now has its dedicated mailing list:
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-users
For further communication regarding Knot Resolver please subscribe to
this list. We will send new version announcements only to the new
mailing list.
--
Petr Špaček @ CZ.NIC
Hello all,
We had a weird issue with Knot serving an old version of a zone after a server reboot. After the reboot, our monitoring alerted that the zone was out of sync. Knot was then serving an older version of the zone (the zone did not update during the reboot, Knot was serving a version of the zone that was older than what it had before the reboot). The zone file on the disk had the correct serial, and knotc zone-status <zone> showed the current serial as well. However, dig @localhost soa <zone> on that box, showed the old serial. Running knotc zone-refresh <zone> didn't help, as in the logs when it went to do the refresh, it showed 'zone is up-to-date'. Running knotc zone-retransfer also did not resolve the problem, only a restart of the knotd process resolved this issue. While we were able to resolve this ourselves, it is certainly a strange issue and we were wondering if we could get any input on this.
Command output:
[root@ns02 ~]# knotc
knotc> zone-status <zone>
[<zone>] role: slave | serial: 2017121812 | transaction: none | freeze: no | refresh: +3h59m42s | update: not scheduled | expiration: +6D23h59m42s | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: not scheduled | NSEC3 resalt: not scheduled | parent DS query: not scheduled
knotc> exit
[root@ns02 ~]# dig @localhost soa <zone>
…
… 2017090416 …
…
Logs after retransfer and refresh:
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] control, received command 'zone-refresh'
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] control, received command 'zone-retransfer'
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: starting
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: finished, 0.00 seconds, 1 messages, 5119 bytes
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: zone updated, serial none -> 2017121812
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:53:03 ns02 knot[7187]: info: [<zone>] control, received command 'zone-status'
And a dig after that:
[root@ns02 ~]# dig @localhost soa crnet.cr
…
… 2017090416 …
…
-Rob
Hi,
I wrote a collectd plugin which fetches the metrics from "knotc
[zone-]status" directly from the control socket.
The code is still a bit of a work in progress, but should be mostly done.
If you want to try it out, the code is on GitHub; feedback welcome:
https://github.com/julianbrost/collectd/tree/knot-plugin
https://github.com/collectd/collectd/pull/2649
Also, I'd really like some feedback on how I use libknot, as I only
found very little documentation on it. If you have any questions, just ask.
Regards,
Julian
Hi!
I installed the Knot 2.6.3 packages from the PPA on Ubuntu 14.04. This
confuses the syslog logging. I am not sure, but I think the problem is
that Knot requires systemd for logging.
The problem is that I do not see any Knot logging on my syslog server,
only in journald. Is there something special in Knot such that the
logging is not forwarded to syslog?
Is it possible to use your Ubuntu packages without systemd logging?
I think it would be better to build the packages on non-systemd distros
(i.e. Ubuntu 14.04) without systemd dependencies.
Thanks
Klaus
Hi!
Knot 2.6.3: When an incoming NOTIFY does not match any ACL, the NOTIFY is
answered with "notauth" although the zone is configured. I would have
expected Knot to respond with "refused" in such a scenario. Is the
notauth intended? From an operational view, a "refused" would ease
debugging.
regards
Klaus
> key
>
> An ordered list of references to TSIG keys. The query must match one of them. Empty value means that TSIG key is not required.
>
> Default: not set
This is not 100% correct. At least with a notify ACL, the behavior is:
an empty value means that TSIG keys are not allowed.
regards
Klaus
Hi everybody,
I have a question related to zone signing. Whenever I reload the Knot
config using knotc reload, it starts to re-sign all DNSSEC-enabled zones.
This sometimes makes the daemon unresponsive to the knotc utility.
root@idunn:# knotc reload
error: failed to control (connection timeout)
Is it a design intent to sign zones while reloading the config? Is it
really needed? It invokes zone transfers, consumes resources, etc.
Thanks for answer
With regards
Ales
Hello everybody,
there is a Knot DNS master name server for my domain that I do not manage myself. I am trying to set up a BIND DNS server as a slave in-house. BIND fails to do the zone transfer and reports:
31-Dec-2017 16:19:02.503 zone whka.de/IN: Transfer started.
31-Dec-2017 16:19:02.504
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
connected using 2001:7c7:20e8:18e::2#53509
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
failed while receiving responses: NOTAUTH
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
Transfer completed: 0 messages, 0 records, 0 bytes, 0.001 secs
If I try dig (this time using the IPv4 address), I get a failure, too.
# dig axfr @141.70.45.160 whka.de.
; <<>> DiG 9.9.5-9+deb8u7-Debian <<>> axfr @141.70.45.160 whka.de.
; (1 server found)
;; global options: +cmd
; Transfer failed.
Wireshark tells me that the reply code of the name server is `1001 Server is not an authority for domain`. What is going on here?
Especially since, if I query the same name server for an ordinary A record, it claims to be authoritative. Moreover, the Knot DNS manual says Knot is an authoritative-only name server, so there should be no way for it to be non-authoritative.
Has anybody already observed something like this?
Best regards, Matthias
--
Evang. Studentenwohnheim Karlsruhe e.V. – Hermann-Ehlers-Kolleg
Matthias Nagel
Willy-Andreas-Allee 1, 76131 Karlsruhe, Germany
Phone: +49-721-96869289, Mobile: +49-151-15998774
E-Mail: matthias.nagel(a)hermann-ehlers-kolleg.de
Dear Knot Resolver users,
Knot Resolver 1.5.1 is released, mainly with bugfixes and cleanups!
Incompatible changes
--------------------
- script supervisor.py was removed, please migrate to a real process manager
- module ketcd was renamed to etcd for consistency
- module kmemcached was renamed to memcached for consistency
Bugfixes
--------
- fix SIGPIPE crashes (#271)
- tests: work around out-of-space for platforms with larger memory pages
- lua: fix mistakes in bindings affecting 1.4.0 and 1.5.0 (and
1.99.1-alpha),
potentially causing problems in dns64 and workarounds modules
- predict module: various fixes (!399)
Improvements
------------
- add priming module to implement RFC 8109, enabled by default (#220)
- add modules helping with system time problems, enabled by default;
for details see documentation of detect_time_skew and detect_time_jump
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.5.1/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v1.5.1/
--Vladimir
Hello guys,
there has been a request in our issue tracker [1] to enable the
IPV6_USE_MIN_MTU socket option [2] for IPv6 UDP sockets in Knot DNS.
This option makes the operating system send responses with a
maximal fragment size of 1280 bytes (the minimal MTU size required by
the IPv6 specification).
The reasoning is based on a draft by Mark Andrews from 2012 [3]. I
wonder if the reasoning is still valid in 2016. And I'm afraid that
enabling this option could enlarge the window for possible DNS cache
poisoning attacks.
We would appreciate any feedback on your operational experience with DNS
on IPv6 related to packet fragmentation.
[1] https://gitlab.labs.nic.cz/labs/knot/issues/467
[2] https://tools.ietf.org/html/rfc3542#section-11.1
[3] https://tools.ietf.org/html/draft-andrews-dnsext-udp-fragmentation-01
Thanks and regards,
Jan
Hi everybody,
Is there a way to change the TTL of all zone records at once using knotc,
i.e. without editing the zone file manually? Something like what the $TTL
directive does in BIND 9 zone files?
If not, I would like to ask for it to be implemented, if possible.
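If no built-in way exists, a naive preprocessing pass over the zone file could serve as a stop-gap (a sketch only: it assumes one record per line with an explicit numeric TTL field; real zone files with parentheses, $TTL, or omitted fields need a proper parser):

```python
import re

def set_all_ttls(zone_text, new_ttl):
    """Rewrite the numeric TTL field of lines shaped like
    '<owner> <ttl> [IN] <type> <rdata>'; everything else (comments,
    directives, non-matching lines) is passed through untouched."""
    out = []
    for line in zone_text.splitlines():
        m = re.match(r"^(\S+\s+)(\d+)(\s+)", line)
        if m:
            out.append(m.group(1) + str(new_ttl) + m.group(3) + line[m.end():])
        else:
            out.append(line)
    return "\n".join(out)
```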
Thanks
Regards
Ales Rygl
Hello,
I ran into a problem with the mod-synthrecord module when using several
networks at once.
Configuration:
mod-synthrecord:
  - id: customers1
    type: forward
    prefix:
    ttl: 300
    network: [ 46.12.0.0/16, 46.13.0.0/16 ]
zone:
  - domain: customers.tmcz.cz
    file: db.customers.tmcz.cz
    module: mod-synthrecord/customers1
With this configuration, Knot only generates records for the last listed
network; for 46.12.0.0/16 it returns NXDOMAIN. It behaves the same with
this form of notation:
mod-synthrecord:
  - id: customers1
    type: forward
    prefix:
    ttl: 300
    network: 46.12.0.0/16
    network: 46.13.0.0/16
The configuration passes validation and Knot does not complain, but the
records are not generated. Knot is
2.6.1-1+0~20171112193256.11+stretch~1.gbp3eaef0.
Thanks for any help or pointers.
Best regards,
Ales Rygl
On 11/20/2017 12:37 PM, Petr Kubeš wrote:
> Is there a simple step-by-step guide available somewhere for setting up
> such a DNS resolver?
Some systems already ship a packaged service, and we also have a PPA with
newer versions: https://www.knot-resolver.cz/download/
I would avoid versions older than 1.3.3.
We do not have a cookbook as such, but kresd works well even without any
configuration: it then listens on all local addresses on UDP and TCP
port 53, with a 100 MB cache in the current directory. The only thing
needed for DNSSEC validation is the file with the root keys, e.g.
"kresd -k root.keys"; if the file does not exist, it is initialized over
HTTPS. The various options are described in the documentation:
http://knot-resolver.readthedocs.io/en/stable/daemon.html
V. Čunát
Hello, I would like to ask for advice.
We run a small network and currently use an external DNS provider (UPC).
We would like to run our own DNS on the border node, specifically Knot,
in a configuration where it would primarily act as a DNS resolver and
could later also host our own zones.
Is there a step-by-step guide available anywhere describing exactly what
to configure, so that we can get such a Knot instance running as a
resolver in a few steps?
Perhaps it is just bad timing, but unfortunately I have not managed to
configure Knot DNS from the available manuals and guides so that it
answers queries and synchronizes DNS zones.
Thank you very much for any advice
P.Kubeš
Hello,
I would like to ask for advice. I am experimenting with Knot DNS, version 2.6.0-3+0~20171019083827.9+stretch~1.gbpe9bd69, on Debian Stretch.
I have DNSSEC deployed with a KSK and a ZSK using algorithm 5 and BIND 9, keys without metadata. I am trying to migrate to Knot with two test zones, using the following procedure:
1. Import the existing keys with keymgr.
2. Set the timestamps:
keymgr t-sound.cz set 18484 created=+0 publish=+0 active=+0
keymgr t-sound.cz set 04545 created=+0 publish=+0 active=+0
3. Add the zone to Knot. The lifetime is extremely short so that I can see how it works.
zone:
  - domain: t-sound.cz
    template: signed
    file: db.t-sound.cz
    dnssec-signing: on
    dnssec-policy: migration
  - domain: mych5.cz
    template: signed
    file: db.mych5.cz
    dnssec-signing: on
    dnssec-policy: migration
    acl: [allowed_transfer]
    notify: idunn-freya-gts
policy:
  - id: migration
    algorithm: RSASHA1
    ksk-size: 2048
    zsk-size: 1024
    zsk-lifetime: 20m
    ksk-lifetime: 10d
    propagation-delay: 5m
This works, and Knot starts signing with the imported keys. I then change the policy for t-sound.cz to:
policy:
  - id: migration3
    algorithm: ecdsap256sha256
    zsk-lifetime: 20m
    ksk-lifetime: 10d
    propagation-delay: 5m
    ksk-submission: nic.cz
Knot generates new keys:
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, algorithm rollover started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 18484, algorithm 5, KSK yes, ZSK no, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 5821, algorithm 5, KSK no, ZSK yes, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public no, ready no, active no
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39697, algorithm 13, KSK no, ZSK yes, public no, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-10T16:45:09
The ZSK rollover mechanism starts and the CDNSKEY is published. Submission succeeds. The resulting state is that the zone works:
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-12T23:03:27
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] zone file updated, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] notify, outgoing, 93.153.117.50@53: serial 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: finished, 0.00 seconds, 1 messages, 780 bytes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: finished, 0.00 seconds, 1 messages, 780 bytes
The ZSKs rotate. But then the following error occurs:
Nov 12 23:03:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 23:03:27 idunn knotd[24980]: warning: [t-sound.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] zone event 'DNSSEC resign' failed (unknown error -28)
The key state at that moment:
root@idunn:/var/lib/knot# keymgr t-sound.cz list human
c87e00bd71d0f89ea540ef9c21020df1e0106c0f ksk=yes tag=04256 algorithm=13 public-only=no created=-2D16h24m21s pre-active=-2D16h24m21s publish=-2D16h19m21s ready=-2D16h14m21s active=-1D18h14m21s retire-active=0 retire=0 post-active=0 remove=0
fe9f432bfc5d527dc11520615d6e29e5d1799d8c ksk=no tag=22255 algorithm=13 public-only=no created=-10h26m3s pre-active=0 publish=-10h26m3s ready=0 active=-10h21m3s retire-active=0 retire=0 post-active=0 remove=0
root@idunn:/var/lib/knot#
However, knotc zone-sign t-sound.cz succeeds and fixes everything:
Nov 13 08:56:41 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-status'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-sign'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, dropping previous signatures, resigning zone
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, ZSK rollover started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 24386, algorithm 13, KSK no, ZSK yes, public yes, ready no, active no
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-13T09:11:23
A day earlier, Knot had crashed completely at the same point:
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:09 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:09 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:09 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Nov 11 23:05:10 idunn knotd[23933]: info: Knot DNS 2.6.0 starting
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface 0.0.0.0@553
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface ::@553
Nov 11 23:05:10 idunn knotd[23933]: info: changing GID to 121
Nov 11 23:05:10 idunn knotd[23933]: info: changing UID to 114
Nov 11 23:05:10 idunn knotd[23933]: info: loading 2 zones
Nov 11 23:05:10 idunn knotd[23933]: info: [mych5.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: starting server
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:10 idunn knotd[23933]: warning: [mych5.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] zone event 'load' failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:10 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:10 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Am I making a mistake somewhere? Sorry for the long and complicated description.
Thanks
Best regards
Ales Rygl
Dear Knot Resolver users,
Knot Resolver 1.99.1-alpha has been released!
This is an experimental release meant for testing aggressive caching.
It contains some regressions and might (theoretically) even be vulnerable.
The current focus is to minimize queries into the root zone.
Improvements
------------
- negative answers from validated NSEC (NXDOMAIN, NODATA)
- verbose log is very chatty around cache operations (maybe too much)
Regressions
-----------
- dropped support for alternative cache backends
and for some specific cache operations
- caching doesn't yet work for various cases:
* negative answers without NSEC (i.e. with NSEC3 or insecure)
* +cd queries (needs other internal changes)
* positive wildcard answers
- spurious SERVFAIL on specific combinations of cached records, printing:
<= bad keys, broken trust chain
- make check: a few Deckard tests are broken, probably due to some of the
problems above (plus possibly unknown ones)
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.99.1-alpha/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz…
Documentation (not updated):
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hello,
one more question:
What is the proper way of autostarting Knot Resolver 1.4.0 on systemd (Debian Stretch in my case) so that it can listen on interfaces other than localhost?
As per the Debian README I've set up the socket override.
# systemctl edit kresd.socket:
[Socket]
ListenStream=<my.lan.ip>:53
ListenDatagram=<my.lan.ip>:53
However after reboot the service doesn't autostart.
# systemctl status kresd.service
kresd.socket - Knot DNS Resolver network listeners
Loaded: loaded (/lib/systemd/system/kresd.socket; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kresd.socket.d
└─override.conf
Active: failed (Result: resources)
Docs: man:kresd(8)
Listen: [::1]:53 (Stream)
[::1]:53 (Datagram)
127.0.0.1:53 (Stream)
127.0.0.1:53 (Datagram)
<my.lan.ip>:53 (Stream)
<my.lan.ip>:53 (Datagram)
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Failed to listen on sockets: Cannot assign requested address
Oct 01 23:17:12 <myhostname> systemd[1]: Failed to listen on Knot DNS Resolver network listeners.
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Unit entered failed state.
To get it running I have to type in manually:
# systemctl start kresd.socket
I apologize; I am new to systemd and its socket activation, so it is not clear to me whether the service, the socket, or both have to be set up to autostart.
Could anyone clarify this?
Also, there is this in the log (again, Debian default):
Oct 01 23:18:22 <myhostname> kresd[639]: [ ta ] keyfile '/usr/share/dns/root.key': not writeable, starting in unmanaged mode
The file has permissions 644 for root:root. Should this be owned by knot, or writeable by others?
Thanks!
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.
Hello all,
I have come across an incompatibility between /usr/lib/knot/get_kaspdb and knot in relation to IPv6. Knot expects unquoted IP addresses; however, the get_kaspdb tool uses the Python yaml library, which requires valid YAML and therefore needs IPv6 addresses to be quoted strings, as they contain ':'. This incompatibility causes issues when updating knot, as the Ubuntu packages from 'http://ppa.launchpad.net/cz.nic-labs/knot-dns/ubuntu' call get_kaspdb during installation. The gist below shows how to recreate the issue. Please let me know if you need any further information.
https://gist.github.com/b4ldr/bd549b4cf63a7d564299497be3ef868d
Thanks,
John
Dear Knot Resolver users,
Knot Resolver 1.4.0 has been released!
Incompatible changes
--------------------
- lua: query flag-sets are no longer represented as plain integers.
kres.query.* no longer works, and kr_query_t lost trivial methods
'hasflag' and 'resolved'.
You can instead write code like qry.flags.NO_0X20 = true.
Bugfixes
--------
- fix exiting one of multiple forks (#150)
- cache: change the way of using LMDB transactions. That in particular
fixes some cases of using too much space with multiple kresd forks (#240).
Improvements
------------
- policy.suffix: update the aho-corasick code (#200)
- root hints are now loaded from a zonefile; exposed as hints.root_file().
You can override the path by defining ROOTHINTS during compilation.
- policy.FORWARD: work around resolvers adding unsigned NS records (#248)
- reduce unneeded records previously put into authority in wildcarded answers
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.4.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hi,
I'm getting an HTTP 500 error from https://gitlab.labs.nic.cz/ that says
"Whoops, something went wrong on our end". Can someone take a look at it
and fix it?
Thanks!
--
Robert Edmonds
edmonds(a)debian.org
Hi,
we have a DNSSEC-enabled zone for which Knot serves RRSIGs with an
expiration date in the past (expired on Sept 13th), signed by a no longer
active ZSK. The correct RRSIGs (up to date and signed with the current ZSK)
are served as well, so the zone still works.
Is there a way to purge these outdated RRSIGs from the database?
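In case it is useful: this is roughly how I spotted the stale signatures. A small sketch that checks an RRSIG expiration field (standard presentation format, YYYYMMDDHHMMSS, UTC) against the current time; parsing the dig output itself is left out:

```python
from datetime import datetime, timezone

def rrsig_expired(expiration, now=None):
    """True if an RRSIG expiration field (YYYYMMDDHHMMSS, UTC) is in the past."""
    exp = datetime.strptime(expiration, "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)
    return exp < (now or datetime.now(timezone.utc))

# The signatures in question expired on Sept 13th:
print(rrsig_expired("20170913000000"))
```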
Regards
André
Hi,
I may have missed something. I created the KASP directory and made knot
its owner. In the KASP directory (/var/lib/knot/kasp) I ran these commands:
keymgr init
keymgr zone add domena.cz policy none
keymgr zone key generate domena.cz algorithm rsasha256 size 2048 ksk
Cannot retrieve policy from KASP (not found).
Did I miss something?
Thanks and best regards
J.Karliak
--
Bc. Josef Karliak
Network and E-mail Administration
Fakultní nemocnice Hradec Králové
Department of Computing Systems
Sokolská 581, 500 05 Hradec Králové
Tel.: +420 495 833 931, Mob.: +420 724 235 654
e-mail: josef.karliak(a)fnhk.cz, http://www.fnhk.cz
Hello,
this might be a rather stupid question.
I have a fresh install of Debian Stretch with all updates and Knot Resolver 1.4.0 installed from the CZ.NIC repositories. I have set up a rather simple configuration allowing our users to use the resolver, and everything works fine (systemd socket override for listening on the LAN). I have, however, noticed that kresd logs every single query into /var/log/syslog, generating approx. 1 MB/min worth of logs on our server. I have looked into the documentation and have not found any directive to control the logging behavior. Is there something I might be missing? I would preferably like to see only warnings in the log.
Here's my config:
# cat /etc/knot-resolver/kresd.conf
net = { '127.0.0.1', '::1', '<my.lan.ip>' }
user('knot-resolver','knot-resolver')
cache.size = 4 * GB
Thanks for the help.
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.
I've written a Knot module, which is functioning well. I've been asked
to add functionality to it that would inhibit any response from knot,
based upon the client's identity. I know the identity; I just need to
figure out how to inhibit a response.
I just noticed the rrl module, and I looked at what it does. I emulated
what I saw and set pkt->size = 0 and returned KNOTD_STATE_DONE.
When I ran host -a, it reported that no servers could be reached. When
I ran dig ANY, I ultimately got the same result, but dig complained
three times about receiving a message that was too short.
I really want NO message to be returned. How do I force this?
Hi Volker,
thank you for your question.
Your suggestion is almost correct, just a little correction:
knotc zone-freeze $ZONE
# wait for possibly still running events (check the logs manually or so...)
knotc zone-flush $ZONE # eventually with '-f' if zone synchronization is
disabled in config
$EDITOR $ZONEFILE # you SHALL increase the SOA serial if any changes
made in zonefile
knotc zone-reload $ZONE
knotc zone-thaw $ZONE
Reload before thaw - because after thaw, some events may start
processing, making the modified zonefile reload problematic.
BR,
Libor
Dne 5.9.2017 v 23:17 Volker Janzen napsal(a):
> Hi,
>
> I've setup knot to handle DNSSEC signing for a couple of zones. I like to update zonefiles on disk with an editor and I want to clarify which steps need to be performed to safely edit the zonefile on disk.
>
> I currently try this:
>
> knotc zone-freeze $ZONE
> knotc zone-flush $ZONE
> $EDITOR $ZONE
> knotc zone-thaw $ZONE
> knotc zone-reload $ZONE
>
> As far as I can see knot increases the serial on reload and slaves will be notified.
>
> Is this the correct command sequence?
>
>
> Regards
> Volker
>
>
> _______________________________________________
> knot-dns-users mailing list
> knot-dns-users(a)lists.nic.cz
> https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-dns-users
Hi,
I've setup knot to handle DNSSEC signing for a couple of zones. I like to update zonefiles on disk with an editor and I want to clarify which steps need to be performed to safely edit the zonefile on disk.
I currently try this:
knotc zone-freeze $ZONE
knotc zone-flush $ZONE
$EDITOR $ZONE
knotc zone-thaw $ZONE
knotc zone-reload $ZONE
As far as I can see knot increases the serial on reload and slaves will be notified.
Is this the correct command sequence?
Regards
Volker
Hi,
Recently, we noticed a few of our Knot slaves repeatedly doing zone transfers. After enabling zone-related logging, these messages helped narrow down the problem:
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] journal: unable to make free space for insert
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] IXFR, incoming, 1.2.3.4@53: failed to write changes to journal (not enough space provided)
These failures apparently caused the transfers to occur over and over. Not all the zones being served showed up in these messages, but I'm pretty sure that the ones with a high rate of change were more likely to do so. I do know there was plenty of disk space. A couple of the tunables looked relevant:
max-journal-db-size: we didn't hit this limit (used ~450M of 20G limit, the default)
max-journal-usage: we might have hit this limit. The default is 100M. I increased it a couple of times, but the problem didn't go away.
Eventually, we simply removed the journal database and restarted the server, and the repeated transfers stopped. At first I suspected that it was somehow losing track of how much space was being allocated, but that's a flimsy theory: I don't really have any hard evidence, and these processes had run at high load for months without trouble. On reflection, hitting the max-journal-usage limit seems more likely. Given that:
1. Are the messages above indeed evidence of hitting the max-journal-usage limit?
2. Is there a way to see the space occupancy of each zone in the journal, so we might tune the threshold for individual zones?
On the odd chance that there is a bug in this area: we are using a slightly older dev variant: a branch off 2.5.0-dev that has some non-standard, minimal EDNS0 client-subnet support we were interested in. The branch is: https://github.com/CZ-NIC/knot/tree/ecs-patch.
Thanks,
Chuck
Hello,
I apologize if I am writing to the wrong address; I was given it when I called your customer line.
I would be interested in a professional installation and configuration of your DNS servers at our company.
We are a company providing internet services, and this would concern internal DNS resolvers for our customers
on our own (probably virtualized) hardware.
It would be a one-off paid installation and initial setup - possibly including some consulting
(unfortunately, I cannot afford your support).
I would therefore need to know whether you provide such services (or whom I could contact), and at least
a rough estimate of the price of the work described above.
Thank you in advance for your time and feedback.
Best regards
Milan Černý
head of information systems
Planet A, a.s.
operator of the AIM network
company offices:
U Hellady 4
140 00 Praha 4 - Michle
www.a1m.cz<http://www.a1m.cz/>
+420 246 089 203 tel
+420 246 089 210 fax
+420 603 293 561 gsm
milan.cerny(a)a1m.cz<mailto:milan.cerny@a1m.cz>
Hello,
I am a bit new to Knot DNS 2.5.3 and to DNSSEC as well.
I successfully migrated from a non-DNSSEC BIND to Knot DNS, and it is working like a charm.
Now I would like to migrate one zone to DNSSEC.
Everything is working *except* getting the keys (ZSK/KSK) published to Gandi.
I followed this tutorial (in French, sorry): https://www.swordarmor.fr/gestion-automatique-de-dnssec-avec-knot.html which seems to work, BUT the way the DNSKEY is published has changed, and I didn't find a nice way to get it.
Any ideas?
Regards,
Xavier
Dear Knot Resolver users,
Knot Resolver 1.3.2 maintenance update has been released:
Security
--------
- fix possible opportunities to use insecure data from cache as keys
for validation
Bugfixes
--------
- daemon: check existence of config file even if rundir isn't specified
- policy.FORWARD and STUB: use RTT tracking to choose servers (#125, #208)
- dns64: fix CNAME problems (#203) It still won't work with policy.STUB.
- hints: better interpretation of hosts-like files (#204)
also, error out if a bad entry is encountered in the file
- dnssec: handle unknown DNSKEY/DS algorithms (#210)
- predict: fix the module, broken since 1.2.0 (#154)
Improvements
------------
- embedded LMDB fallback: update 0.9.18 -> 0.9.21
It is recommended to update from 1.3.x, and it's strongly recommended to
update from older versions, as older branches are no longer supported.
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.2/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.3.2/
--Vladimir
Hello Knot developers,
Suppose I am running two Knot DNS instances. They're listening on
different interfaces, and slaving different sets of zones. If the
"storage" variable is the same for these two, then the two instances of
knotd will both try to write into storage/journal and storage/timers.
Is this safe to do? My understanding of LMDB is that a database can be
shared between different threads and processes because they use locking.
Regards,
Anand
Hello Knot DNS developers,
I have an observation about newer versions of Knot which use the new
single LMDB-based journal.
Suppose I have 3 slave zones configured in my knot.conf. Let's call them
zone1, zone2 and zone3. Knot loads the zones from the master and writes
data into the journal. Now suppose I remove one zone (zone3) from the
config, and reload knot. The zone is no longer configured, and querying
knot for it returns a REFUSED response. So far all is according to my
expectation.
However, if I run "kjournalprint <path-to-journal> -z", I get:
zone1.
zone2.
zone3.
So the zone is no longer configured, but its data persists in the
journal. If I run "knotc -f zone-purge zone3." I get:
error: [zone3.] (no such zone found)
I'm told that I should have done the purge first, *before* removing the
zone from the configuration. However, I find this problematic for two reasons:
1. I have to remember to do this, and I'm not used to this modus
operandi; and
2. this is impossible to do on a slave that is configured automatically
from template files. On our slave servers, where we have around 5000
zones, the zones are configured by templating out a knot.conf. Adding
zones is fine, but if a zone is being deleted, it will just disappear
from knot.conf. We keep no state, and so I don't know which zone is
being removed, and cannot purge it before-hand.
Now, the same is kind of true for Knot < 2.4. But... there is one major
difference. Under older versions of Knot, zone data was written into
individual files, and journals were written into individual .db files. I
can run a job periodically that compares zones in knot.conf with files
on disk, and delete those files that have no matching zones in the
config. This keeps the /var/lib/knot directory clean.
But in newer versions of Knot, there is no way to purge the journal of
zone data once a zone is removed from the configuration. For an operator
like me, this is a problem. I would like "knotc zone-purge" to be able
to operate on zones that are no longer configured, and remove stale data
anyway.
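For illustration, the periodic cleanup job I described boils down to a set difference between the configured zones and the zones the journal knows about. A sketch (obtaining the two lists - parsing knot.conf and running kjournalprint with -z - is left out):

```python
def stale_zones(configured, in_journal):
    """Zones still present in the journal but no longer in knot.conf."""
    norm = lambda z: z.rstrip(".").lower() + "."  # normalize to FQDN form
    return sorted(set(map(norm, in_journal)) - set(map(norm, configured)))

# With the example from above, zone3 was removed from the config:
print(stale_zones(["zone1", "zone2"], ["zone1.", "zone2.", "zone3."]))  # ['zone3.']
```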
Hi Knot folks,
I just tried to view https://www.knot-dns.cz/ and it gave me an HTTP 404
error. After trying to reload it twice, I got the front page, but the
other parts of the site (documentation, download, etc) are all still
giving me HTTP 404 errors.
Regards,
Anand
Dear Knot DNS and Knot Resolver users,
in order to unite the git namespaces and make them more logical, we
have moved the repositories of Knot DNS and Knot Resolver under the
knot namespace.
The new repositories are located at:
Knot DNS
https://gitlab.labs.nic.cz/knot/knot-dns
Knot Resolver:
https://gitlab.labs.nic.cz/knot/knot-resolver
The old git:// URLs remain the same:
git://gitlabs.labs.nic.cz/knot-dns.git
git://gitlabs.labs.nic.cz/knot-resolver.git
Sorry for any inconvenience this might have caused.
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------
Am 09.07.2017 um 12:30 schrieb Christoph Lukas:
> Hello list,
>
> I'm running knot 2.5.2 on FreeBSD.
> In an attempt to resolve a recent semantic error in one of my zonefiles,
> the $storage/$zone.db (/var/db/knot/firc.de.db) file got lost.
> Meaning: I accidentally deleted it without a backup.
> At point of deletion, the .db file was 1.6 MB in size.
> The actual zone file was kept, the journal and DNSSEC keys untouched,
> the zone still functions without any issues.
>
> The zone is configured as such in knot.conf:
> zone:
> - domain: firc.de
> file: "/usr/local/etc/knot/zones/firc.de"
> notify: inwx
> acl: acl_inwx
> dnssec-signing: on
> dnssec-policy: rsa
>
>
> This raises the following questions:
>
> 1) What is actually in those .db files?
> 2) Are these any adverse effects to be expected now that I don't have
> the file / need to re-create it?
> 3) How can I re-create the file?
>
> Any answers will be greatly appreciated.
>
> With kind regards,
> Christoph Lukas
>
As answered in
https://lists.nic.cz/pipermail/knot-dns-users/2017-July/001160.html
those .db files are not required anymore.
I should have read the archive first ;)
With kind regards,
Christoph Lukas
Hi,
I am running Knot 2.5.2-1 on a Debian Jessie, all is good, no worries.
I am very pleased with Knot's simplicity and ease of configuration -
which are still readable as well!
I noticed recently that I am getting
knotd[9957]: notice: [$DOMAIN.] journal, obsolete exists, file '/var/lib/knot/zones/$DOMAIN.db'
every time I restart Knot. I get these for all the domains I have configured,
and there is one in particular providing my own .dyn. service :-) - so I
am a bit reluctant - just to delete it.
But all the .db files have a fairly old timestamp (Feb 2017), roughly the
same for each. At that time (Feb 2017) I was running just one authoritative
master instance, nothing fancy. lsof also doesn't report any open files.
Can I just delete those files?
Cheers
Thomas
Hello knot,
I have recently started a long-overdue migration to Knot 2.* and have noticed that the server.workers config stanza is now split into three separate stanzas (server.tcp-workers, server.udp-workers and server.background-workers). Although this is great for flexibility, it does make automation a little more difficult. With the 1.6 configuration I could easily write something like the following:
workers = $server_cpu_count - 2
This meant I would always have 2 cpu cores available for other processes e.g. doc, tcpdump. With the new configuration I would need to do something like the following
$available_workers = $server_cpu_count - 2
$udp_workers = $available_workers * 0.6
$tcp_workers = $available_workers * 0.3
$background_workers = $available_workers * 0.1
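For what it's worth, here is roughly what that ends up looking like with the rounding handled. The 60/30/10 split and the minimum of one worker per kind are my own guesses at sensible values, not anything Knot documents:

```python
def split_workers(cpu_count, reserved=2, ratios=(0.6, 0.3)):
    """Split the CPUs left after reserving `reserved` cores into
    (udp, tcp, background) workers; each kind gets at least one."""
    available = max(cpu_count - reserved, 3)  # need one worker of each kind
    udp = max(int(available * ratios[0]), 1)
    tcp = max(int(available * ratios[1]), 1)
    background = max(available - udp - tcp, 1)
    return udp, tcp, background

print(split_workers(8))  # (3, 1, 2)
```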
The above code lacks error detection and rounding corrections, which adds further complexity, and it also lacks the intelligence available in Knot to better balance resources. As you have already implemented logic in Knot to ensure CPUs are correctly balanced, I wonder if you could add back a workers configuration option to act as an upper bound for the *-workers settings, such that the *-workers defaults become:
"Default: auto-estimated optimal value based on the number of online CPUs, or the value set by `workers`, whichever is lower"
Thanks
John
Hi,
I just upgraded my Knot DNS to the newest PPA release 2.5.1-3, after
which the server process refuses to start. Relevant syslog messages:
Jun 15 11:19:41 vertigo knotd[745]: error: module, invalid directory
'/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: 2017-06-15T11:19:41 error: module,
invalid directory '/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: critical: failed to open
configuration database '' (invalid parameter)
Jun 15 11:19:41 vertigo knotd[745]: 2017-06-15T11:19:41 critical: failed
to open configuration database '' (invalid parameter)
Could this have something to do with the following change:
knot (2.5.1-3) unstable; urgency=medium
.
* Enable dnstap module and set default moduledir to multiarch path
Antti
Hi there,
I'm having some issues configuring dnstap. I'm using Knot version 2.5.1,
installed via the `knot` package on Debian 3.16.43-2. As per this
documentation
<https://www.knot-dns.cz/docs/2.5/html/modules.html#dnstap-dnstap-traffic-lo…>,
I've added the following lines to my config file:
mod-dnstap:
- id: capture_all
sink: "/etc/knot/capture"
template:
- id: default
global-module: mod-dnstap/capture_all
But when starting knot (e.g. by `sudo knotc conf-begin`), I get the message:
error: config, file 'etc/knot/knot.conf', line 20, item 'mod-dnstap', value
'' (invalid item)
error: failed to load configuration file '/etc/knot/knot.conf' (invalid
item)
I also have the same setup on an Ubuntu 16.04.1 running Knot version
2.4.0-dev, and it works fine.
Any idea what might be causing the issue here? Did the syntax for
mod-dnstap change? Should I have installed from source? I remember
there was some special option a dependency needed to be compiled with
to use dnstap the first time I did this, but I couldn't find any
documentation for it when I looked.
Thanks!
-Sarah
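On the compile-time question: dnstap support in Knot is optional at build time and requires the fstrm and protobuf-c libraries. A rough sketch of a source build follows; the exact configure flag and package names are stated from memory and may differ for 2.5, so check `./configure --help` in your source tree:

```shell
# Assumption: building Knot DNS from an unpacked source tree on Debian.
# dnstap needs libfstrm and libprotobuf-c present at configure time.
sudo apt-get install libfstrm-dev libprotobuf-c-dev
./configure --enable-dnstap    # flag name per current docs; verify for 2.5
make && sudo make install
```

On Debian, the prebuilt module may instead ship in a separate package, which would avoid building from source entirely.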
Hi,
after upgrade to 2.5.1 the output of knotc zone-status shows strange
timestamps for refresh and expire:
[example.net.] role: slave | serial: 1497359235 | transaction: none |
freeze: no | refresh: in 415936h7m15s | update: not scheduled |
expiration: in 416101h7m15s | journal flush: not scheduled | notify: not
scheduled | DNSSEC resign: not scheduled | NSEC3 resalt: not scheduled |
parent DS query: not scheduled
However, the zone is refreshed within the correct interval, so it seems
it's just a display issue. Is this something specific to our setup?
Regards
André
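For what it's worth, the magnitude of those offsets is a clue: 415936 hours is roughly the span from the Unix epoch (1970) to mid-2017, which would be consistent with a zero/uninitialized timestamp being formatted as a relative time. This is only an observation about the numbers, not a claim about Knot's internals:

```python
# Express the reported offset of 415936 hours in years to see
# which epoch it points back to.
hours = 415936
years = hours / 24 / 365.25
print(round(years, 1))      # -> 47.4 years
print(round(1970 + years))  # -> 2017, matching the date of this report
```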
Dear Knot Resolver users,
CZ.NIC is proud to announce the release of Knot Resolver 1.3.0.
The biggest feature of this release is support for DNSSEC validation
in forwarding mode, a feature many people were eagerly awaiting.
We have also squeezed in a refactoring of AD flag handling and several
other bugfixes. 1.3.0 is currently the recommended release to run on
your recursive nameservers.
Here's the 1.3.0 changelog:
Security
--------
- Refactor handling of AD flag and security status of resource records.
In some cases it was possible for secure domains to get cached as
insecure, even for a TLD, leading to disabled validation.
It also fixes answering with non-authoritative data about nameservers.
Improvements
------------
- major feature: support for forwarding with validation (#112).
The old policy.FORWARD action now does that; the previous non-validating
mode is still available as policy.STUB, except that it also uses caching (#122).
- command line: specify ports via @ but still support # for compatibility
- policy: recognize 100.64.0.0/10 as local addresses
- layer/iterate: *do* retry repeatedly if REFUSED, as we can't yet easily
retry with other NSs while avoiding retrying with those who REFUSED
- modules: allow changing the directory where modules are found,
and do not search the default library path anymore.
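The forwarding change can be illustrated with a minimal kresd configuration snippet (Lua); the forwarder address below is a placeholder, not a recommendation:

```lua
-- Knot Resolver 1.3 policy sketch; 192.0.2.1 is a placeholder forwarder.
-- FORWARD now performs DNSSEC validation; STUB keeps the old
-- non-validating behavior (and also caches).
policy.add(policy.all(policy.FORWARD('192.0.2.1')))
-- policy.add(policy.all(policy.STUB('192.0.2.1')))
```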
Bugfixes
--------
- validate: fix insufficient caching for some cases (relatively rare)
- avoid putting "duplicate" record-sets into the answer (#198)
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.0.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/latest/
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------
Hi,
we updated knot from 2.4.3 to 2.5.1 and the include statement does not
seem to work anymore:
error: config, file '/etc/knot/zones.conf', line 5, item 'domain', value
'example.net' (duplicate identifier)
error: config, file '/etc/knot/knot.conf', line 73, include ''
(duplicate identifier)
error: failed to load configuration file '/etc/knot/knot.conf'
(duplicate identifier)
cat > /etc/knot/knot.conf << 'EOF'
# THIS CONFIGURATION IS MANAGED BY PUPPET
# see man 5 knot.conf for all available configuration options
server:
user: knot:knot
listen: ["0.0.0.0@53", "::@53"]
version:
log:
- target: syslog
any: info
key:
- id: default
algorithm: hmac-sha512
secret:
pLEG3Z6uvMtKiQsmOp4tMDyyxENLyJGx8kIbud24tfHdY0uRO82Qix8D2opoA/rndcd2fdt9Ba1LhHDefCK1VQ==
remote:
- id: ns1
address: ["xxxx1", "yyyy1"]
key: default
- id: ns2
address: ["xxxx2", "yyyy2"]
key: default
- id: ns3
address: ["xxxx3", "yyyy3"]
key: default
acl:
- id: notify_from_master
action: notify
address: ["xxxx1", "yyyy1"]
key: default
- id: transfer_to_slaves
action: transfer
address: ["xxxx2", "xxxx2", "xxxx3", "yyyy3"]
key: default
policy:
- id: default_rsa
algorithm: RSASHA256
ksk-size: 2048
zsk-size: 1024
template:
- id: default
file: /var/lib/knot/zones/%s.zone
kasp-db: /var/lib/knot/kasp
storage: /var/lib/knot
- id: master_default
acl: ["transfer_to_slaves"]
file: /var/lib/knot/zones/%s.zone
ixfr-from-differences: on
notify: ["ns2", "ns3"]
serial-policy: unixtime
storage: /var/lib/knot
- id: master_dnssec
acl: ["transfer_to_slaves"]
dnssec-policy: default_rsa
dnssec-signing: on
file: /var/lib/knot/zones/%s.zone
notify: ["ns2", "ns3"]
storage: /var/lib/knot
zonefile-sync: -1
- id: slave
acl: ["notify_from_master"]
master: ns1
serial-policy: unixtime
storage: /var/lib/knot
include: "/etc/knot/zones.conf"
EOF
cat > /etc/knot/zones.conf << 'EOF'
# THIS CONFIGURATION IS MANAGED BY PUPPET
# see man 5 knot.conf for all available configuration options
zone:
- domain: example.net
template: slave
- domain: example.com
template: slave
- domain: example.org
template: slave
EOF
If I add the content from zones.conf directly into knot.conf, it works.
It seems like the included file gets parsed twice: when I add a domain
twice, it fails at the line with the duplicate zone, and if there are no
duplicate domains in the file, it always fails at the first domain found.
Is this a bug or something with our setup?
Regards
André
Dear Knot DNS users,
CZ.NIC has released Knot DNS 2.5.1 that fixes issues that some users might
experience when upgrading existing DNSSEC enabled installations of Knot DNS.
Knot DNS 2.5.1 (2017-06-07)
===========================
Bugfixes:
---------
- pykeymgr no longer crashes on empty JSON files in the KASP DB directory
- pykeymgr no longer imports keys in the "removed" state
- Imported keys in the "removed" state no longer make knotd crash
- Including an empty configuration directory no longer makes knotd crash
- pykeymgr is now included in the distribution tarball and installed
Thank you for using Knot DNS. Feel free to write to us, file an issue, or
just say thank you if you are happy with Knot DNS.
Full changelog:
https://gitlab.labs.nic.cz/labs/knot/raw/v2.5.1/NEWS
Sources:
https://secure.nic.cz/files/knot-dns/knot-2.5.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-dns/knot-2.5.1.tar.xz.asc
Documentation:
https://www.knot-dns.cz/docs/2.x/html/
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------