Dear Knot DNS and Knot Resolver users,
in order to unite the git namespaces and make them more logical, we
have moved the repositories of Knot DNS and Knot Resolver under the
knot namespace.
The new repositories are located at:
Knot DNS:
https://gitlab.labs.nic.cz/knot/knot-dns
Knot Resolver:
https://gitlab.labs.nic.cz/knot/knot-resolver
The old git:// URLs were kept the same:
git://gitlab.labs.nic.cz/knot-dns.git
git://gitlab.labs.nic.cz/knot-resolver.git
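If you have an existing clone, something like this should repoint it
(shown for Knot DNS; use the knot-resolver URL for the other repository):

    # Repoint an existing clone at the new repository location.
    git remote set-url origin https://gitlab.labs.nic.cz/knot/knot-dns.git
    git remote -v   # verify the change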
Sorry for any inconvenience this move might have caused.
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------
On 09.07.2017 at 12:30, Christoph Lukas wrote:
> Hello list,
>
> I'm running knot 2.5.2 on FreeBSD.
> In an attempt to resolve a recent semantic error in one of my zonefiles,
> the $storage/$zone.db (/var/db/knot/firc.de.db) file got lost.
> Meaning: I accidentally deleted it without a backup.
> At point of deletion, the .db file was 1.6 MB in size.
> The actual zone file was kept, the journal and DNSSEC keys are untouched,
> and the zone still functions without any issues.
>
> The zone is configured as follows in knot.conf:
> zone:
>   - domain: firc.de
>     file: "/usr/local/etc/knot/zones/firc.de"
>     notify: inwx
>     acl: acl_inwx
>     dnssec-signing: on
>     dnssec-policy: rsa
>
>
> This raises the following questions:
>
> 1) What is actually in those .db files?
> 2) Are there any adverse effects to be expected now that I don't have
> the file / need to re-create it?
> 3) How can I re-create the file?
>
> Any answers will be greatly appreciated.
>
> With kind regards,
> Christoph Lukas
>
As answered in
https://lists.nic.cz/pipermail/knot-dns-users/2017-July/001160.html
those .db files are not required anymore.
I should have read the archive first ;)
With kind regards,
Christoph Lukas
Hi,
I am running Knot 2.5.2-1 on Debian Jessie; all is good, no worries.
I am very pleased with Knot's simplicity and ease of configuration;
the config files are still readable as well!
I noticed recently that I am getting
knotd[9957]: notice: [$DOMAIN.] journal, obsolete exists, file '/var/lib/knot/zones/$DOMAIN.db'
every time I restart Knot. I get these for all the domains I have configured,
and one of them in particular provides my own .dyn. service :-) so I am a bit
reluctant to just delete it.
But all the .db files have a fairly old timestamp (Feb 2017), all about the
same. At that time I was running just one authoritative master instance,
nothing fancy. lsof also doesn't report any open files.
Can I just delete those files?
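If it helps, my cautious plan (just a sketch, using the paths from the
notice above) would be to move them aside first rather than delete them
outright:

    # Move the obsolete .db files aside so they can be restored
    # if anything breaks after a restart.
    mkdir -p /var/lib/knot/zones.bak
    mv /var/lib/knot/zones/*.db /var/lib/knot/zones.bak/
    # restart knotd, watch the logs, and drop the backup once satisfied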
Cheers
Thomas
Hello knot,
I have recently started a long-overdue migration to Knot 2.* and I have noticed that the server.workers config stanza is now split into three separate stanzas [server.tcp-workers, server.udp-workers & server.background-workers]. Although this is great for flexibility, it does make automation a little bit more difficult. With the 1.6 configuration I could easily say something like the following:
workers = $server_cpu_count - 2
This meant I would always have 2 CPU cores available for other processes, e.g. doc, tcpdump. With the new configuration I would need to do something like the following:
$available_workers = $server_cpu_count - 2
$udp_workers = $available_workers * 0.6
$tcp_workers = $available_workers * 0.3
$background_workers = $available_workers * 0.1
The above code lacks error detection and rounding corrections, which adds further complexity, and it may also lack intelligence that is already available inside Knot to better balance resources. As you have already implemented logic in Knot to ensure CPUs are correctly balanced, I wonder if you could add back a workers configuration to act as the upper bound used by the *-workers configurations, such that the *-workers defaults become:
"Default: auto-estimated optimal value based on the number of online CPUs, or the value set by `workers`, whichever is lower"
Thanks
John
Hi,
I just upgraded my Knot DNS to the newest PPA release 2.5.1-3, after
which the server process refuses to start. Relevant syslog messages:
Jun 15 11:19:41 vertigo knotd[745]: error: module, invalid directory
'/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: critical: failed to open
configuration database '' (invalid parameter)
Could this have something to do with the following change:
knot (2.5.1-3) unstable; urgency=medium
.
* Enable dnstap module and set default moduledir to multiarch path
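For what it's worth, a quick sanity check (my own suggestion, using the
path from the error message) would be:

    # Does the new multiarch module directory actually exist,
    # and did the package install anything there?
    ls -l /usr/lib/x86_64-linux-gnu/knot
    dpkg -L knot | grep -F x86_64-linux-gnu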
Antti
Hi there,
I'm having some issues configuring dnstap. I'm using Knot version 2.5.1,
installed via the `knot` package on Debian (kernel 3.16.43-2). As per this
documentation
<https://www.knot-dns.cz/docs/2.5/html/modules.html#dnstap-dnstap-traffic-lo…>,
I've added the following lines to my config file:
mod-dnstap:
  - id: capture_all
    sink: "/etc/knot/capture"

template:
  - id: default
    global-module: mod-dnstap/capture_all
But when starting Knot (e.g. via `sudo knotc conf-begin`), I get the message:
error: config, file 'etc/knot/knot.conf', line 20, item 'mod-dnstap', value '' (invalid item)
error: failed to load configuration file '/etc/knot/knot.conf' (invalid item)
I also have the same setup on an Ubuntu 16.04.1 running Knot version
2.4.0-dev, and it works fine.
Any idea what might be causing the issue here? Did the syntax for
mod-dnstap change or something? Should I have installed from source? I do
remember there being some special option you needed to compile a dependency
with to use dnstap when I did this the first time, but I couldn't find
documentation for it when I looked for it.
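For the record, the build-time step I half-remember looked something like
this (a sketch, assuming the autotools flow; the `--with-module-dnstap`
switch and the fstrm/protobuf-c prerequisites are my recollection, not
verified against the 2.5 docs):

    # Sketch: build Knot from source with the dnstap module compiled in.
    # The configure switch is assumed from memory; libfstrm and
    # protobuf-c development packages are the usual prerequisites.
    ./configure --with-module-dnstap=yes
    make
    sudo make install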
Thanks!
-Sarah
Hi,
after upgrading to 2.5.1, the output of knotc zone-status shows strange
timestamps for refresh and expire:
[example.net.] role: slave | serial: 1497359235 | transaction: none |
freeze: no | refresh: in 415936h7m15s | update: not scheduled |
expiration: in 416101h7m15s | journal flush: not scheduled | notify: not
scheduled | DNSSEC resign: not scheduled | NSEC3 resalt: not scheduled |
parent DS query: not scheduled
However, the zone is refreshed within the correct interval, so it seems it's
just a display issue (415936 hours is roughly 47 years, which suggests an
absolute timestamp is being rendered as a relative one). Is this something
specific to our setup?
Regards
André