Hi,
maybe this has already been discussed here.
Do you plan to push the new 1.3.1 version to the FreeBSD ports as
well? Currently version 1.3.0 rc1 is there.
Of course it is possible to compile it by hand, but I would prefer
the ports if you plan to keep maintaining them :)
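For context, a rough sketch of the two options I mean - the ports
origin dns/knot is my assumption, and the by-hand steps are just the
usual autotools routine:

  # install from the FreeBSD ports tree (origin assumed: dns/knot)
  cd /usr/ports/dns/knot && make install clean

  # or build by hand from the release tarball
  ./configure && make && make install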
--
Robert
I was just giving kdig and khost a spin when I noticed some very long output for a simple query with khost. It looks like the alias is expanded multiple times:
erwin@panda:/home/erwin % khost www.droso.dk
www.droso.dk. is an alias for koala.droso.dk.
koala.droso.dk. has IPv4 address 213.239.220.246
www.droso.dk. is an alias for koala.droso.dk.
koala.droso.dk. has IPv6 address 2a01:4f8:a0:7163::2
www.droso.dk. is an alias for koala.droso.dk.
erwin@panda:/home/erwin % host www.droso.dk
www.droso.dk is an alias for koala.droso.dk.
koala.droso.dk has address 213.239.220.246
koala.droso.dk has IPv6 address 2a01:4f8:a0:7163::2
I would say once is enough :-)
Cheers,
Erwin
--
Med venlig hilsen/Best Regards
Erwin Lansing
Network and System Administrator
DK Hostmaster A/S
Kalvebod Brygge 45, 3. sal
1560 København V
Tlf. 33 64 60 60
Fax.: 33 64 60 66
Email: erwin(a)dk-hostmaster.dk
Homepage: http://www.dk-hostmaster.dk
.dk Danmarks plads på Internettet
Hello Knot developers,
I'm trying out Knot 1.3.0 final, and testing the new options for
system.identity, system.version and system.nsid.
At first, I did this:
system {
identity yes;
version yes;
nsid yes;
}
The alert reader will note that I didn't use "on" but accidentally
used "yes", so Knot parsed them all as strings and gave me unexpected
but technically correct results.
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt id.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15951
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID: 79 65 73 (y) (e) (s)
;; QUESTION SECTION:
;id.server. CH TXT
;; ANSWER SECTION:
id.server. 0 CH TXT "yes"
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt version.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56914
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID: 79 65 73 (y) (e) (s)
;; QUESTION SECTION:
;version.server. CH TXT
;; ANSWER SECTION:
version.server. 0 CH TXT "yes"
Note that the NSID value is also "yes".
So I realised my mistake, changed the values from "yes" to "on", and
HUPped the server. Now I get:
;; Warning: Message parser reports malformed message packet.
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt id.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27835
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: Message has 7 extra bytes at end
;; QUESTION SECTION:
;id.server. CH TXT
;; ANSWER SECTION:
id.server. 0 CH TXT "admin.authdns.ripe.net"
;; Warning: Message parser reports malformed message packet.
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt version.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60856
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: Message has 7 extra bytes at end
;; QUESTION SECTION:
;version.server. CH TXT
;; ANSWER SECTION:
version.server. 0 CH TXT "Knot DNS 1.3.0"
Note the warnings from dig about the extra bytes at the end. It seems
that if you change the NSID value and reconfigure the running server,
it does not pick up the new value correctly. Stopping Knot completely
and starting it again fixes it, so there appears to be a bug in the
reconfiguration path.
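For completeness, a minimal sketch of how I reproduced it - the config
fragment is the one above with "yes" corrected to "on", and the reload
step is simply a SIGHUP to the running daemon (PID lookup via pidof is
just for illustration):

  system {
    identity on;
    version on;
    nsid on;
  }

  # reconfigure the running server in place
  kill -HUP $(pidof knotd)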
Hi Everyone,
as promised last week, I am proud to announce the 1.3.0 final is out!
It's been a long release cycle since the last final release, but it
brought lots and lots of bugfixes and a slew of new features.
Let me briefly reiterate what's new since 1.2.0 - one of the most
visible features is the new zone file parser, which eliminated the
whole zone compilation process and sped up both server startup and
zone preparation.
There's also a magical configure option --enable-fastparser which
makes it even faster (about 2x), very close to loading a binary zone.
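For reference, enabling it is just a matter of the usual build steps
(a sketch, assuming the standard autotools flow):

  ./configure --enable-fastparser
  make && make install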
We also bring our own alternatives to the DNS utilities dig, host
and nsupdate (kdig, khost and knsupdate), which aim to be compatible
with their ISC counterparts but also add some nice enhancements, such
as prettier comments and output in kdig.
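For example, the usual dig/host-style invocations are meant to carry
over more or less unchanged (a sketch only - double-check the man
pages for the exact options supported):

  kdig www.example.com AAAA @ns.example.com
  khost www.example.com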
The changes to the configuration are no smaller: features like groups
of remotes, includes in the config file, UNIX sockets for remote
control, new knotc commands and a general overhaul of the build
scripts make life nicer for package maintainers and users.
There was also a major refactoring effort under the bonnet (with more
to come), which shows in lower memory consumption, better
maintainability and a trimmer code base. For much more, check our web
pages or have a look at the NEWS file for an exhaustive list of
changes and bugfixes.
Back on the ground, we fixed several bugs since rc5 last week: namely
answering from names at or below insecure delegation points, new
defaults for the special CH TXT zones, randomly disconnected transfers
and secondary groups not being initialized when dropping privileges.
The bootstrap retry timer is also now progressive.
Many thanks to Anand Buddhdev, Jonathan Hoppe, Johan Ihren, Erwin
Lansing and many others who have sent constructive reports, ideas,
encouragement and actual code (how cool is that?).
As always, you can find the full changelog at:
https://gitlab.labs.nic.cz/knot/blob/v1.3.0/NEWS
Sources:
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.gz
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.bz2
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.gz.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.bz2.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.xz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Cheers,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz  http://www.nic.cz
Hello Knot developers,
I'm testing 1.3.0-rc4, and have found something that looks like a bug.
I'm running knot using the CentOS upstart supervisor, and in the upstart
script, I have:
pre-stop exec knotc -c $CONF -w stop
This means that when I run "initctl stop knot", upstart will run "knotc
-c /etc/knot/knot.conf -w stop". The "-w" is supposed to make knotc wait
until the server has stopped.
However, in reality this is not happening. When the stop command is
given, Knot logs this:
2013-07-17T22:48:23 Stopping server...
2013-07-17T22:48:23 Server finished.
2013-07-17T22:48:23 Shut down.
And knotc returns *immediately*. However, if I examine the process
table, I see the knotd process still running. It takes knotd about 10
more seconds to actually exit, at 22:48:33. This is problematic for
upstart. Since knotc has returned but the knotd process hasn't yet
died, upstart thinks that knotd has not responded to the stop request,
and so it uses the sledgehammer (kill -9) to stop the knotd process.
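A quick way to see the race by hand (paths as in my config above, the
10 seconds is from the timestamps in my logs):

  knotc -c /etc/knot/knot.conf -w stop && echo "knotc returned"
  ps -C knotd    # knotd is still listed for roughly 10 more seconds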
My assumption is that the knotd process is still doing housekeeping
stuff, so the KILL signal is not a good idea. By the looks of it, the
"-w" flag to knotc isn't doing what it's supposed to, ie. wait for the
server to exit. Could you please investigate this and fix it?
(As an aside, I can work around this in upstart by using the option
"kill timeout 60" which will make upstart wait at least 60 seconds
before trying a KILL signal, by which time knotd should have exited. But
this is just a work-around, not a solution).
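For the record, the relevant part of the upstart job then looks
roughly like this (a sketch - the file location /etc/init/knot.conf is
my assumption):

  # /etc/init/knot.conf (location assumed)
  pre-stop exec knotc -c $CONF -w stop
  kill timeout 60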
Regards,
Anand Buddhdev
RIPE NCC
Hello,
it seems that knotd suffers from the same issue as described here:
http://lists.scusting.com/index.php?t=msg&th=244420
I have Debian 7.0 with
http://deb.knot-dns.cz/debian/dists/wheezy/main/binary-i386/net/knot_1.2.0-…
and this is in /var/log/syslog after reboot:
Jun 3 22:37:43 ns knot[2091]: Binding to interface 2xxx:xxxx:xxxx:xxxx::1
port 53.
Jun 3 22:37:43 ns knot[2091]: [error] Cannot bind to socket (errno 99).
Jun 3 22:37:43 ns knot[2091]: [error] Could not bind to UDP
interface 2xxx:xxxx:xxxx:xxxx::1 port 53.
I have a static IPv6 address configured in /etc/network/interfaces.
Restarting Knot later binds to this IPv6 address without any problem -
it is only the first start, during OS boot, which fails. What do you
think is the proper way to make knotd reliably listen on a static IPv6
address? I would prefer to avoid restarting knotd.
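For what it's worth, errno 99 is EADDRNOTAVAIL, i.e. the address was
not configured on any interface yet when knotd tried to bind. A crude
sketch of one possible wait-before-start hook (where exactly it would
be hooked in, and the timeout, are open questions):

  #!/bin/sh
  # wait up to ~30 s for the static address to appear before knotd starts
  ADDR="2xxx:xxxx:xxxx:xxxx::1"
  i=0
  while [ $i -lt 30 ] && ! ip -6 addr show | grep -q "$ADDR"; do
      sleep 1
      i=$((i + 1))
  done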
Leos Bitto