Hi knot-dns-users,
I'm not an expert when it comes to auth servers, but I suspect that the
seemingly larger memory consumption might come from the journal. The
journal is in our case stored in LMDB, which is in fact a memory-mapped
file, so it looks like it consumes memory ...
IMHO a journal database with a bunch of changesets could cause this, but
it is not actually relevant, because the kernel can drop journal pages
from RAM whenever it needs more memory, so the RAM consumption here is
just a "cache" for the journal and nothing else.
For further details please see
https://symas.com/understanding-lmdb-database-file-sizes-and-memory-utiliza…
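A quick way to check this on a running server is to compare anonymous
vs. file-backed resident memory and to look for the LMDB mapping
itself. A rough sketch (assuming the knotd process name and a default
installation):

grep -E 'RssAnon|RssFile' /proc/$(pidof knotd)/status   # anon heap vs. mapped files
pmap -x $(pidof knotd) | grep data.mdb                  # the journal's LMDB file

If the big number sits in RssFile, it is just the journal "cache"
described above; if it sits in RssAnon, it is real heap usage.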
For comparison, BIND's journals will not add anything to the memory
utilization numbers because its journal file is not memory-mapped.
LMDB/Knot DNS simply defers a lot of memory management to the kernel,
which is, after all, designed to deal with memory management and better
optimized than most other code. In any case, this should not be
anything to worry about *unless it is causing a real problem*.
Of course, if it is causing a real problem then we have to investigate;
there might be a lingering problem somewhere ...
Petr Špaček @ CZ.NIC
On 16.3.2018 10:55, Daniel Salzman wrote:
Aleš,
On 03/16/2018 10:32 AM, Aleš Rygl wrote:
Hi Daniel, hi all.
I completely agree that VmSize is not the right metric and I was not mentioning it in my
original post :-) I'd rather point out that Knot is in my case consuming nearly 2 GB
of RAM while serving just 141 zones with 1 MB of records in total, while BIND, serving
1560 zones (10% with DNSSEC), uses about 550 MB =-O
root@eira:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           2.0G        1.7G        122M        5.4M        126M        109M
Swap:            0B          0B          0B
root@eira:~#
root@eira:~# systemctl stop knot.service
root@eira:~#
root@eira:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           2.0G         54M        1.8G        5.4M        126M        1.8G
Swap:            0B          0B          0B
root@eira:~#
Is it something I have to count on when using Knot? I am just surprised by such
memory requirements, and your measurement shows that it is indeed like that. I will
have to add some RAM...
Probably I don't understand: Knot consumed less physical memory than BIND during my
test :-)
To be honest, I don't understand why Knot is consuming such an amount of memory in
your case!
Could you please try disabling the statistics module? Per-zone statistics with all
metrics enabled consume some memory.
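For the test it should be enough to drop the module reference from the default
template, e.g. (a sketch based on the config you posted):

template:
  - id: default
    storage: "/var/lib/knot"
    module: mod-rrl/rrl-10        # mod-stats/custom removed for the test
    acl: [allowed_transfer]
    disable-any: on
    master: idunn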
Daniel
>
> Thanks for the memory-saving tips.
>
> BR
> Ales
>
>
>
> On 15.3.2018 21:16, daniel.salzman(a)nic.cz wrote:
>> Hi Aleš,
>>
>> I would agree with Vladimír. Virtual memory consumption is not
>> important. We should focus on physical memory consumption (VmRSS).
>> But in your case, these values are almost equal. It's very strange!
>>
>> I did a quick measurement:
>>
>>                 TLD with DNSSEC  | 10k zones with DNSSEC
>>                 VIRT     RES     | VIRT     RES
>> ---------------------------------+----------------------
>> Bind 9.12.1     656M     419M    | 1269M    152M
>> Knot 2.6.5      4384M    360M    | 4588M    39440
>> NSD 4.1.20      124M     87216   | 147M     108M
>>                 1759M    1732M   | 115M     89124
>>
>> Note1: I don't know how NSD works, so I printed its two top-level processes.
>> Note2: Bind was compiled with --enable-threads.
>>
>> The virtual memory consumption of Knot can be significantly reduced by
>> setting max-timer-db-size, max-journal-db-size, max-kasp-db-size, and
>> the --with-conf-mapsize configure option (if compiling from source).
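>> For illustration only (pick limits that fit your deployment; in 2.6
>> these options go into the default template):
>>
>> template:
>>   - id: default
>>     max-timer-db-size: 10M
>>     max-journal-db-size: 100M
>>     max-kasp-db-size: 10M
>>
>> and when building from source, something like:
>>
>> ./configure --with-conf-mapsize=20   # conf DB mapsize in MiB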
>>
>> Best,
>> Daniel
>>
>> On 2018-03-15 13:37, Aleš Rygl wrote:
>>> Dear all,
>>>
>>> While trying to migrate our DNS to Knot I have noticed that a slave
>>> server with 2GB RAM is facing memory exhaustion. I am running
>>> 2.6.5-1+0~20180216080324.14+stretch~1.gbp257446. There are 141 zones
>>> having around 1 MB in total. Knot is acting as a pure slave server
>>> with a minimal configuration.
>>>
>>> There is nearly 1.7 GB of memory consumed by Knot on a freshly
>>> rebooted server:
>>>
>>> root@eira:/proc/397# cat status
>>> Name: knotd
>>> Umask: 0007
>>> State: S (sleeping)
>>> Tgid: 397
>>> Ngid: 0
>>> Pid: 397
>>> PPid: 1
>>> TracerPid: 0
>>> Uid: 108 108 108 108
>>> Gid: 112 112 112 112
>>> FDSize: 64
>>> Groups: 112
>>> NStgid: 397
>>> NSpid: 397
>>> NSpgid: 397
>>> NSsid: 397
>>> VmPeak: 24817520 kB
>>> VmSize: 24687160 kB
>>> VmLck: 0 kB
>>> VmPin: 0 kB
>>> VmHWM: 1743400 kB
>>> VmRSS: 1743272 kB
>>> RssAnon: 1737088 kB
>>> RssFile: 6184 kB
>>> RssShmem: 0 kB
>>> VmData: 1781668 kB
>>> VmStk: 132 kB
>>> VmExe: 516 kB
>>> VmLib: 11488 kB
>>> VmPTE: 3708 kB
>>> VmPMD: 32 kB
>>> VmSwap: 0 kB
>>> HugetlbPages: 0 kB
>>> Threads: 21
>>> SigQ: 0/7929
>>> SigPnd: 0000000000000000
>>> ShdPnd: 0000000000000000
>>> SigBlk: fffffffe7bfbbefc
>>> SigIgn: 0000000000000000
>>> SigCgt: 0000000180007003
>>> CapInh: 0000000000000000
>>> CapPrm: 0000000000000000
>>> CapEff: 0000000000000000
>>> CapBnd: 0000003fffffffff
>>> CapAmb: 0000000000000000
>>> Seccomp: 0
>>> Cpus_allowed: f
>>> Cpus_allowed_list: 0-3
>>> Mems_allowed: 00000000,00000001
>>> Mems_allowed_list: 0
>>> voluntary_ctxt_switches: 260
>>> nonvoluntary_ctxt_switches: 316
>>> root@eira:/proc/397#
>>>
>>> Config:
>>>
>>> server:
>>>     listen: 0.0.0.0@53
>>>     listen: ::@53
>>>     user: knot:knot
>>>
>>> log:
>>>   - target: syslog
>>>     any: info
>>>
>>> mod-rrl:
>>>   - id: rrl-10
>>>     rate-limit: 10   # Allow 10 resp/s for each flow
>>>     slip: 2          # Every other response slips
>>>
>>> mod-stats:
>>>   - id: custom
>>>     edns-presence: on
>>>     query-type: on
>>>     request-protocol: on
>>>     server-operation: on
>>>     request-bytes: on
>>>     response-bytes: on
>>>     flag-presence: on
>>>     response-code: on
>>>     reply-nodata: on
>>>     query-size: on
>>>     reply-size: on
>>>
>>> template:
>>>   - id: default
>>>     storage: "/var/lib/knot"
>>>     module: mod-rrl/rrl-10
>>>     module: mod-stats/custom
>>>     acl: [allowed_transfer]
>>>     disable-any: on
>>>     master: idunn
>>>
>>> I was pretty sure that a VM with 2 GB RAM would be enough for my setup :-)
>>>
>>> BR
>>>
>>> Ales