Discussion:
[influxdb] "cannot allocate memory" at startup
v***@gmail.com
2016-01-09 10:43:01 UTC
Hi,

I am running InfluxDB 0.9.6 on a Cubox i4 (ARM processor) on Arch Linux ARM. It was built on the device from the AUR, so no prepackaged binaries are involved. For the first few days everything went fine, but then the process seems to have crashed. Now when I try to restart influxd, it quits with a memory allocation error:

2016/01/09 11:17:56 InfluxDB starting, version 0.9.6.1, branch 0.9.6, commit 6d3a8603cfdaf1a141779ed88b093dcc5c528e5e, built unknown
2016/01/09 11:17:56 Go version go1.5.2, GOMAXPROCS set to 4
2016/01/09 11:17:56 Using configuration at: /etc/influxdb.conf
[metastore] 2016/01/09 11:17:56 Using data dir: /mnt/ext1/influxdb/meta
[metastore] 2016/01/09 11:17:56 Skipping cluster join: already member of cluster: nodeId=1 raftEnabled=true peers=[localhost:8087]
[metastore] 2016/01/09 11:17:56 Node at localhost:8087 [Follower]
[metastore] 2016/01/09 11:17:58 Node at localhost:8087 [Leader]. peers=[localhost:8087]
[metastore] 2016/01/09 11:17:58 spun up monitoring for 1
[store] 2016/01/09 11:17:58 Using data dir: /mnt/ext1/influxdb/data
[wal] 2016/01/09 11:17:58 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 52428800 partition size threshold
[wal] 2016/01/09 11:17:58 WAL writing to /mnt/ext1/influxdb/wal/kronos_dev/default/3
...(a lot of WAL writing entries appear here)...
[wal] 2016/01/09 11:18:01 WAL writing to /mnt/ext1/influxdb/wal/_internal/monitor/6
[wal] 2016/01/09 11:18:02 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 52428800 partition size threshold
[wal] 2016/01/09 11:18:02 WAL writing to /mnt/ext1/influxdb/wal/_internal/monitor/9
[wal] 2016/01/09 11:18:17 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 52428800 partition size threshold
[wal] 2016/01/09 11:18:17 WAL writing to /mnt/ext1/influxdb/wal/kronos/default/16
[monitor] 2016/01/09 11:18:17 shutting down monitor system
[handoff] 2016/01/09 11:18:17 shutting down hh service
[metastore] 2016/01/09 11:18:17 RPC listener accept error and closed: network connection closed
[metastore] 2016/01/09 11:18:17 exec listener accept error and closed: network connection closed
[subscriber] 2016/01/09 11:18:17 closed service
run: open server: open tsdb store: failed to open shard 4: new engine: cannot allocate memory

Curious thing is, there is memory available. Before starting influxd, "free -m" returns:
              total        used        free      shared  buff/cache   available
Mem:           2014         159        1733           0         121        1822
Swap:          2047          22        2025

After starting influxd, it quits when its memory consumption reaches about 500 MB, so there is still 1 GB RAM and 2 GB swap available. As far as I know, I have no limits on maximum memory usage ("ulimit -v" returns unlimited).

Any ideas? Is there some configuration option I am not aware of?
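For reference, a few standard Linux checks (nothing InfluxDB-specific, and which limit actually bites can vary) that can reveal why an allocation fails even though `free -m` shows plenty of memory:

```shell
# Limits that can make allocations or mmap calls fail with ENOMEM
# even when plenty of physical RAM is free.
ulimit -v                              # per-process virtual memory limit
ulimit -l                              # locked-memory limit
cat /proc/sys/vm/overcommit_memory     # 2 = strict accounting, can refuse allocations
cat /proc/sys/vm/max_map_count         # max mmap regions per process
```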
--
Remember to include the InfluxDB version number with all issue reports
---
You received this message because you are subscribed to the Google Groups "InfluxDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to influxdb+***@googlegroups.com.
To post to this group, send email to ***@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit https://groups.google.com/d/msgid/influxdb/1191f4a4-bf86-4182-97fa-74e289011bec%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
v***@gmail.com
2016-01-10 18:29:41 UTC
Try setting
engine = "tsm1"
under the `[data]` section in the InfluxDB configuration file.
Let me know if that helps!
Todd
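In context, the suggestion above refers to the storage-engine setting in influxdb.conf; a minimal fragment might look like the following (the `dir` value is taken from the log output earlier in the thread, not prescribed):

```toml
[data]
  # Use the newer tsm1 storage engine instead of the default b1/bz1 engines.
  # This only affects newly created shards; existing shards keep their engine.
  engine = "tsm1"
  dir = "/mnt/ext1/influxdb/data"
```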
I switched the engine to tsm1 as you suggested (without deleting existing data or doing any other operation, I don't know if that is necessary?) but I still get the memory error.

Jeroen
v***@gmail.com
2016-01-26 09:55:10 UTC
I completely wiped the DB and everything has gone fine since (using the tsm1 engine), but now it crashes with a segmentation fault after running for about one minute and accepting new incoming data. I cannot see any additional information.
Could this simply be related to influx running on an ARM device with only 2 GB RAM ?
Blade Doyle
2016-03-19 16:55:07 UTC
I have the same issue, also on ARM (rpi2B). If there is little or no data,
it starts up fine. But as soon as some data exists (10 MB maybe?), I get
"cannot allocate memory" on startup. I notice that influxdb is not using
much memory at the time, and there is available memory for it to use. I
tried setting "ulimit" without any difference. I looked briefly at the
config file and saw several memory-related settings, but I don't know
which ones are the right ones to change, or what values to set them to.

So far my solution has also been to just delete all the data. I hope
I can find something better than that.

My log follows:

2016/03/19 16:46:27 InfluxDB starting, version 0.10.0, branch unknown,
commit unknown, built unknown
2016/03/19 16:46:27 Go version go1.5.3, GOMAXPROCS set to 4
2016/03/19 16:46:27 Using configuration at: /etc/influxdb/influxdb.conf
[meta] 2016/03/19 16:46:27 Starting meta service
[meta] 2016/03/19 16:46:27 Listening on HTTP: 127.0.0.1:8091
[metastore] 2016/03/19 16:46:27 Using data dir: /root/.influxdb/meta
[metastore] 2016/03/19 16:46:27 Node at localhost:8088 [Follower]
[metastore] 2016/03/19 16:46:29 Node at localhost:8088 [Leader].
peers=[localhost:8088]
[meta] 2016/03/19 16:46:30 127.0.0.1 - - [19/Mar/2016:16:46:30 +0000] GET
/?index=0 HTTP/1.1 200 450 - Go-http-client/1.1
21716e22-edf2-11e5-8001-000000000000 6.113659ms
[store] 2016/03/19 16:46:30 Using data dir: /root/.influxdb/data
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/12
[filestore]2016/03/19 16:46:30
/root/.influxdb/data/_internal/monitor/12/000000002-000000002.tsm (#0)
opened in 471.976µs
[cacheloader] 2016/03/19 16:46:30 reading file
/root/.influxdb/wal/_internal/monitor/12/_00078.wal, size 0
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/13
[filestore]2016/03/19 16:46:30
/root/.influxdb/data/_internal/monitor/13/000000001-000000001.tsm (#0)
opened in 472.601µs
[cacheloader] 2016/03/19 16:46:30 reading file
/root/.influxdb/wal/_internal/monitor/13/_00077.wal, size 0
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/14
[filestore]2016/03/19 16:46:30
/root/.influxdb/data/_internal/monitor/14/000000001-000000001.tsm (#0)
opened in 449.685µs
[cacheloader] 2016/03/19 16:46:30 reading file
/root/.influxdb/wal/_internal/monitor/14/_00075.wal, size 0
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/15
[filestore]2016/03/19 16:46:30
/root/.influxdb/data/_internal/monitor/15/000000001-000000001.tsm (#0)
opened in 489.841µs
[cacheloader] 2016/03/19 16:46:30 reading file
/root/.influxdb/wal/_internal/monitor/15/_00076.wal, size 0
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:30 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/16
[cacheloader] 2016/03/19 16:46:30 reading file
/root/.influxdb/wal/_internal/monitor/16/_00001.wal, size 500048
[cacheloader] 2016/03/19 16:46:31 reading file
/root/.influxdb/wal/_internal/monitor/16/_00002.wal, size 2377712
[cacheloader] 2016/03/19 16:46:34 reading file
/root/.influxdb/wal/_internal/monitor/16/_00074.wal, size 0
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/3
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/_internal/monitor/3/000000001-000000001.tsm (#0)
opened in 439.216µs
[cacheloader] 2016/03/19 16:46:34 reading file
/root/.influxdb/wal/_internal/monitor/3/_00080.wal, size 0
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/4
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/_internal/monitor/4/000000001-000000001.tsm (#0)
opened in 484.945µs
[cacheloader] 2016/03/19 16:46:34 reading file
/root/.influxdb/wal/_internal/monitor/4/_00078.wal, size 0
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL writing to
/root/.influxdb/wal/_internal/monitor/8
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/_internal/monitor/8/000000001-000000001.tsm (#0)
opened in 461.768µs
[cacheloader] 2016/03/19 16:46:34 reading file
/root/.influxdb/wal/_internal/monitor/8/_00079.wal, size 0
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:34 tsm1 WAL writing to
/root/.influxdb/wal/cadvisor/default/10
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/cadvisor/default/10/000000042-000000001.tsm (#6)
opened in 2.548734ms
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/cadvisor/default/10/000000022-000000003.tsm (#0)
opened in 5.541579ms
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/cadvisor/default/10/000000027-000000003.tsm (#1)
opened in 2.655972ms
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/cadvisor/default/10/000000039-000000003.tsm (#4)
opened in 2.612483ms
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/cadvisor/default/10/000000041-000000002.tsm (#5)
opened in 2.808836ms
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/cadvisor/default/10/000000035-000000003.tsm (#3)
opened in 6.365532ms
[filestore]2016/03/19 16:46:34
/root/.influxdb/data/cadvisor/default/10/000000031-000000003.tsm (#2)
opened in 2.983314ms
[cacheloader] 2016/03/19 16:46:34 reading file
/root/.influxdb/wal/cadvisor/default/10/_00086.wal, size 729182
[cacheloader] 2016/03/19 16:46:35 reading file
/root/.influxdb/wal/cadvisor/default/10/_00168.wal, size 0
[tsm1] 2016/03/19 16:46:35 beginning level 3 compaction of group 0, 4 TSM
files
[tsm1] 2016/03/19 16:46:35 compacting level 3 group (0)
/root/.influxdb/data/cadvisor/default/10/000000022-000000003.tsm (#0)
[tsm1] 2016/03/19 16:46:35 compacting level 3 group (0)
/root/.influxdb/data/cadvisor/default/10/000000027-000000003.tsm (#1)
[tsm1] 2016/03/19 16:46:35 compacting level 3 group (0)
/root/.influxdb/data/cadvisor/default/10/000000031-000000003.tsm (#2)
[tsm1] 2016/03/19 16:46:35 compacting level 3 group (0)
/root/.influxdb/data/cadvisor/default/10/000000035-000000003.tsm (#3)
[tsm1wal] 2016/03/19 16:46:37 tsm1 WAL starting with 10485760 segment size
[tsm1wal] 2016/03/19 16:46:37 tsm1 WAL writing to
/root/.influxdb/wal/cadvisor/default/7
[filestore]2016/03/19 16:46:37
/root/.influxdb/data/cadvisor/default/7/000000013-000000002.tsm (#4) opened
in 121.353µs
run: open server: open tsdb store: failed to open shard 7: open engine:
error opening memory map for file
/root/.influxdb/data/cadvisor/default/7/000000013-000000002.tsm: cannot
allocate memory
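One avenue worth checking (an assumption, not something confirmed in this thread): the failure above happens while memory-mapping a TSM file, and on Linux mmap can return "cannot allocate memory" (ENOMEM) when the per-process map-count limit is hit, or when a 32-bit process, as on these ARM boards, runs out of virtual address space regardless of free RAM. A read-only check:

```shell
# Read the kernel's per-process mmap-region limit (default is often 65530).
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count = ${current}"
# If this limit is the culprit, raising it (as root) is one thing to try:
#   sysctl -w vm.max_map_count=262144
```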


Thanks for any help or advice,
Blade.
Blade Doyle
2016-03-25 07:16:20 UTC
Same issue seen in version 0.11.0:
2016/03/25 07:13:31 InfluxDB starting, version 0.11.0, branch 0.11, commit 1572060c6890f5c6f6e540155d99238aca8617e3