v***@gmail.com
2016-01-09 10:43:01 UTC
Hi,
I am running InfluxDB 0.9.6 on a Cubox i4 (ARM processor) on Arch Linux ARM. It is built on the device from the AUR, so no prepackaged binaries. For the first few days everything went fine, but then the process seems to have crashed. Now when I try to restart influxd, it quits with a memory allocation error:
2016/01/09 11:17:56 InfluxDB starting, version 0.9.6.1, branch 0.9.6, commit 6d3a8603cfdaf1a141779ed88b093dcc5c528e5e, built unknown
2016/01/09 11:17:56 Go version go1.5.2, GOMAXPROCS set to 4
2016/01/09 11:17:56 Using configuration at: /etc/influxdb.conf
[metastore] 2016/01/09 11:17:56 Using data dir: /mnt/ext1/influxdb/meta
[metastore] 2016/01/09 11:17:56 Skipping cluster join: already member of cluster: nodeId=1 raftEnabled=true peers=[localhost:8087]
[metastore] 2016/01/09 11:17:56 Node at localhost:8087 [Follower]
[metastore] 2016/01/09 11:17:58 Node at localhost:8087 [Leader]. peers=[localhost:8087]
[metastore] 2016/01/09 11:17:58 spun up monitoring for 1
[store] 2016/01/09 11:17:58 Using data dir: /mnt/ext1/influxdb/data
[wal] 2016/01/09 11:17:58 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 52428800 partition size threshold
[wal] 2016/01/09 11:17:58 WAL writing to /mnt/ext1/influxdb/wal/kronos_dev/default/3
...(a lot of WAL writing entries appear here)...
[wal] 2016/01/09 11:18:01 WAL writing to /mnt/ext1/influxdb/wal/_internal/monitor/6
[wal] 2016/01/09 11:18:02 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 52428800 partition size threshold
[wal] 2016/01/09 11:18:02 WAL writing to /mnt/ext1/influxdb/wal/_internal/monitor/9
[wal] 2016/01/09 11:18:17 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 52428800 partition size threshold
[wal] 2016/01/09 11:18:17 WAL writing to /mnt/ext1/influxdb/wal/kronos/default/16
[monitor] 2016/01/09 11:18:17 shutting down monitor system
[handoff] 2016/01/09 11:18:17 shutting down hh service
[metastore] 2016/01/09 11:18:17 RPC listener accept error and closed: network connection closed
[metastore] 2016/01/09 11:18:17 exec listener accept error and closed: network connection closed
[subscriber] 2016/01/09 11:18:17 closed service
run: open server: open tsdb store: failed to open shard 4: new engine: cannot allocate memory
The curious thing is that there is memory available. Before starting influxd, "free -m" returns:
total used free shared buff/cache available
Mem: 2014 159 1733 0 121 1822
Swap: 2047 22 2025
After starting influxd, it quits when its memory consumption reaches about 500 MB, so there should still be roughly 1 GB of RAM and 2 GB of swap available. As far as I know, I have no limit on maximum memory usage ("ulimit -v" returns "unlimited").
Any ideas? Is there some configuration option I am not aware of?
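For reference, these are the checks I ran to rule out obvious limits. The /proc/sys/vm entries at the end are my guesses at kernel knobs that can make an mmap fail with "cannot allocate memory" even when free RAM exists; I have not confirmed either is the actual cause here:

```shell
# Standard memory / limit checks (all stock Linux tools and paths):
free -m                            # overall RAM and swap usage
ulimit -v                          # per-process virtual memory limit; prints "unlimited" here
ulimit -a                          # every limit the shell imposes on child processes

# Kernel knobs that can cause ENOMEM on mmap despite free memory
# (guesses at what might be relevant, not taken from the InfluxDB docs):
cat /proc/sys/vm/max_map_count     # max number of memory-mapped regions per process
cat /proc/sys/vm/overcommit_memory # 0/1/2: how the kernel grants allocation requests
```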
--
Remember to include the InfluxDB version number with all issue reports
---
You received this message because you are subscribed to the Google Groups "InfluxDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to influxdb+***@googlegroups.com.
To post to this group, send email to ***@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit https://groups.google.com/d/msgid/influxdb/1191f4a4-bf86-4182-97fa-74e289011bec%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.