a***@gmail.com
2016-08-08 13:27:41 UTC
Hi all,
I followed this tutorial (https://influxdata.com/blog/getting-started-with-sending-statsd-metrics-to-telegraf-influxdb/) to get a statsd > telegraf > influxdb setup working. However, I've noticed that while the statsd metrics reach telegraf, they are not relayed to influxdb.
Do you have an idea of what might be the problem? Find more information below.
1) telegraf version 0.13.1 / influxdb 0.13.0
2) telegraf seems to receive the statsd metrics, though the gather time looks suspiciously short:
```
[docker] gathered metrics, (5s interval) in 2.270456355s
2016/08/08 13:16:18 Output [influxdb] buffer fullness: 900 / 20000 metrics. Total gathered metrics: 261096. Total dropped metrics: 0.
2016/08/08 13:16:18 Output [influxdb] wrote batch of 900 metrics in 104.723614ms
2016/08/08 13:16:20 Input [statsd] gathered metrics, (5s interval) in 91.964µs
2016/08/08 13:16:20 Input [memcached] gathered metrics, (5s interval) in 3.099963ms
```
3) here's the relevant configuration for telegraf.conf:
```
[[inputs.statsd]]
## Address and port to host UDP listener on
service_address = ":8125"
## Delete gauges every interval (default=false)
delete_gauges = false
## Delete counters every interval (default=false)
delete_counters = false
## Delete sets every interval (default=false)
delete_sets = false
## Delete timings & histograms every interval (default=true)
delete_timings = true
## Percentiles to calculate for timing & histogram stats
percentiles = [90]
## separator to use between elements of a statsd metric
metric_separator = "_"
## Parses tags in the datadog statsd format
## http://docs.datadoghq.com/guides/dogstatsd/
parse_data_dog_tags = false
## Statsd data translation templates, more info can be read here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite
# templates = [
# "cpu.* measurement*"
# ]
## Number of UDP messages allowed to queue up, once filled,
## the statsd server will start dropping packets
allowed_pending_messages = 10000
## Number of timing/histogram values to track per-measurement in the
## calculation of percentiles. Raising this limit increases the accuracy
## of percentiles but also increases the memory usage and cpu time.
percentile_limit = 1000
```
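To rule out my client side, here's a minimal sketch I use to push one hand-crafted timing metric at the listener configured above (host and port are assumptions from my local setup, `service_address = ":8125"`):

```python
import socket

# Build a statsd timing line ("name:value|ms") and fire it at the
# telegraf statsd UDP listener. UDP sendto succeeds even if nothing
# is listening, so watch telegraf's logs for the effect.
def send_timing(name, value_ms, host="127.0.0.1", port=8125):
    payload = f"{name}:{value_ms}|ms".encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload

print(send_timing("test.partition.update.timing", 6.03))
```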
4) here's an example of the statsd metrics being received at telegraf's host (obtained via `tcpdump`):
```
13:20:49.431549 IP (tos 0x0, ttl 64, id 2976, offset 0, flags [DF], proto UDP (17), length 92)
big43.local.60178 > macmini6.local.8125: [udp sum ok] UDP, length 64
0x0000:  4500 005c 0ba0 4000 4011 9d51 c0a8 082b  E..\..@.@..Q...+
0x0010: c0a8 0824 eb12 1fbd 0048 d989 7377 6966 ...$.....H..swif
0x0020: 742d 6f62 6a65 6374 2d72 6570 6c69 6361 t-object-replica
0x0030: 746f 722e 7061 7274 6974 696f 6e2e 7570 tor.partition.up
0x0040: 6461 7465 2e74 696d 696e 673a 362e 3033 date.timing:6.03
0x0050: 3130 3336 3337 3639 357c 6d73 103637695|ms
```
Note the payload `swift-object-replicator.partition.update.timing:6.03103637695|ms`. This is a statsd 'timing' metric, which telegraf supposedly supports.
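To double-check my reading of the dump, decoding the hex columns and stripping the 20-byte IPv4 header plus the 8-byte UDP header recovers exactly that metric line:

```python
# Hex columns copied from the tcpdump output above.
hexdump = (
    "4500 005c 0ba0 4000 4011 9d51 c0a8 082b "
    "c0a8 0824 eb12 1fbd 0048 d989 7377 6966 "
    "742d 6f62 6a65 6374 2d72 6570 6c69 6361 "
    "746f 722e 7061 7274 6974 696f 6e2e 7570 "
    "6461 7465 2e74 696d 696e 673a 362e 3033 "
    "3130 3336 3337 3639 357c 6d73"
)
packet = bytes.fromhex(hexdump)   # fromhex skips the spaces
payload = packet[20 + 8:]         # skip IPv4 (20 B) + UDP (8 B) headers
print(payload.decode("ascii"))
# -> swift-object-replicator.partition.update.timing:6.03103637695|ms
```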
5) The measurements don't show up in influxdb. In fact, I don't even see any outgoing messages to influxdb exporting the statsd metrics!
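For reference, this is roughly the check I run against influxdb's HTTP API (0.13): `SHOW MEASUREMENTS` on the telegraf database should list any statsd measurements that made it through. The host and database name are assumptions from the default telegraf output config:

```python
from urllib.parse import urlencode

# Build the URL for influxdb's /query endpoint (GET with db and q
# parameters), then fetch it against the live server.
def query_url(host, db, q):
    return f"http://{host}:8086/query?{urlencode({'db': db, 'q': q})}"

print(query_url("localhost", "telegraf", "SHOW MEASUREMENTS"))
# fetch with e.g. urllib.request.urlopen(...) against a running influxdb
```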
Thanks!
Damião
--
Remember to include the InfluxDB version number with all issue reports
---
You received this message because you are subscribed to the Google Groups "InfluxDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to influxdb+***@googlegroups.com.
To post to this group, send email to ***@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit https://groups.google.com/d/msgid/influxdb/6ba76909-d4a5-4a58-b0c5-55e33f1462f9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.