Prometheus targets down

What should you do if they are DOWN?

Hi @dkhockey5 and welcome to our community forum!

What do you mean by ‘DOWN’? Can you add a screenshot?

It looks like the package is not running. Check the package status at http://my.dappnode/#/packages and restart the package from there if needed.

It’s still showing up as DOWN. I attached a picture of all the packages I currently have running. I also tried rebooting the whole system. Is there a guide to setting up DMS that I’m unaware of? I’m wondering if it could be a port-forwarding issue or something silly.

That should’ve worked afaik.

Have you checked the Prysm logs or the Prysm UI in case there are other error messages around there? Are your validators running smoothly?

So I woke up this morning, loaded up dms.dappnode, and noticed there were two new folders (prysm dashboards and prysm-pyrmont dashboards). I loaded up the dashboards in each, and both were working! However, all of my system stats like CPU, network usage, etc. are showing ‘no data’. I loaded up http://prometheus.dms.dappnode:9090/targets and now there are four services listed, and they are all listed as UP. Honestly, I’m not sure what has changed since I last tried. I did open ports 8080-8081 on my router two nights ago (which was the last thing I did before giving up for the day). Any idea why I can’t view system stats in Grafana? Is there a system-stats dashboard JSON I can import, or should there be one showing up that I’m missing? Thanks for your help!
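As a side note, the same target status shown on the /targets page you loaded is also exposed as JSON by Prometheus’s HTTP API under /api/v1/targets, so you can script the check instead of eyeballing the page. A minimal sketch (the base URL is the one from this thread; the helper names are ours, not part of any DAppNode tooling):

```python
import json
import urllib.request

# Prometheus address as used elsewhere in this thread.
PROMETHEUS_URL = "http://prometheus.dms.dappnode:9090"


def down_targets(targets_payload):
    """Return the scrape URLs of active targets whose health is not 'up'.

    `targets_payload` is the parsed JSON body returned by Prometheus's
    /api/v1/targets endpoint.
    """
    active = targets_payload.get("data", {}).get("activeTargets", [])
    return [t["scrapeUrl"] for t in active if t.get("health") != "up"]


def fetch_targets(base_url=PROMETHEUS_URL):
    """Fetch the current target status from Prometheus's HTTP API."""
    with urllib.request.urlopen(f"{base_url}/api/v1/targets", timeout=5) as resp:
        return json.load(resp)


# Usage (on a machine that can reach the DAppNode):
#   bad = down_targets(fetch_targets())
#   print("all targets up" if not bad else f"down: {bad}")
```

This only reads the same data the web UI shows; if a target appears in the `down` list, its `lastError` field in the same payload usually says why the scrape failed.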

That’s very probably due to the latest update to both packages, which was recently released. Your DAppNode auto-updates were executed before you looked at them this morning. Mine are auto-scheduled too:

Can you show us where exactly? It’s always recommended to include pictures.

You should be able to see this (if you have the dappnode-exporter package installed as well as DMS):

Clicking on HOST will take you to the system dashboards view (or clicking here).

The screenshots show what I see with the pyrmont dashboard. I do not have any ‘dappnode-exporter dashboards’ even though I have the dappnode-exporter package installed (I also restarted the package to see if that would help). I copied the log from dappnode-exporter and this is what I got:

2020-12-18 21:53:53,563 INFO Set uid to user 0 succeeded
2020-12-18 21:53:53,572 INFO RPC interface 'supervisor' initialized
2020-12-18 21:53:53,572 INFO supervisord started with pid 1
2020-12-18 21:53:54,575 INFO spawned: 'cadvisor' with pid 7
2020-12-18 21:53:54,581 INFO spawned: 'node_exporter' with pid 8
level=info ts=2020-12-18T21:53:54.628Z caller=node_exporter.go:177 msg="Starting node_exporter" version="(version=1.0.1, branch=, revision=)"
level=info ts=2020-12-18T21:53:54.628Z caller=node_exporter.go:178 msg="Build context" build_context="(go=go1.14.6, user=, date=)"
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:105 msg="Enabled collectors"
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=arp
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=bcache
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=bonding
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=btrfs
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=conntrack
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=cpu
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=cpufreq
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=diskstats
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=edac
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=entropy
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=filefd
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=filesystem
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=hwmon
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=infiniband
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=ipvs
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=loadavg
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=mdadm
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=meminfo
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=netclass
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=netdev
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=netstat
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=nfs
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=nfsd
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=powersupplyclass
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=pressure
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=rapl
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=schedstat
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=sockstat
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=softnet
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=stat
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=textfile
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=thermal_zone
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=time
level=info ts=2020-12-18T21:53:54.629Z caller=node_exporter.go:112 collector=timex
level=info ts=2020-12-18T21:53:54.630Z caller=node_exporter.go:112 collector=udp_queues
level=info ts=2020-12-18T21:53:54.630Z caller=node_exporter.go:112 collector=uname
level=info ts=2020-12-18T21:53:54.630Z caller=node_exporter.go:112 collector=vmstat
level=info ts=2020-12-18T21:53:54.630Z caller=node_exporter.go:112 collector=xfs
level=info ts=2020-12-18T21:53:54.630Z caller=node_exporter.go:112 collector=zfs
level=info ts=2020-12-18T21:53:54.630Z caller=node_exporter.go:191 msg="Listening on" address=:9100
level=info ts=2020-12-18T21:53:54.630Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false
W1218 21:53:54.831099 7 manager.go:256] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
2020-12-18 21:53:55,832 INFO success: cadvisor entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-12-18 21:53:55,832 INFO success: node_exporter entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
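Those logs actually look healthy: supervisord spawned both processes, node_exporter is listening on :9100, and cadvisor stayed up (the /dev/kmsg warning only disables OOM events). One quick sanity check is to fetch the exporter’s /metrics page and confirm it is serving samples. A small sketch, with the caveat that the exporter hostname below is an assumption for illustration, not taken from this thread:

```python
import urllib.request

# ASSUMPTION: illustrative address only; substitute whatever host/port
# your dappnode-exporter package actually exposes on your network.
EXPORTER_URL = "http://dappnode-exporter.dappnode:9100/metrics"


def count_samples(metrics_text):
    """Count metric sample lines in Prometheus text exposition format,
    skipping '# HELP' / '# TYPE' comment lines and blank lines."""
    return sum(
        1
        for line in metrics_text.splitlines()
        if line.strip() and not line.startswith("#")
    )


def check_exporter(url=EXPORTER_URL):
    """Fetch the exporter's /metrics page and report how many samples it serves."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        n = count_samples(resp.read().decode())
    print(f"{url} is serving {n} metric samples")
```

If that fetch succeeds but Grafana still shows ‘no data’, the exporter itself is fine and the gap is between Prometheus/Grafana and the dashboards, which matches the missing-dashboards symptom here.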

You should be able to find it here:

Do you still have any questions or issues? Are all Prometheus targets up?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.