The following instructions will help you deploy the Scylla Monitoring Stack in cases where you cannot use the recommended Docker version.
Please note that Scylla recommends using the Docker version, as it provides the most up-to-date Scylla Monitoring system.
Scylla Monitoring uses the following components: Prometheus, Alertmanager, Grafana, Loki, and Promtail.
A common scenario for users who run their own standalone installation is that they already have such a server and would like to consolidate. We assume that you already have Prometheus and Grafana running, but we include minimal installation instructions for all components.
We suggest that you follow the installation instructions for each of these products in their official documentation. It is also recommended that all servers run as a service.
CPU - at least 2 physical cores/ 4vCPUs
Memory - 15GB+ DRAM and proportional to the number of cores.
Disk - persistent disk storage is proportional to the number of cores and Prometheus retention period (see the following section)
Network - 1GbE/10GbE preferred
Prometheus storage disk performance requirements: persistent block volume, for example an EC2 EBS volume
Prometheus storage disk volume requirement: proportional to the number of metrics it holds. The default retention period is 15 days, and the disk requirement is around 12MB per core per day, assuming the default scraping interval of 20s.
For example, when monitoring a 6 node Scylla cluster, each with 16 CPU cores (so a total of 96 cores), and using the default 15 days retention time, you will need minimal disk space for prometheus of
6 * 16 * 15 * 12MB ~ 16GB
To account for unexpected events, like replacing or adding nodes, we recommend allocating at least x2-3 space, in this case, ~50GB. Prometheus Storage disk does not have to be as fast as Scylla disk, and EC2 EBS, for example, is fast enough and provides HA out of the box.
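The sizing above can be reproduced with simple shell arithmetic (numbers from the example: 6 nodes, 16 cores each, 15 days retention, 12MB per core per day):

```shell
# 6 nodes x 16 cores x 15 days retention x 12MB per core per day
echo "$((6 * 16 * 15 * 12)) MB"   # prints "17280 MB", i.e. ~16GB before the x2-3 headroom
```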
Prometheus uses more memory when querying over a longer duration (e.g. looking at a dashboard on a week view would take more memory than on an hourly duration).
For Prometheus alone, you should have 60MB of memory per core in the cluster and it would use about 600MB of virtual memory per core. Because Prometheus is so memory demanding, it is a good idea to add swap, so queries with a longer duration would not crash the server.
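Applying this rule of thumb to the 96-core example above (a sketch; the 60MB and 600MB per-core figures are the guidance from this section):

```shell
cores=96
echo "$((cores * 60)) MB resident"   # prints "5760 MB resident"
echo "$((cores * 600)) MB virtual"   # prints "57600 MB virtual"
```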
The main item to set an alert on is the available disk space in the monitoring system. Data is indefinitely accrued on the Prometheus data directory. The current monitoring solution does not churn data.
Before installing, confirm that your Grafana and Prometheus versions are supported by the Scylla Monitoring Stack version you want to install. Scylla Monitoring follows the latest Prometheus and Grafana releases closely. See the Scylla Monitoring Stack Compatibility Matrix.
The following procedure uses a CentOS 7 based instance.
Download the latest Scylla Monitoring Stack release.
Extract the tarball:
tar -xvf scylla-monitoring-*.tar.gz
Tested with Alertmanager version 0.22.2.
tar -xvf alertmanager-*.linux-amd64.tar.gz
Copy the rule_config.yml file from the prometheus/ directory to alertmanager.yml in the Alertmanager installation directory:
cp -p /home/centos/scylla-monitoring-scylla-monitoring-4.1.0/prometheus/rule_config.yml alertmanager-0.22.2.linux-amd64/alertmanager.yml
Start the Alertmanager
Verify that Alertmanager is up and running: point your browser to the Alertmanager IP:Port (port 9093 by default).
Loki is a log aggregation system inspired by Prometheus. Scylla Monitoring uses Loki for alerts and metrics generation. It does not replace your centralized logging server, but it can; check the Loki-Grafana documentation if you want to use it for centralized log collection.
We recommend using Loki with containers, but you can install it locally as described in the Loki installation documentation.
You will need to run both Loki and Promtail. Loki is responsible for log parsing, acts as a Grafana and Prometheus data source, and generates alerts that are sent to the Alertmanager.
Promtail loads logs into Loki. There are multiple ways of doing that; we suggest using rsyslog, so you can add Promtail (and Loki) as a second log collection server.
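One way to feed logs through rsyslog is to forward syslog traffic to a Promtail syslog listener. A sketch of such an rsyslog rule, assuming Promtail listens for syslog on port 1514; the file name, target address, and port are placeholders for your setup:

```
# /etc/rsyslog.d/scylla.conf (hypothetical file name)
# Forward all logs to the Promtail syslog listener
*.* action(type="omfwd" target="PROMTAIL_IP" port="1514" protocol="tcp")
```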
Loki Related files
Loki has a configuration file and a rule file. You need to copy and modify the configuration.
mkdir -p /etc/loki/rules
mkdir -p /etc/loki/config
cp loki/rules/* /etc/loki/rules
cp loki/conf/loki-config.template.yaml /etc/loki/config/loki-config.yaml
Edit /etc/loki/config/loki-config.yaml and replace ALERTMANAGER with the Alertmanager ip:port (e.g., localhost:9093).
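For reference, the placeholder ends up in Loki's ruler section. A minimal sketch of the resulting fragment, assuming Alertmanager runs on localhost:9093:

```yaml
ruler:
  alertmanager_url: http://localhost:9093
```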
Promtail Related files
Promtail has a configuration file. You need to copy and modify the configuration.
mkdir -p /etc/promtail/
Edit /etc/promtail/config.yml and replace LOKI_IP with Loki's ip:port (e.g., localhost:3100).
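The LOKI_IP placeholder ends up in Promtail's clients section. Assuming Loki runs on localhost:3100, the resulting fragment looks like:

```yaml
clients:
  - url: http://localhost:3100/loki/api/v1/push
```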
Tested with Prometheus version 2.27.1
If you already have a Prometheus server, besides the expected scrape jobs, make sure you take the Prometheus rules directory. The files not only contain important alerts, they also contain recording rules; without them, different aspects of the dashboards will not work.
tar -xvf prometheus-*.linux-amd64.tar.gz
Create Data and Config directories
mkdir -p /prometheus/data
mkdir -p /etc/prometheus/prom_rules/
mkdir -p /etc/scylla.d/prometheus/
Copy the following files from the prometheus/ directory to the Prometheus installation directory:
cp scylla-monitoring-scylla-monitoring-4.1.0/prometheus/prom_rules/*.yml /etc/prometheus/prom_rules/
cp scylla-monitoring-scylla-monitoring-4.1.0/prometheus/prometheus.yml.template /etc/prometheus/prometheus.yml
Edit the prometheus.yml file to point to the correct static data sources. Make sure to include the honor_labels: false parameter in the prometheus.yml file.
Set the alertmanager address and port by replacing AM_ADDRESS in the file.
For example, if the Alertmanager runs on the same host:
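A sketch of the resulting alerting section in prometheus.yml, assuming Alertmanager listens on localhost:9093:

```yaml
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093
```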
Replace the files in the scrape configs to point to the right local files; typically, for scylla, node_exporter, and manager_agent you can use the same file (scylla_servers.yml).
For example the scrape config for Scylla:
scrape_interval: 5s # By default, scrape targets every 5 seconds.
scrape_timeout: 4s # Timeout before a scrape of a target is considered failed.

# Attach these labels to any time series or alerts when communicating with
# external systems (federation, remote storage, Alertmanager).

- job_name: scylla
  honor_labels: false
  file_sd_configs:
    - files:
        - /etc/scylla.d/prometheus/scylla_servers.yml
  relabel_configs:
    - source_labels: [__address__]
    - source_labels: [__address__]
Create and set the scylla_servers.yml file to point to your Scylla nodes and the scylla_manager_server.yml file to point to your Scylla Manager.
There is no need to configure node_exporter_server. Instead, in the Prometheus scrape config of the node_exporter, you can use the same file you used for Scylla, and Prometheus will assume you have a node_exporter running on each Scylla server.
An example for those files can be found under the Prometheus directory:
You must have both files, even if you are not using Scylla Manager.
Add the labels for the cluster and data-center. The scylla_servers.yml file lists the Scylla end points together with these labels.
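A sketch of a scylla_servers.yml file with cluster and dc labels; the IP addresses and label values are placeholders for your deployment:

```yaml
# List Scylla end points
- targets:
    - 172.17.0.2
    - 172.17.0.3
  labels:
    cluster: cluster1
    dc: dc1
```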
See the previous note about deprecating the
Start Prometheus server:
./prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path /prometheus/data
Data should start accumulating under /prometheus/data.
Verify that Prometheus is up and running: point your browser to the Prometheus IP:Port (port 9090 by default).
The Prometheus console should be visible.
Verify that the node_exporter and Scylla metrics are accumulating in the server by executing a query through the console (for example, scylla_reactor_utilization).
At this point Scylla is emitting the metrics and Prometheus is able to store them.
Tested with Grafana version 7.5.7.
Install Grafana based on the instructions here
Depending on whether you installed Grafana from a repository (yum install) or downloaded the zip version, the directory structure will differ in the rest of the steps.
Access the Scylla-Grafana-monitoring directory.
Copy the plugins to the Grafana plugins directory (by default, /var/lib/grafana/):
sudo cp -r grafana/plugins /var/lib/grafana/
If you installed Grafana from packages, instead of /var/lib/grafana/ you should copy it to public/app inside the directory where you unpacked Grafana:
cp -r grafana/plugins ../grafana-7.5.7/public/app
Provision the Dashboards
For example, Scylla Open-source version 4.5 and Scylla Manager version 2.4.
For Grafana installed with yum install:
sudo cp grafana/load.yaml /etc/grafana/provisioning/dashboards/
sudo mkdir -p /var/lib/grafana/dashboards
sudo cp -r grafana/build/* /var/lib/grafana/dashboards
For Grafana installed from packages
cp -p -r grafana/build/* ../grafana-7.5.7/public/build/
cp -p grafana/load.yaml ../grafana-7.5.7/conf/provisioning/dashboards/load.4.5.yaml
cp -p grafana/load.yaml ../grafana-7.5.7/conf/provisioning/dashboards/load.manager_2.4.yaml
Edit the load.* files in /home/centos/grafana-7.5.7/conf/provisioning/dashboards/ to set the correct path; for example, load.4.5.yaml would point to:
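A hedged sketch of what such a load file might contain. The provider format is Grafana's standard dashboard provisioning; the name, folder, and path values here are assumptions for this layout:

```yaml
apiVersion: 1
providers:
  - name: 'scylla-4.5'
    folder: 'ver_4.5'
    type: file
    options:
      path: /home/centos/grafana-7.5.7/public/build/ver_4.5
```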
A note about using folders: if you provision multiple Scylla versions, use the version as the folder name. Otherwise, there is no need to configure a FOLDER.
Set the data source by copying datasource.yml and editing it:
sudo cp grafana/datasource.yml /etc/grafana/provisioning/datasources/
Scylla uses a plugin to read from some system tables; see the section below about using it.
For Grafana installed from packages
cp -p grafana/datasource.yml /home/centos/grafana-7.5.7/conf/provisioning/datasources/
You should set the Prometheus and Alertmanager IP and port.
sudo cat /etc/grafana/provisioning/datasources/datasource.yml
- name: prometheus
- name: alertmanager
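A minimal sketch of the two data source entries, assuming Prometheus on port 9090 and Alertmanager on port 9093 on the same host; the field values beyond the names are illustrative and should be checked against the shipped datasource.yml:

```yaml
apiVersion: 1
datasources:
  - name: prometheus
    type: prometheus
    url: http://localhost:9090
    access: proxy
  - name: alertmanager
    type: camptocamp-prometheus-alertmanager-datasource
    url: http://localhost:9093
    access: proxy
```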
Start the Grafana service
For Grafana installed with yum install
sudo service grafana-server start
For Grafana installed from packages:
cp -p /home/centos/grafana-7.5.7/conf/sample.ini /home/centos/grafana-7.5.7/conf/scylla.ini
Edit scylla.ini to reflect the right paths in the paths section of the file.
plugins = /home/centos/grafana-7.5.7/data/plugins
provisioning = /home/centos/grafana-7.5.7/conf/provisioning
Start the server:
./bin/grafana-server -config /home/centos/grafana-7.5.7/conf/scylla.ini
Make sure Grafana is running
Point your browser to the Grafana server on port 3000; the assumption is that Grafana and Prometheus are collocated on the same server.
Scylla Monitoring uses a plugin to read from some of the system tables. For the plugin to work, it needs to be installed and configured, and there must be CQL connectivity between the Scylla Monitoring server and the Scylla servers.
Because the plugin gives access to the Scylla tables, we strongly encourage you to add a user with read-only access restricted to the system keyspace and configure the plugin to use that user.
This part is optional but highly recommended. The instructions at enable authorization cover all the following items in detail.
If you have not done so, enable authorization first.
Add a new ROLE for Scylla Monitoring:
CREATE ROLE scylla_monitoring WITH PASSWORD = 'scylla_monitoring' AND LOGIN = true;
Make sure to give it a proper password.
Add SELECT permissions to the new user:
GRANT SELECT on KEYSPACE system TO scylla_monitoring;
Grafana reads plugins from its plugin directory; copy the Scylla plugin from grafana/plugins/scylla-datasource as described in the Grafana installation section.
Add an entry to the datasource.yml file
- name: scylla-datasource
# user: 'scylla_monitoring'
# password: 'scylla_monitoring'
As mentioned previously, it is safer to use a dedicated user/password with limited access privileges for monitoring. Uncomment the relevant section if you do so; note that the username/password are stored in plain text in the file.
Grafana will not load unsigned plugins by default; you need to explicitly allow them. Edit the Grafana grafana.ini file and add the plugin to the list of allowed unsigned plugins.
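A sketch of the grafana.ini fragment, using Grafana's standard allow_loading_unsigned_plugins option; the plugin id scylla-datasource is assumed to match the plugin you copied:

```ini
[plugins]
allow_loading_unsigned_plugins = scylla-datasource
```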
See more about this in the Grafana configuration documentation.