System Architecture
The following diagram gives an overview of the Prometheus system architecture:
An organization will typically run one or more Prometheus servers, which form the heart of a Prometheus monitoring setup. You configure a Prometheus server to discover a set of metrics sources (so-called "targets") using service-discovery mechanisms such as DNS, Consul, or Kubernetes. Prometheus then periodically pulls (or "scrapes") metrics in a text-based format from these targets over HTTP and stores the collected data in a local time series database. A target can either be an application that directly tracks and exposes Prometheus metrics about itself, or an intermediary piece of software (a so-called "exporter") that translates metrics from an existing system (like a MySQL server) into the Prometheus metrics exposition format. The Prometheus server then makes the collected data available for querying via its built-in web UI, via dashboarding tools such as Grafana, or directly through its HTTP API.
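As an illustration of what a self-instrumenting target looks like, the following minimal sketch uses the Python prometheus_client library to expose metrics over HTTP. The port, metric names, and simulated workload are assumptions made for this example, not details from the text above:

```python
# Minimal sketch of a self-instrumenting scrape target using the Python
# prometheus_client library. The port (8000), metric names, and simulated
# workload are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# A counter that only ever increases, and a gauge that can go up and down.
REQUESTS_TOTAL = Counter("demo_requests_total", "Total number of handled requests")
IN_PROGRESS = Gauge("demo_requests_in_progress", "Requests currently being handled")

def handle_request():
    """Simulate handling a single request while updating the metrics."""
    IN_PROGRESS.inc()
    time.sleep(random.uniform(0.01, 0.1))
    REQUESTS_TOTAL.inc()
    IN_PROGRESS.dec()

if __name__ == "__main__":
    # Expose the metrics in the Prometheus text format at http://localhost:8000/metrics.
    start_http_server(8000)
    while True:
        handle_request()
```

A Prometheus server configured with this process as a target would then scrape http://localhost:8000/metrics at its configured scrape interval.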
Note: Each scrape transfers only the current value of every time series of a target to Prometheus, so the scrape interval determines the effective sampling frequency of the stored data. The target processes themselves do not retain any historical metrics data.
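You can see this for yourself by performing a "scrape" by hand. The sketch below assumes the example target from above is running on localhost:8000 and simply fetches its exposition format; the response contains only the current value of each series, with no history attached:

```python
# Sketch: fetch the text exposition format from the example target above
# (assumed to be listening on localhost:8000). The response contains only
# the current value of each time series.
import urllib.request

with urllib.request.urlopen("http://localhost:8000/metrics") as response:
    print(response.read().decode("utf-8"))
```

Each sample line in the output has the form `metric_name{labels} value`; Prometheus attaches the scrape timestamp itself when it stores the sample.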
You can also configure the Prometheus server to generate alerts based on the collected data. However, Prometheus does not send alert notifications directly to humans. Instead, it forwards the raw alerts to the Prometheus Alertmanager, which runs as a separate service. The Alertmanager may receive alerts from multiple (or all) Prometheus servers in the organization, and provides a central place to group, aggregate, and route those alerts. Finally, it sends out notifications via email, Slack, PagerDuty, or other notification services.
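The Alertmanager accepts alerts over an HTTP API, and the sketch below pushes a hand-crafted test alert the same way a Prometheus server would. It assumes a default Alertmanager listening on localhost:9093; the alert labels and annotations are made up for illustration:

```python
# Sketch: post a test alert to a local Alertmanager (assumed to listen on
# localhost:9093, its default port) via its v2 API, mimicking what a
# Prometheus server does when an alerting rule fires.
import json
import urllib.request

alerts = [
    {
        "labels": {"alertname": "DemoHighErrorRate", "severity": "warning"},
        "annotations": {"summary": "Demo alert sent by hand"},
    }
]

request = urllib.request.Request(
    "http://localhost:9093/api/v2/alerts",
    data=json.dumps(alerts).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)  # 200 indicates the alert was accepted
```

In a real setup you would not push alerts by hand: Prometheus sends them automatically for firing alerting rules, and the Alertmanager's own configuration determines how they are grouped, routed, and turned into notifications.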