Microservice observability with metrics

Building observability into microservices externalizes the internal status of a system so that operations teams can monitor it more effectively. It is important that microservices are written to produce metrics that operations teams can use when the microservices are running in production. MicroProfile Metrics provides a /metrics endpoint from which you can access all metrics emitted by the Open Liberty server and deployed applications.

When the application is running, you can view your metrics from any browser by visiting, for example, https://localhost:9443/metrics. You can narrow the scope of the metric data by accessing the /metrics/base, /metrics/application, and /metrics/vendor endpoints. By default, metric data is emitted in Prometheus format. To retrieve metric data in JSON format instead, set the Accept header of your request to application/json. A GET request returns a list of metrics, and an OPTIONS request returns a list of metrics with their metadata.
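
As a sketch, the following standalone Java example builds such a request without sending it. The host and port are the Open Liberty defaults mentioned above, and the class name is illustrative:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class MetricsRequest {

    // Builds a GET request for a metrics endpoint, asking for JSON output
    // by setting the Accept header. The host and port are the Open Liberty
    // defaults used earlier in this article.
    static HttpRequest jsonMetricsRequest(String scope) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:9443/metrics" + scope))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = jsonMetricsRequest("/application");
        System.out.println(request.method() + " " + request.uri());
        // prints "GET https://localhost:9443/metrics/application"
    }
}
```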

Operations teams can gather the metrics and store them in a database by using tools like Prometheus. They can then visualize the metrics in dashboards, such as Grafana, to analyze the data.
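
As an illustration, a minimal Prometheus scrape configuration for this setup might look like the following. The job name is arbitrary and the target assumes the default Open Liberty HTTPS port; a real configuration would also need TLS settings or credentials for a secured endpoint:

```yaml
scrape_configs:
  - job_name: 'openliberty'
    scheme: https
    metrics_path: '/metrics'
    static_configs:
      - targets: ['localhost:9443']
```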

Adding metrics to your applications

To add metrics to your applications, you must create and register metrics with the application registry so that they are known to the system and can be reported on from the /metrics endpoint. The easiest way to add metrics to your application is by using metrics annotations. MicroProfile Metrics defines annotations that enable you to quickly build metrics into your code. These metrics ultimately provide transparency for operations teams into how services are running.
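
Conceptually, a registry maps metric names to metric instances, and the /metrics endpoint reports from it. The following plain-Java sketch is illustrative only, not the MicroProfile MetricRegistry API; it shows the idea of registering a counter once and reading every registered metric back for reporting:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative stand-in for a metric registry: the real MicroProfile
// MetricRegistry is richer, but the registration idea is the same.
public class SimpleRegistry {

    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Registers the counter on first use and returns the same instance thereafter.
    LongAdder counter(String name) {
        return counters.computeIfAbsent(name, n -> new LongAdder());
    }

    // What an endpoint like /metrics does conceptually: report every registered metric.
    String report() {
        StringBuilder out = new StringBuilder();
        counters.forEach((name, value) ->
                out.append(name).append(' ').append(value.sum()).append('\n'));
        return out.toString();
    }

    public static void main(String[] args) {
        SimpleRegistry registry = new SimpleRegistry();
        registry.counter("noDonation").increment();
        registry.counter("noDonation").increment();
        System.out.print(registry.report()); // prints "noDonation 2"
    }
}
```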

Metrics and annotations

The following examples describe several types of metrics and how their corresponding annotations are used.


Counters

A counter metric is used to keep an incremental count. The initial value of the counter is set to 0, and the metric increments each time the annotated element is invoked.


@Counted

This annotation is used for marking a method, constructor, or type as a counter. The counter increments monotonically, counting total invocations of the annotated method:

@Counted(name="no", displayName="No donation count", description="Number of people that declined to donate.")
public String noDonation() {
    return "Maybe next time!";
}

Timers

A timer metric aggregates timing durations, in nanoseconds, and provides duration and throughput statistics.


@Timed

This annotation is used for marking a constructor or method as a timer. The timer tracks how frequently the annotated object is started and how long the invocations take to complete:

@Timed(
    displayName="Donations Via Credit Cards",
    description = "Donations that were made using a credit card")
public String donateAmountViaCreditCard(@FormParam("amount") Long amount, @FormParam("card") String card) {

    if (processCard(card, amount))
        return "Thanks for donating!";

    return "Sorry, please try again.";
}

Simple timers

A simple timer metric tracks the elapsed timing duration and invocation counts.

This type of metric is available beginning in MicroProfile Metrics 2.3. The simple timer is a lightweight alternative to the performance-heavy timer metric.
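
To make the semantics concrete, the following plain-Java sketch (illustrative only, not the MicroProfile implementation) shows the two values a simple timer maintains, an invocation count and a running total of elapsed time, with no percentile or rate statistics:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative model of a simple timer: one count and one running total
// of elapsed nanoseconds.
public class SimpleTimerSketch {

    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();

    // Times a task the way an annotated method invocation would be timed.
    void time(Runnable task) {
        long start = System.nanoTime();
        try {
            task.run();
        } finally {
            count.incrementAndGet();
            totalNanos.addAndGet(System.nanoTime() - start);
        }
    }

    long getCount() { return count.get(); }
    long getTotalNanos() { return totalNanos.get(); }

    public static void main(String[] args) {
        SimpleTimerSketch timer = new SimpleTimerSketch();
        timer.time(() -> { /* simulated work */ });
        timer.time(() -> { /* simulated work */ });
        System.out.println("count=" + timer.getCount());
    }
}
```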


@SimplyTimed

This annotation is used for marking a method, constructor, or type as a simple timer. The simple timer tracks how frequently the annotated object is started and how long the invocations take to complete:

@SimplyTimed(name = "weatherSimplyTimed", displayName="Weather data", description="Provides weather data in JSON")
public JSON getWeatherData() {
    // ...
}


Meters

A meter metric is used to track throughput. This metric provides the following information:

  • Mean throughput

  • One/five/fifteen minute exponentially weighted moving average throughput

  • A count of the number of measurements
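
The moving-average rates are exponentially weighted: each update blends the newest observation with the running average, so recent activity counts more than old activity. A minimal sketch of that calculation (the constants are illustrative; the real meter uses the algorithm of the underlying metrics library):

```java
// Illustrative exponentially weighted moving average: alpha controls how
// quickly old observations fade (higher alpha forgets faster).
public class EwmaSketch {

    private final double alpha;
    private double average;
    private boolean initialized;

    EwmaSketch(double alpha) {
        this.alpha = alpha;
    }

    // Blends the new observation into the running average.
    double update(double observation) {
        if (!initialized) {
            average = observation;   // seed with the first observation
            initialized = true;
        } else {
            average += alpha * (observation - average);
        }
        return average;
    }

    public static void main(String[] args) {
        EwmaSketch ewma = new EwmaSketch(0.5);
        System.out.println(ewma.update(10.0)); // prints 10.0
        System.out.println(ewma.update(20.0)); // prints 15.0
    }
}
```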


@Metered

This annotation is used for marking a constructor or method as a meter. The meter counts the invocations of the annotated constructor or method and tracks how frequently they are called:

@Metered(displayName="Rate of donations", description="Rate of incoming donations (the instances not the amount)")
public void addDonation(Long amount) {
    totalDonations += amount;
}


Gauges

A gauge metric is implemented by the developer and is sampled on demand to obtain its current value. For example, you might use a gauge to measure CPU temperature or disk usage.


@Gauge

This annotation is used for marking a method as a gauge:

@Gauge(
    displayName="Total Donations",
    description="Total amount of money raised for charity!",
    unit = "dollars")
public Long getTotalDonations(){
    return totalDonations;
}

Concurrent gauges

A concurrent gauge metric is used to keep a count of concurrent invocations of an annotated element. This metric also tracks the high and low watermarks of each invocation. For each invocation of an annotated element, the count increments upon entry and decrements upon exit.
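
The following plain-Java sketch (illustrative, not the MicroProfile implementation) shows the bookkeeping just described: a current count that moves on entry and exit, plus a high watermark:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative model of a concurrent gauge: tracks in-flight invocations
// and remembers the highest count seen (the high watermark).
public class ConcurrentGaugeSketch {

    private final AtomicInteger current = new AtomicInteger();
    private final AtomicInteger max = new AtomicInteger();

    void enter() {
        int now = current.incrementAndGet();
        max.accumulateAndGet(now, Math::max); // update the high watermark
    }

    void exit() {
        current.decrementAndGet();
    }

    int currentCount() { return current.get(); }
    int highWatermark() { return max.get(); }

    public static void main(String[] args) {
        ConcurrentGaugeSketch gauge = new ConcurrentGaugeSketch();
        gauge.enter();
        gauge.enter();   // two invocations in flight
        gauge.exit();
        System.out.println(gauge.currentCount() + " " + gauge.highWatermark());
        // prints "1 2"
    }
}
```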


@ConcurrentGauge

This annotation is used for marking a method as a concurrent gauge. The concurrent gauge increments when the annotated method is called and decrements when the annotated method returns, counting current invocations of the annotated method:

@ConcurrentGauge(name = "liveStreamViewers", displayName="Donation live stream viewers", description="Number of active viewers for the donation live stream")
public void donationLiveStream() {
    // ...
}

These types of metrics are available to add to your applications to make them observable. In production, operations teams can use these metrics to monitor the application, along with metrics that are automatically emitted from the JVM and the Open Liberty server runtime. If you’re interested in learning more about using MicroProfile Metrics to build observability into your microservices, see the Open Liberty guide for Providing metrics from a microservice.