Metrics reference list
The metrics reference list includes all the metrics that are available for Open Liberty. Use metric data to effectively monitor the status of your microservice systems.
Metrics are emitted from several sources: you can obtain them from applications, the Open Liberty runtime, and the Java virtual machine (JVM). They can be gathered and stored in database tools, such as Prometheus, and displayed on dashboards, such as Grafana. For more information about building observability into your applications, see Microservice observability with metrics.
Some metric names changed between MicroProfile Metrics 1.0 and MicroProfile Metrics 2.0. See Differences between MicroProfile Metrics 1.0 and 2.0 if you need to map MicroProfile Metrics 2.0 metrics to their MicroProfile Metrics 1.0 equivalents.
MicroProfile Metrics 2.3 enables the Performance Monitoring feature for you. If you use a version of the MicroProfile Metrics feature earlier than 2.3, you must also enable the Performance Monitoring feature.
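For example, a minimal server.xml sketch for a configuration that uses a MicroProfile Metrics version earlier than 2.3 might list both features in the feature manager. The feature versions shown here are illustrative; substitute the versions that your server uses.

```xml
<server description="Metrics example">
    <featureManager>
        <!-- MicroProfile Metrics versions earlier than 2.3 do not enable the
             Performance Monitoring (monitor-1.0) feature for you, so both
             features are listed here. -->
        <feature>mpMetrics-2.2</feature>
        <feature>monitor-1.0</feature>
    </featureManager>
</server>
```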
The following table lists each metric with its Prometheus name, type, units, and description. Units are shown with both the metric name and the Prometheus name because they can differ: the Prometheus exporter scales metrics to a base unit. For example, the ft.<name>.bulkhead.executionDuration metric is recorded internally in nanoseconds, but the Prometheus output is in seconds. If no unit is listed next to a metric name, no units are associated with that metric. The last column of the table lists the feature or features that must be enabled to obtain that metric data.
mpMetrics-2.x name | mpMetrics-2.x Prometheus name(s) | Type | Description | Feature(s) required |
---|---|---|---|---|
classloader.loadedClasses.count | base_classloader_loadedClasses_count | Gauge | The number of classes that are currently loaded in the JVM. | |
classloader.loadedClasses.total | base_classloader_loadedClasses_total | Counter | The total number of classes that were loaded since the JVM started. | |
classloader.unloadedClasses.total | base_classloader_unloadedClasses_total | Counter | The total number of classes that were unloaded since the JVM started. | |
connectionpool.connectionHandles{datasource=<datasourceName>} | vendor_connectionpool_connectionHandles{datasource=<dataSourceName>} | Gauge | The number of connections that are in use. This number might include multiple connections that are shared from a single managed connection. | |
connectionpool.create.total{datasource=<datasourceName>} | vendor_connectionpool_create_total{datasource=<dataSourceName>} | Counter | The total number of managed connections that were created since the pool was created. | |
connectionpool.destroy.total{datasource=<datasourceName>} | vendor_connectionpool_destroy_total{datasource=<dataSourceName>} | Counter | The total number of managed connections that were destroyed since the pool was created. | |
connectionpool.freeConnections{datasource=<datasourceName>} | vendor_connectionpool_freeConnections{datasource=<dataSourceName>} | Gauge | The number of managed connections in the free pool. | |
connectionpool.inUseTime.total{datasource=<datasourceName>} / (milliseconds) | vendor_connectionpool_inUseTime_total_seconds{datasource=<dataSourceName>} / (seconds) | Gauge | The total time that all connections were in use since the start of the server. | |
connectionpool.managedConnections{datasource=<datasourceName>} | vendor_connectionpool_managedConnections{datasource=<dataSourceName>} | Gauge | The current sum of managed connections in the free, shared, and unshared pools. | |
connectionpool.queuedRequests.total{datasource=<datasourceName>} | vendor_connectionpool_queuedRequests_total{datasource=<dataSourceName>} | Counter | The total number of connection requests that waited for a connection because of a full connection pool since the start of the server. | |
connectionpool.usedConnections.total{datasource=<datasourceName>} | vendor_connectionpool_usedConnections_total{datasource=<dataSourceName>} | Counter | The total number of connection requests since the start of the server, whether they waited because of a full connection pool or did not wait. Connections that are currently in use are not included in this total. | |
connectionpool.waitTime.total{datasource=<datasourceName>} / (milliseconds) | vendor_connectionpool_waitTime_total_seconds{datasource=<dataSourceName>} / (seconds) | Gauge | The total wait time on all connection requests since the start of the server. | |
cpu.availableProcessors | base_cpu_availableProcessors | Gauge | The number of processors available to the JVM. | |
cpu.processCpuLoad / (percent) | base_cpu_processCpuLoad_percent / (percent) | Gauge | The recent CPU usage for the JVM process. | |
cpu.processCpuTime / (nanoseconds) | base_cpu_processCpuTime_seconds / (seconds) | Gauge | The CPU time for the JVM process. | |
cpu.systemLoadAverage | base_cpu_systemLoadAverage | Gauge | The system load average for the last minute. If the system load average is not available, a negative value is displayed. | |
ft.<name>.bulkhead.callsAccepted.total | application_ft_<name>_bulkhead_callsAccepted_total | Counter | The number of calls accepted by the bulkhead. This metric is available when you use the @Bulkhead annotation. | |
ft.<name>.bulkhead.callsRejected.total | application_ft_<name>_bulkhead_callsRejected_total | Counter | The number of calls rejected by the bulkhead. This metric is available when you use the @Bulkhead annotation. | |
ft.<name>.bulkhead.concurrentExecutions | application_ft_<name>_bulkhead_concurrentExecutions | Gauge<long> | The number of concurrently running executions. This metric is available when you use the @Bulkhead annotation. | |
ft.<name>.bulkhead.executionDuration / (nanoseconds) | application_ft_<name>_bulkhead_executionDuration_mean_seconds application_ft_<name>_bulkhead_executionDuration_max_seconds application_ft_<name>_bulkhead_executionDuration_min_seconds application_ft_<name>_bulkhead_executionDuration_stddev_seconds application_ft_<name>_bulkhead_executionDuration_seconds_count application_ft_<name>_bulkhead_executionDuration_seconds{quantile="0.5"} application_ft_<name>_bulkhead_executionDuration_seconds{quantile="0.75"} application_ft_<name>_bulkhead_executionDuration_seconds{quantile="0.95"} application_ft_<name>_bulkhead_executionDuration_seconds{quantile="0.98"} application_ft_<name>_bulkhead_executionDuration_seconds{quantile="0.99"} application_ft_<name>_bulkhead_executionDuration_seconds{quantile="0.999"} / (seconds) | Histogram | A histogram of the time that method executions spend holding a semaphore permit or using one of the threads from the thread pool. This metric is available when you use the @Bulkhead annotation. | |
ft.<name>.bulkhead.waiting.duration / (nanoseconds) | application_ft_<name>_bulkhead_waitingDuration_mean_seconds application_ft_<name>_bulkhead_waitingDuration_max_seconds application_ft_<name>_bulkhead_waitingDuration_min_seconds application_ft_<name>_bulkhead_waitingDuration_stddev_seconds application_ft_<name>_bulkhead_waitingDuration_seconds_count application_ft_<name>_bulkhead_waitingDuration_seconds{quantile="0.5"} application_ft_<name>_bulkhead_waitingDuration_seconds{quantile="0.75"} application_ft_<name>_bulkhead_waitingDuration_seconds{quantile="0.95"} application_ft_<name>_bulkhead_waitingDuration_seconds{quantile="0.98"} application_ft_<name>_bulkhead_waitingDuration_seconds{quantile="0.99"} application_ft_<name>_bulkhead_waitingDuration_seconds{quantile="0.999"} / (seconds) | Histogram | A histogram of the time that method executions spend waiting in the queue. This metric is available when you use the @Bulkhead annotation. | |
ft.<name>.bulkhead.waitingQueue.population | application_ft_<name>_bulkhead_waitingQueue_population | Gauge<long> | The number of executions currently waiting in the queue. This metric is available when you use the @Bulkhead annotation. | |
ft.<name>.circuitbreaker.callsFailed.total | application_ft_<name>_circuitbreaker_callsFailed_total | Counter | The number of calls that ran and were considered a failure by the circuit breaker. This metric is available when you use the @CircuitBreaker annotation. | |
ft.<name>.circuitbreaker.callsPrevented.total | application_ft_<name>_circuitbreaker_callsPrevented_total | Counter | The number of calls that the circuit breaker prevented from running. This metric is available when you use the @CircuitBreaker annotation. | |
ft.<name>.circuitbreaker.callsSucceeded.total | application_ft_<name>_circuitbreaker_callsSucceeded_total | Counter | The number of calls that ran and were considered a success by the circuit breaker. This metric is available when you use the @CircuitBreaker annotation. | |
ft.<name>.circuitbreaker.closed.total / (nanoseconds) | application_ft_<name>_circuitbreaker_closed_total / (nanoseconds) | Gauge<long> | The amount of time that the circuit breaker spent in the closed state. This metric is available when you use the @CircuitBreaker annotation. | |
ft.<name>.circuitbreaker.halfOpen.total / (nanoseconds) | application_ft_<name>_circuitbreaker_halfOpen_total / (nanoseconds) | Gauge<long> | The amount of time that the circuit breaker spent in the half-open state. This metric is available when you use the @CircuitBreaker annotation. | |
ft.<name>.circuitbreaker.open.total / (nanoseconds) | application_ft_<name>_circuitbreaker_open_total / (nanoseconds) | Gauge<long> | The amount of time that the circuit breaker spent in the open state. This metric is available when you use the @CircuitBreaker annotation. | |
ft.<name>.circuitbreaker.opened.total | application_ft_<name>_circuitbreaker_opened_total | Counter | The number of times that the circuit breaker moved from the closed state to the open state. This metric is available when you use the @CircuitBreaker annotation. | |
ft.<name>.fallback.calls.total | application_ft_<name>_fallback_calls_total | Counter | The number of times the fallback handler or method was called. This metric is available when you use the @Fallback annotation. | |
ft.<name>.invocations.failed.total | application_ft_<name>_invocations_failed_total | Counter | The number of times that a method was called and threw a Throwable. This metric is available when you use any fault tolerance annotation. | |
ft.<name>.invocations.total | application_ft_<name>_invocations_total | Counter | The number of times the method was called. This metric is available when you use any fault tolerance annotation. | |
ft.<name>.retry.callsFailed.total | application_ft_<name>_retry_callsFailed_total | Counter | The number of times the method was called and ultimately failed after retrying. This metric is available when you use the @Retry annotation. | |
ft.<name>.retry.callsSucceededNotRetried.total | application_ft_<name>_retry_callsSucceededNotRetried_total | Counter | The number of times the method was called and succeeded without retrying. This metric is available when you use the @Retry annotation. | |
ft.<name>.retry.callsSucceededRetried.total | application_ft_<name>_retry_callsSucceededRetried_total | Counter | The number of times the method was called and succeeded after retrying at least once. This metric is available when you use the @Retry annotation. | |
ft.<name>.retry.retries.total | application_ft_<name>_retry_retries_total | Counter | The number of times the method was retried. This metric is available when you use the @Retry annotation. | |
ft.<name>.timeout.callsNotTimedOut.total | application_ft_<name>_timeout_callsNotTimedOut_total | Counter | The number of times the method completed without timing out. This metric is available when you use the @Timeout annotation. | |
ft.<name>.timeout.callsTimedOut.total | application_ft_<name>_timeout_callsTimedOut_total | Counter | The number of times the method timed out. This metric is available when you use the @Timeout annotation. | |
ft.<name>.timeout.executionDuration / (nanoseconds) | application_ft_<name>_timeout_executionDuration_mean_seconds application_ft_<name>_timeout_executionDuration_max_seconds application_ft_<name>_timeout_executionDuration_min_seconds application_ft_<name>_timeout_executionDuration_stddev_seconds application_ft_<name>_timeout_executionDuration_seconds_count application_ft_<name>_timeout_executionDuration_seconds{quantile="0.5"} application_ft_<name>_timeout_executionDuration_seconds{quantile="0.75"} application_ft_<name>_timeout_executionDuration_seconds{quantile="0.95"} application_ft_<name>_timeout_executionDuration_seconds{quantile="0.98"} application_ft_<name>_timeout_executionDuration_seconds{quantile="0.99"} application_ft_<name>_timeout_executionDuration_seconds{quantile="0.999"} / (seconds) | Histogram | A histogram of the execution time for the method. This metric is available when you use the @Timeout annotation. | |
gc.time{name=<gcName>} / (milliseconds) | base_gc_time_seconds{name="<gcType>"} / (seconds) | Gauge | The approximate accumulated garbage collection elapsed time. This metric displays -1 if the collection elapsed time is undefined for this garbage collector. | |
gc.total{name=<gcName>} | base_gc_total{name="<gcType>"} | Counter | The number of garbage collections that occurred. This metric displays -1 if the collection count is undefined for this garbage collector. | |
grpc.client.receivedMessages.total{grpc=<method_signature>} | vendor_grpc_client_receivedMessages_total | Counter | The number of stream messages received from the server. | |
grpc.client.responseTime.total{grpc=<method_signature>} / (milliseconds) | vendor_grpc_client_responseTime_total_seconds / (seconds) | Gauge | The response time of completed RPCs. | |
grpc.client.rpcCompleted.total{grpc=<method_signature>} | vendor_grpc_client_rpcCompleted_total | Counter | The number of RPCs completed on the client, regardless of success or failure. | |
grpc.client.rpcStarted.total{grpc=<method_signature>} | vendor_grpc_client_rpcStarted_total | Counter | The number of RPCs started on the client. | |
grpc.client.sentMessages.total{grpc=<method_signature>} | vendor_grpc_client_sentMessages_total | Counter | The number of stream messages sent by the client. | |
grpc.server.receivedMessages.total{grpc=<service_name>} | vendor_grpc_server_receivedMessages_total | Counter | The number of stream messages received from the client. | |
grpc.server.responseTime.total{grpc=<service_name>} / (milliseconds) | vendor_grpc_server_responseTime_total_seconds / (seconds) | Gauge | The response time of completed RPCs. | |
grpc.server.rpcCompleted.total{grpc=<service_name>} | vendor_grpc_server_rpcCompleted_total | Counter | The number of RPCs completed on the server, regardless of success or failure. | |
grpc.server.rpcStarted.total{grpc=<service_name>} | vendor_grpc_server_rpcStarted_total | Counter | The number of RPCs started on the server. | |
grpc.server.sentMessages.total{grpc=<service_name>} | vendor_grpc_server_sentMessages_total | Counter | The number of stream messages sent by the server. | |
jaxws.client.checkedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_checkedApplicationFaults_total{endpoint=<endpointName>} | Counter | The number of checked application faults. | |
jaxws.client.invocations.total{endpoint=<endpointName>} | vendor_jaxws_client_invocations_total{endpoint=<endpointName>} | Counter | The number of invocations to this endpoint or operation. | |
jaxws.client.logicalRuntimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_logicalRuntimeFaults_total{endpoint=<endpointName>} | Counter | The number of logical runtime faults. | |
jaxws.client.responseTime.total{endpoint=<endpointName>} / (milliseconds) | vendor_jaxws_client_responseTime_total_seconds{endpoint=<endpointName>} / (seconds) | Gauge | The total response handling time since the start of the server. | |
jaxws.client.runtimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_runtimeFaults_total{endpoint=<endpointName>} | Counter | The number of runtime faults. | |
jaxws.client.uncheckedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_uncheckedApplicationFaults_total{endpoint=<endpointName>} | Counter | The number of unchecked application faults. | |
jaxws.server.checkedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_checkedApplicationFaults_total{endpoint=<endpointName>} | Counter | The number of checked application faults. | |
jaxws.server.invocations.total{endpoint=<endpointName>} | vendor_jaxws_server_invocations_total{endpoint=<endpointName>} | Counter | The number of invocations to this endpoint or operation. | |
jaxws.server.logicalRuntimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_logicalRuntimeFaults_total{endpoint=<endpointName>} | Counter | The number of logical runtime faults. | |
jaxws.server.responseTime.total{endpoint=<endpointName>} / (milliseconds) | vendor_jaxws_server_responseTime_total_seconds{endpoint=<endpointName>} / (seconds) | Gauge | The total response handling time since the start of the server. | |
jaxws.server.runtimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_runtimeFaults_total{endpoint=<endpointName>} | Counter | The number of runtime faults. | |
jaxws.server.uncheckedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_uncheckedApplicationFaults_total{endpoint=<endpointName>} | Counter | The number of unchecked application faults. | |
jvm.uptime / (milliseconds) | base_jvm_uptime_seconds / (seconds) | Gauge | The time elapsed since the start of the JVM. | |
memory.committedHeap / (bytes) | base_memory_committedHeap_bytes / (bytes) | Gauge | The amount of memory that is committed for the JVM to use. | |
memory.maxHeap / (bytes) | base_memory_maxHeap_bytes / (bytes) | Gauge | The maximum amount of heap memory that can be used for memory management. This metric displays -1 if the maximum heap memory size is undefined. | |
memory.usedHeap / (bytes) | base_memory_usedHeap_bytes / (bytes) | Gauge | The amount of used heap memory. | |
REST.request | base_REST_request_total{class="<fully_qualified_class_name>",method="<method_signature>"} | Simple Timer | The number of invocations and total response time of the RESTful resource method since the start of the server. This metric is available in MicroProfile Metrics 2.3 and later. | |
servlet.request.total{servlet=<servletName>} | vendor_servlet_request_total{servlet=<servletName>} | Counter | The total number of visits to this servlet since the start of the server. | |
servlet.responseTime.total{servlet=<servletName>} / (nanoseconds) | vendor_servlet_responseTime_total_seconds / (seconds) | Gauge | The total response time of this servlet since the start of the server. | |
session.activeSessions{appname=<appName>} | vendor_session_activeSessions{appname=<appName>} | Gauge | The number of concurrently active sessions. A session is considered active if the application server is processing a request that uses that user session. | |
session.create.total{appname=<appName>} | vendor_session_create_total{appname=<appName>} | Gauge | The number of sessions that logged in since this metric was enabled. | |
session.invalidated.total{appname=<appName>} | vendor_session_invalidated_total{appname=<appName>} | Counter | The number of sessions that logged out since this metric was enabled. | |
session.invalidatedbyTimeout.total{appname=<appName>} | vendor_session_invalidatedbyTimeout_total{appname=<appName>} | Counter | The number of sessions that logged out because of a timeout since this metric was enabled. | |
session.liveSessions{appname=<appName>} | vendor_session_liveSessions{appname=<appName>} | Gauge | The number of users that are currently logged in since this metric was enabled. | |
thread.count | base_thread_count | Gauge | The current number of live threads, including both daemon and non-daemon threads. | |
thread.daemon.count | base_thread_daemon_count | Gauge | The current number of live daemon threads. | |
thread.max.count | base_thread_max_count | Gauge | The peak live thread count since the JVM started or the peak was reset. This thread count includes both daemon and non-daemon threads. | |
threadpool.activeThreads{pool=<poolName>} | vendor_threadpool_activeThreads{pool="<poolName>"} | Gauge | The number of threads that are actively running tasks. | |
threadpool.size{pool=<poolName>} | vendor_threadpool_size{pool="<poolName>"} | Gauge | The size of the thread pool. | |
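The ft.* metrics in this table come from MicroProfile Fault Tolerance annotations. The following Java sketch is a hypothetical example, not taken from this reference: the package, class, and method names are invented, and it assumes that MicroProfile Fault Tolerance is enabled alongside MicroProfile Metrics. In the resulting metric names, <name> is the fully qualified name of the annotated method.

```java
package com.example;

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Bulkhead;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class StockClient {

    // Calls to this method produce ft.com.example.StockClient.getQuote.* metrics:
    // the invocations, retry, timeout, and fallback entries from the table.
    @Retry(maxRetries = 3)
    @Timeout(500) // value is in milliseconds
    @Fallback(fallbackMethod = "cachedQuote")
    public String getQuote(String symbol) {
        return remoteLookup(symbol); // hypothetical outbound call that might fail or time out
    }

    // Calls to this method produce ft.com.example.StockClient.listSymbols.bulkhead.* metrics.
    @Bulkhead(10)
    public String listSymbols() {
        return remoteLookup("*");
    }

    // Fallback method; its signature must match getQuote.
    public String cachedQuote(String symbol) {
        return "cached:" + symbol;
    }

    private String remoteLookup(String symbol) {
        // Placeholder for a real remote call, for example through a REST client.
        return "quote:" + symbol;
    }
}
```

For example, each call to getQuote() increments ft.com.example.StockClient.getQuote.invocations.total, and any call that times out also increments the corresponding timeout.callsTimedOut.total counter.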