Metrics reference list
The metrics reference list includes all the metrics that are available for Open Liberty. Use metric data to effectively monitor the status of your microservice systems.
Metrics are emitted from a number of different places. You can obtain them from applications, the Open Liberty runtime, and the Java virtual machine (JVM). They can be gathered and stored in database tools, such as Prometheus, and displayed on dashboards, such as Grafana. For more information about building observability into your applications, see Microservice observability with metrics.
The MicroProfile Metrics 2.3 feature explicitly enables the Performance Monitoring feature. If you use a version of the MicroProfile Metrics feature that is earlier than 2.3, you must also enable the Performance Monitoring feature separately.
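For example, a server.xml configuration similar to the following sketch enables an earlier MicroProfile Metrics version together with the Performance Monitoring feature. The feature versions shown are illustrative; use the versions that match your Open Liberty release.

```xml
<server description="Liberty server with metrics enabled">
    <featureManager>
        <!-- Versions of the MicroProfile Metrics feature earlier than 2.3 do not
             enable the Performance Monitoring (monitor-1.0) feature automatically,
             so both features are listed here. -->
        <feature>mpMetrics-2.2</feature>
        <feature>monitor-1.0</feature>
    </featureManager>
</server>
```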
In the following table, the Prometheus metric name is listed after each metric name.
The table also lists the metric type, metric units, and a description of every metric that is available for Open Liberty.
Metric units are included with both the metric names and the Prometheus names because the units can differ: the Prometheus exporter scales each metric to a base unit.
For example, while the ft.bulkhead.runningDuration metric is recorded internally in nanoseconds, the Prometheus output is in seconds, as the sample output after this paragraph illustrates.
If no unit is listed next to the metric name, no units are associated with that metric.
The Feature(s) required column lists the feature or features that must be enabled to obtain that metric data.
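To see the Prometheus naming and unit scaling in practice, the following sketch shows a hypothetical request to the /metrics endpoint that MicroProfile Metrics provides. The host, port, credentials, and metric values are placeholders, and the exact output depends on your server configuration.

```
# Hypothetical request; adjust the host, port, and credentials for your server.
$ curl -k -u admin:adminpwd https://localhost:9443/metrics

# jvm.uptime is recorded in milliseconds but is exported in seconds:
# TYPE base_jvm_uptime_seconds gauge
base_jvm_uptime_seconds 624.417
# TYPE vendor_threadpool_activeThreads gauge
vendor_threadpool_activeThreads{pool="Default Executor"} 2
```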
MicroProfile Metrics 3.0 name | MicroProfile Metrics 3.0 Prometheus name(s) | Type and description | Feature(s) required | Version introduced |
---|---|---|---|---|
classloader.loadedClasses.count | base_classloader_loadedClasses_count | The number of classes that are currently loaded in the JVM. This metric is a gauge. | | |
classloader.loadedClasses.total | base_classloader_loadedClasses_total | The total number of classes that were loaded since the JVM started. This metric is a counter. | | |
classloader.unloadedClasses.total | base_classloader_unloadedClasses_total | The total number of classes that were unloaded since the JVM started. This metric is a counter. | | |
connectionpool.connectionHandles{datasource=<datasourceName>} | vendor_connectionpool_connectionHandles{datasource=<dataSourceName>} | The number of connections that are in use. This number might include multiple connections that are shared from a single managed connection. This metric is a gauge. | | |
connectionpool.create.total{datasource=<datasourceName>} | vendor_connectionpool_create_total{datasource=<dataSourceName>} | The total number of managed connections that were created since the pool was created. This metric is a counter. | | |
connectionpool.destroy.total{datasource=<datasourceName>} | vendor_connectionpool_destroy_total{datasource=<dataSourceName>} | The total number of managed connections that were destroyed since the pool was created. This metric is a counter. | | |
connectionpool.freeConnections{datasource=<datasourceName>} | vendor_connectionpool_freeConnections{datasource=<dataSourceName>} | The number of managed connections in the free pool. This metric is a gauge. | | |
connectionpool.inUseTime.total{datasource=<datasourceName>} / (milliseconds) | vendor_connectionpool_inUseTime_total_seconds{datasource=<dataSourceName>} / (seconds) | The total time that all connections were in use since the start of the server. This metric is a gauge. | | |
connectionpool.managedConnections{datasource=<datasourceName>} | vendor_connectionpool_managedConnections{datasource=<dataSourceName>} | The current sum of managed connections in the free, shared, and unshared pools. This metric is a gauge. | | |
connectionpool.queuedRequests.total{datasource=<datasourceName>} | vendor_connectionpool_queuedRequests_total{datasource=<dataSourceName>} | The total number of connection requests that waited for a connection because of a full connection pool since the start of the server. This metric is a counter. | | |
connectionpool.usedConnections.total{datasource=<datasourceName>} | vendor_connectionpool_usedConnections_total{datasource=<dataSourceName>} | The total number of connection requests that either waited because of a full connection pool or did not wait, since the start of the server. Any connections that are currently in use are not included in this total. This metric is a counter. | | |
connectionpool.waitTime.total{datasource=<datasourceName>} / (milliseconds) | vendor_connectionpool_waitTime_total_seconds{datasource=<dataSourceName>} / (seconds) | The total wait time on all connection requests since the start of the server. This metric is a gauge. | | |
cpu.availableProcessors | base_cpu_availableProcessors | The number of processors available to the JVM. This metric is a gauge. | | |
cpu.processCpuLoad / (percent) | base_cpu_processCpuLoad_percent / (percent) | The recent CPU usage for the JVM process. This metric is a gauge. | | |
cpu.processCpuTime / (nanoseconds) | base_cpu_processCpuTime_seconds / (seconds) | The CPU time for the JVM process. This metric is a gauge. | | |
cpu.systemLoadAverage | base_cpu_systemLoadAverage | The system load average for the last minute. If the system load average is not available, a negative value is displayed. This metric is a gauge. | | |
ft.bulkhead.calls.total{ method="<name>", bulkheadResult=["accepted"|"rejected"] } | base_ft_bulkhead_calls_total{ method="<name>", bulkheadResult=["accepted"|"rejected"] } | The number of times that the bulkhead logic was run. This number is usually once per method call, but it might be zero if a circuit breaker prevents execution or more than once per method call if the method call is retried. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.bulkhead.executionsRunning{method="<name>"} | base_ft_bulkhead_executionsRunning{method="<name>"} | The number of currently running executions. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.bulkhead.executionsWaiting{method="<name>"} | base_ft_bulkhead_executionsWaiting{method="<name>"} | The number of executions that are currently waiting in the queue. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.bulkhead.runningDuration{method="<name>"} / (nanoseconds) | base_ft_bulkhead_runningDuration_min_seconds{method="<name>"} base_ft_bulkhead_runningDuration_max_seconds{method="<name>"} base_ft_bulkhead_runningDuration_mean_seconds{method="<name>"} base_ft_bulkhead_runningDuration_stddev_seconds{method="<name>"} base_ft_bulkhead_runningDuration_seconds_count{method="<name>"} base_ft_bulkhead_runningDuration_seconds_sum{method="<name>"} base_ft_bulkhead_runningDuration_seconds{ method="<name>", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / (seconds) | A histogram of the time that method executions spent running. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.bulkhead.waitingDuration{method="<name>"} / (nanoseconds) | base_ft_bulkhead_waitingDuration_min_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_max_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_mean_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_stddev_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_seconds_count{method="<name>"} base_ft_bulkhead_waitingDuration_seconds_sum{method="<name>"} base_ft_bulkhead_waitingDuration_seconds{ method="<name>", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / (seconds) | A histogram of the time that method executions spent waiting in the queue. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.circuitbreaker.calls.total{ method="<name>", circuitBreakerResult=["success"|"failure"|"circuitBreakerOpen"] } | base_ft_circuitbreaker_calls_total{ method="<name>", circuitBreakerResult=["success"|"failure"|"circuitBreakerOpen"] } | The number of times that the circuit breaker logic was run. This number is usually once per method call, but it might be more than once if the method call is retried. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.circuitbreaker.state.total{ method="<name>", state=["open"|"closed"|"halfOpen"] } / (nanoseconds) | base_ft_circuitbreaker_state_total_seconds{ method="<name>", state=["open"|"closed"|"halfOpen"] } / (seconds) | The amount of time that the circuit breaker spent in each state. These values increase monotonically. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.circuitbreaker.opened.total{method="<name>"} | base_ft_circuitbreaker_opened_total{method="<name>"} | The number of times that the circuit breaker moved from the closed state to the open state. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.invocations.total{ method="<name>", result=["valueReturned"|"exceptionThrown"], fallback=["applied"|"notApplied"|"notDefined"] } | base_ft_invocations_total{ method="<name>", result=["valueReturned"|"exceptionThrown"], fallback=["applied"|"notApplied"|"notDefined"] } | The number of times that the method was called. This metric is a counter. | | |
ft.retry.calls.total{ method="<name>", retried=["true"|"false"], retryResult=["valueReturned"|"exceptionNotRetryable"|"maxRetriesReached"|"maxDurationReached"] } | base_ft_retry_calls_total{ method="<name>", retried=["true"|"false"], retryResult=["valueReturned"|"exceptionNotRetryable"|"maxRetriesReached"|"maxDurationReached"] } | The number of times that the retry logic was run. This number is always once per method call. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.retry.retries.total{method="<name>"} | base_ft_retry_retries_total{method="<name>"} | The number of times that the method was retried. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.timeout.calls.total{ method="<name>", timedOut=["true"|"false"] } | base_ft_timeout_calls_total{ method="<name>", timedOut=["true"|"false"] } | The number of times that the timeout logic was run. This number is usually once per method call, but it might be zero if a circuit breaker prevents execution or more than once per method call if the method call is retried. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
ft.timeout.executionDuration{method="<name>"} / (nanoseconds) | base_ft_timeout_executionDuration_mean_seconds{method="<name>"} base_ft_timeout_executionDuration_max_seconds{method="<name>"} base_ft_timeout_executionDuration_min_seconds{method="<name>"} base_ft_timeout_executionDuration_stddev_seconds{method="<name>"} base_ft_timeout_executionDuration_seconds_count{method="<name>"} base_ft_timeout_executionDuration_seconds{ method="<name>", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / (seconds) | A histogram of the execution time for the method. This metric is available when you use the MicroProfile Fault Tolerance feature. | | |
gc.time{name=<gcName>} / (milliseconds) | base_gc_time_seconds{name="<gcType>"} / (seconds) | The approximate accumulated garbage collection elapsed time. This metric displays -1 if the collection elapsed time is undefined for this garbage collector. This metric is a gauge. | | |
gc.total{name=<gcName>} | base_gc_total{name="<gcType>"} | The number of garbage collections that occurred. This metric displays -1 if the collection count is undefined for this garbage collector. This metric is a counter. | | |
grpc.client.receivedMessages.total{grpc=<method_signature>} | vendor_grpc_client_receivedMessages_total | The number of stream messages received from the server. This metric is a counter. | | |
grpc.client.responseTime.total{grpc=<method_signature>} / (milliseconds) | vendor_grpc_client_responseTime_total_seconds / (seconds) | The response time of completed RPCs. This metric is a gauge. | | |
grpc.client.rpcCompleted.total{grpc=<method_signature>} | vendor_grpc_client_rpcCompleted_total | The number of RPCs completed on the client, regardless of success or failure. This metric is a counter. | | |
grpc.client.rpcStarted.total{grpc=<method_signature>} | vendor_grpc_client_rpcStarted_total | The number of RPCs started on the client. This metric is a counter. | | |
grpc.client.sentMessages.total{grpc=<method_signature>} | vendor_grpc_client_sentMessages_total | The number of stream messages sent by the client. This metric is a counter. | | |
grpc.server.receivedMessages.total{grpc=<service_name>} | vendor_grpc_server_receivedMessages_total | The number of stream messages received from the client. This metric is a counter. | | |
grpc.server.responseTime.total{grpc=<service_name>} / (milliseconds) | vendor_grpc_server_responseTime_total_seconds / (seconds) | The response time of completed RPCs. This metric is a gauge. | | |
grpc.server.rpcCompleted.total{grpc=<service_name>} | vendor_grpc_server_rpcCompleted_total | The number of RPCs completed on the server, regardless of success or failure. This metric is a counter. | | |
grpc.server.rpcStarted.total{grpc=<service_name>} | vendor_grpc_server_rpcStarted_total | The number of RPCs started on the server. This metric is a counter. | | |
grpc.server.sentMessages.total{grpc=<service_name>} | vendor_grpc_server_sentMessages_total | The number of stream messages sent by the server. This metric is a counter. | | |
jaxws.client.checkedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_checkedApplicationFaults_total{endpoint=<endpointName>} | The number of checked application faults. This metric is a counter. | | |
jaxws.client.invocations.total{endpoint=<endpointName>} | vendor_jaxws_client_invocations_total{endpoint=<endpointName>} | The number of invocations to this endpoint or operation. This metric is a counter. | | |
jaxws.client.logicalRuntimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_logicalRuntimeFaults_total{endpoint=<endpointName>} | The number of logical runtime faults. This metric is a counter. | | |
jaxws.client.responseTime.total{endpoint=<endpointName>} / (milliseconds) | vendor_jaxws_client_responseTime_total_seconds{endpoint=<endpointName>} / (seconds) | The total response handling time since the start of the server. This metric is a gauge. | | |
jaxws.client.runtimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_runtimeFaults_total{endpoint=<endpointName>} | The number of runtime faults. This metric is a counter. | | |
jaxws.client.uncheckedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_uncheckedApplicationFaults_total{endpoint=<endpointName>} | The number of unchecked application faults. This metric is a counter. | | |
jaxws.server.checkedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_checkedApplicationFaults_total{endpoint=<endpointName>} | The number of checked application faults. This metric is a counter. | | |
jaxws.server.invocations.total{endpoint=<endpointName>} | vendor_jaxws_server_invocations_total{endpoint=<endpointName>} | The number of invocations to this endpoint or operation. This metric is a counter. | | |
jaxws.server.logicalRuntimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_logicalRuntimeFaults_total{endpoint=<endpointName>} | The number of logical runtime faults. This metric is a counter. | | |
jaxws.server.responseTime.total{endpoint=<endpointName>} / (milliseconds) | vendor_jaxws_server_responseTime_total_seconds{endpoint=<endpointName>} / (seconds) | The total response handling time since the start of the server. This metric is a gauge. | | |
jaxws.server.runtimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_runtimeFaults_total{endpoint=<endpointName>} | The number of runtime faults. This metric is a counter. | | |
jaxws.server.uncheckedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_uncheckedApplicationFaults_total{endpoint=<endpointName>} | The number of unchecked application faults. This metric is a counter. | | |
jvm.uptime / (milliseconds) | base_jvm_uptime_seconds / (seconds) | The time elapsed since the start of the JVM. This metric is a gauge. | | |
memory.committedHeap / (bytes) | base_memory_committedHeap_bytes / (bytes) | The amount of memory that is committed for the JVM to use. This metric is a gauge. | | |
memory.maxHeap / (bytes) | base_memory_maxHeap_bytes / (bytes) | The maximum amount of heap memory that can be used for memory management. This metric displays -1 if the maximum heap memory size is undefined. This metric is a gauge. | | |
memory.usedHeap / (bytes) | base_memory_usedHeap_bytes / (bytes) | The amount of used heap memory. This metric is a gauge. | | |
REST.request | base_REST_request_total{class="<fully_qualified_class_name>",method="<method_signature>"} | The number of invocations and the total response time of this RESTful resource method since the server started. The metric does not record the count of invocations or the elapsed time if an unmapped exception occurs. This metric also tracks the highest and lowest recorded time durations within the previously completed full minute. This metric is a simple timer. | | |
REST.request.unmappedException.total | base_REST_request_unmappedException_total{class="<fully_qualified_class_name>",method="<method_signature>"} | The total number of unmapped exceptions that occurred in this RESTful resource method since the server started. This metric is a counter. | | |
servlet.request.total{servlet=<servletName>} | vendor_servlet_request_total{servlet=<servletName>} | The total number of visits to this servlet since the start of the server. This metric is a counter. | | |
servlet.responseTime.total{servlet=<servletName>} / (nanoseconds) | vendor_servlet_responseTime_total_seconds / (seconds) | The total servlet response time since the start of the server. This metric is a gauge. | | |
session.activeSessions{appname=<appName>} | vendor_session_activeSessions{appname=<appName>} | The number of concurrently active sessions. A session is considered active if the application server is processing a request that uses that user session. This metric is a gauge. | | |
session.create.total{appname=<appName>} | vendor_session_create_total{appname=<appName>} | The number of sessions that logged in since this metric was enabled. This metric is a counter. | | |
session.invalidated.total{appname=<appName>} | vendor_session_invalidated_total{appname=<appName>} | The number of sessions that logged out since this metric was enabled. This metric is a counter. | | |
session.invalidatedbyTimeout.total{appname=<appName>} | vendor_session_invalidatedbyTimeout_total{appname=<appName>} | The number of sessions that logged out because of a timeout since this metric was enabled. This metric is a counter. | | |
session.liveSessions{appname=<appName>} | vendor_session_liveSessions{appname=<appName>} | The number of users that are currently logged in, counted from when this metric was enabled. This metric is a gauge. | | |
thread.count | base_thread_count | The current number of live threads, including both daemon and non-daemon threads. This metric is a gauge. | | |
thread.daemon.count | base_thread_daemon_count | The current number of live daemon threads. This metric is a gauge. | | |
thread.max.count | base_thread_max_count | The peak live thread count since the JVM started or since the peak was reset. This thread count includes both daemon and non-daemon threads. This metric is a gauge. | | |
threadpool.activeThreads{pool=<poolName>} | vendor_threadpool_activeThreads{pool="<poolName>"} | The number of threads that are actively running tasks. This metric is a gauge. | | |
threadpool.size{pool=<poolName>} | vendor_threadpool_size{pool="<poolName>"} | The size of the thread pool. This metric is a gauge. | | |