MicroProfile Metrics reference list
The metrics reference tables list and describe all the metrics that are available for Open Liberty. Use metric data to effectively monitor the status of your microservice systems.
You can obtain metrics from applications, the Open Liberty runtime, and the Java virtual machine (JVM). They can be gathered and stored in database tools, such as Prometheus, and displayed on dashboards, such as Grafana. For more information about building observability into your applications with MicroProfile Metrics, see Microservice observability with metrics. For more information about integrating MicroProfile Metrics 5.0 with Micrometer to send metric data to third-party monitoring systems, see Choose your own monitoring tools with MicroProfile Metrics.
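For example, after the MicroProfile Metrics feature is enabled, you can retrieve the current metric data in Prometheus format with a plain HTTP request. The following command is a minimal sketch that assumes the default HTTPS port of 9443 and that authentication is not required for the /metrics endpoint:
curl -k https://localhost:9443/metrics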
MicroProfile Metrics base and vendor metrics
When the MicroProfile Metrics feature is enabled, a set of base metrics is always reported. You can augment this set of metrics by collecting vendor metrics, which are available from different monitoring components within your Open Liberty server. When you enable both the Performance Monitoring and MicroProfile Metrics features, you enable the reporting of vendor metrics on the /metrics endpoint. The Performance Monitoring feature retrieves the statistical data from all available monitoring components. The REST base metrics that were introduced in MicroProfile Metrics 2.3 also rely on the Performance Monitoring feature. Before MicroProfile Metrics 2.3, the Performance Monitoring feature had to be explicitly configured. In MicroProfile Metrics 2.3 and later, the Performance Monitoring feature is automatically enabled by MicroProfile Metrics during startup.
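For example, a minimal server.xml sketch that enables both features might look like the following. The mpMetrics-5.0 and monitor-1.0 feature versions are illustrative; use the versions that match your runtime.
<featureManager>
    <feature>mpMetrics-5.0</feature>
    <feature>monitor-1.0</feature>
</featureManager>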
Filter metrics to gather only the data you need
By default, all monitoring components are enabled. If your server is collecting more metrics data than you need, you can improve the server performance by collecting only those vendor metrics that you intend to use. To configure only a subset of vendor metrics to be reported, specify the components that you want to monitor in the filter attribute for the monitor configuration element in your server.xml file. You can identify the relevant monitoring component for each vendor metric by referencing the Monitoring component column of the metrics reference tables.
To enable only the monitoring components that are used by MicroProfile Metrics, add the following code to your server.xml file.
<monitor filter="ConnectionPool,ThreadPool,RequestTiming,Session,WebContainer,REST,GrpcClient,GrpcServer"/>
To disable all vendor metrics but keep the REST base metrics, configure the server.xml file as shown in the following example:
<monitor filter="REST"/>
To disable all monitoring components, add the following code to your server.xml file:
<monitor filter=" "/>
Metrics reference tables
The metrics reference tables list the metrics that are available when you use either MicroProfile Metrics 5.0 or MicroProfile Metrics 4.0 and earlier.
In each table, the Prometheus metric names are listed after each metric. The tables also list the metric types, metric units, and descriptions of all metrics that are available for Open Liberty. For the vendor metrics, the associated monitoring component that you can use to filter the metric is also included. The Features required column of the table includes the feature or features that must be enabled to obtain that metric data. The Version introduced column specifies the minimum version of the feature that you must enable to collect the metric.
Units in the metrics reference tables
Metric units are included along with both the metric names and Prometheus names. In MicroProfile Metrics 4.0 and earlier, these units can differ because the Prometheus exporter scales metrics to a base unit. For example, while the ft.<name>.bulkhead.executionDuration metric is recorded internally in nanoseconds, the Prometheus output is in seconds. In MicroProfile Metrics 5.0, the unit that is associated with the metric is what is reflected in the Prometheus output. The metrics are not scaled to a base unit.
If no unit is listed next to the metric name, then no units are associated with that metric.
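For example, the JVM process CPU time metric shows this difference. In MicroProfile Metrics 4.0 and earlier, cpu.processCpuTime is recorded in nanoseconds and is scaled to seconds in the Prometheus output. In MicroProfile Metrics 5.0, the metric is associated with seconds and is reported without scaling. The following output lines are illustrative; the values are not from a real server.
# MicroProfile Metrics 4.0 and earlier
base_cpu_processCpuTime_seconds 42.5
# MicroProfile Metrics 5.0
cpu_processCpuTime_seconds{mp_scope="base"} 42.5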
MicroProfile Metrics 5.0 metrics reference
A separate table is included for MicroProfile Metrics 5.0 because of changes in the release that affect the formatting of the Prometheus name, base metrics, and the metric types. For more information, see Differences between MicroProfile Metrics 5.0 and earlier versions: Metric output format.
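For example, in MicroProfile Metrics 4.0 and earlier the scope is encoded as a prefix of the Prometheus name, while in MicroProfile Metrics 5.0 the scope is reported as an mp_scope label. The following lines show the same base metric in both formats; the values are illustrative and not from a real server.
# MicroProfile Metrics 4.0 and earlier
base_thread_count 58
# MicroProfile Metrics 5.0
thread_count{mp_scope="base"} 58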
The following table lists and describes the metrics that are available for Open Liberty for MicroProfile Metrics 5.0.
MicroProfile Metrics 5.0 name | MicroProfile Metrics 5.0 Prometheus name(s) | Type and description | Monitoring component | Features required | Version introduced |
---|---|---|---|---|---|
classloader.loadedClasses.count | classloader_loadedClasses_count{mp_scope="base"} | The number of classes that are currently loaded in the JVM. This metric is a gauge. | Base metric | ||
classloader.loadedClasses.total | classloader_loadedClasses_total{mp_scope="base"} | The total number of classes that were loaded since the JVM started. This metric is a counter. | Base metric | ||
classloader.unloadedClasses.total | classloader_unloadedClasses_total{mp_scope="base"} | The total number of classes that were unloaded since the JVM started. This metric is a counter. | Base metric | ||
connectionpool.connectionHandles{datasource=<datasourceName>} | connectionpool_connectionHandles{datasource="<datasourceName>",mp_scope="vendor"} | The number of connections that are in use. This number might include multiple connections that are shared from a single managed connection. This metric is a gauge. | ConnectionPool | ||
connectionpool.create.total{datasource=<datasourceName>} | connectionpool_create_total{datasource="<datasourceName>",mp_scope="vendor"} | The total number of managed connections that were created since the pool creation. This metric is a counter. | ConnectionPool | ||
connectionpool.destroy.total{datasource=<datasourceName>} | connectionpool_destroy_total{datasource="<datasourceName>",mp_scope="vendor"} | The total number of managed connections that were destroyed since the pool creation. This metric is a counter. | ConnectionPool | ||
connectionpool.inUseTime.per.usedConnection | connectionpool_inUseTime_per_usedConnection_seconds{datasource="<datasourceName>",mp_scope="vendor"} | The recent average time that connections are in use. This metric is a gauge. | ConnectionPool | ||
connectionpool.freeConnections{datasource=<datasourceName>} | connectionpool_freeConnections{datasource="<datasourceName>",mp_scope="vendor"} | The number of managed connections in the free pool. This metric is a gauge. | ConnectionPool | ||
connectionpool.inUseTime.total{datasource=<datasourceName>} / (seconds) | connectionpool_inUseTime_total_seconds{datasource="<datasourceName>",mp_scope="vendor"} / (seconds) | The total time that all connections are in use since the start of the server. This metric is a gauge. | ConnectionPool | ||
connectionpool.managedConnections{datasource=<datasourceName>} | connectionpool_managedConnections{datasource="<datasourceName>",mp_scope="vendor"} | The current sum of managed connections in the free, shared, and unshared pools. This metric is a gauge. | ConnectionPool | ||
connectionpool.queuedRequests.total{datasource=<datasourceName>} | connectionpool_queuedRequests_total{datasource="<datasourceName>",mp_scope="vendor"} | The total number of connection requests that waited for a connection because of a full connection pool since the start of the server. This metric is a counter. | ConnectionPool | ||
connectionpool.usedConnections.total{datasource=<datasourceName>} | connectionpool_usedConnections_total{datasource="<datasourceName>",mp_scope="vendor"} | The total number of connection requests that waited because of a full connection pool or did not wait since the start of the server. Any connections that are currently in use are not included in this total. This metric is a counter. | ConnectionPool | ||
connectionpool.waitTime.per.queuedRequest | connectionpool_waitTime_per_queuedRequest_seconds{datasource="<datasourceName>",mp_scope="vendor"} | The recent average wait time for queued connection requests. This metric is a gauge. | ConnectionPool | ||
connectionpool.waitTime.total{datasource=<datasourceName>} / (seconds) | connectionpool_waitTime_total_seconds{datasource="<datasourceName>",mp_scope="vendor"} / (seconds) | The total wait time on all connection requests since the start of the server. This metric is a gauge. | ConnectionPool | ||
cpu.availableProcessors | cpu_availableProcessors{mp_scope="base"} | The number of processors available to the JVM. This metric is a gauge. | Base metric | ||
cpu.processCpuLoad / (percent) | cpu_processCpuLoad_percent{mp_scope="base"} / (percent) | The recent CPU usage for the JVM process. This metric is a gauge. | Base metric | ||
cpu.processCpuTime / (seconds) | cpu_processCpuTime_seconds{mp_scope="base"} / (seconds) | The CPU time for the JVM process. This metric is a gauge. | Base metric | ||
cpu.processCpuUtilization | cpu_processCpuUtilization_percent{mp_scope="vendor"} | The recent CPU time that is used by the JVM process from all processors that are available to the JVM, expressed as a percentage. This metric is a gauge. | Base metric | ||
cpu.systemLoadAverage | cpu_systemLoadAverage{mp_scope="base"} | The system load average for the last minute. If the system load average is not available, a negative value is displayed. This metric is a gauge. | Base metric | ||
ft.bulkhead.calls.total{ method="<name>", bulkheadResult=["accepted"|"rejected"] } | ft_bulkhead_calls_total{ method="<name>", mp_scope="base", bulkheadResult=["accepted"|"rejected"] } | The number of times that the bulkhead logic was run. This number is usually once per method call, but it might be zero if a circuit breaker prevents execution or more than once per method call if the method call is retried. This metric is available when you use the @Bulkhead annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.executionsRunning{method="<name>"} | ft_bulkhead_executionsRunning{method="<name>",mp_scope="base"} | The number of currently running executions. This metric is available when you use the @Bulkhead annotation. This metric is a gauge. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.executionsWaiting{method="<name>"} | ft_bulkhead_executionsWaiting{method="<name>",mp_scope="base"} | The number of executions currently waiting in the queue. This metric is available when you use the @Bulkhead annotation with the @Asynchronous annotation. This metric is a gauge. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.runningDuration{method="<name>"} / (nanoseconds) | ft_bulkhead_runningDuration_seconds_max{method="<name>",mp_scope="base"} ft_bulkhead_runningDuration_seconds_count{method="<name>",mp_scope="base"} ft_bulkhead_runningDuration_seconds_sum{method="<name>",mp_scope="base"} ft_bulkhead_runningDuration_seconds{ method="<name>", mp_scope="base", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / seconds | A histogram of the time that method executions spent running. This metric is available when you use the @Bulkhead annotation. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.waitingDuration{method="<name>"} / (nanoseconds) | ft_bulkhead_waitingDuration_seconds_max{method="<name>",mp_scope="base"} ft_bulkhead_waitingDuration_seconds_count{method="<name>",mp_scope="base"} ft_bulkhead_waitingDuration_seconds_sum{method="<name>",mp_scope="base"} ft_bulkhead_waitingDuration_seconds{ method="<name>", mp_scope="base", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / seconds | A histogram of the time that method executions spent waiting in the queue. This metric is available when you use the @Bulkhead annotation with the @Asynchronous annotation. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.circuitbreaker.calls.total{ method="<name>", circuitBreakerResult=["success"|"failure"|"circuitBreakerOpen"] } | ft_circuitbreaker_calls_total{ method="<name>", mp_scope="base", circuitBreakerResult=["success"|"failure"|"circuitBreakerOpen"] } | The number of times that the circuit breaker logic was run. This number is usually once per method call, but might be more if the method call is retried. This metric is available when you use the @CircuitBreaker annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.circuitbreaker.state.total{ method="<name>", state=["open"|"closed"|"halfOpen"] } / (nanoseconds) | ft_circuitbreaker_state_total_seconds{ method="<name>", mp_scope="base", state=["open"|"closed"|"halfOpen"] } / (seconds) | The amount of time that the circuit breaker has spent in each state. These values increase monotonically. This metric is available when you use the @CircuitBreaker annotation. This metric is a gauge. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.circuitbreaker.opened.total{method="<name>"} | ft_circuitbreaker_opened_total{method="<name>",mp_scope="base"} | The number of times that the circuit breaker has moved from closed state to open state. This metric is available when you use the @CircuitBreaker annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.invocations.total{ method="<name>", result=["valueReturned"|"exceptionThrown"], fallback=["applied"|"notApplied"|"notDefined"] } | ft_invocations_total{ method="<name>", mp_scope="base", result=["valueReturned"|"exceptionThrown"], fallback=["applied"|"notApplied"|"notDefined"] } | The number of times that the method was called. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.retry.calls.total{ method="<name>", retried=["true"|"false"], retryResult=["valueReturned" |"exceptionNotRetryable" |"maxRetriesReached" |"maxDurationReached"] } | ft_retry_calls_total{ method="<name>", mp_scope="base", retried=["true"|"false"], retryResult=["valueReturned" |"exceptionNotRetryable" |"maxRetriesReached" |"maxDurationReached"] } | The number of times that the retry logic was run. This number is always once per method call. This metric is available when you use the @Retry annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.retry.retries.total{method="<name>"} | ft_retry_retries_total{method="<name>",mp_scope="base"} | The number of times that the method was retried. This metric is available when you use the @Retry annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.timeout.calls.total{ method="<name>", timedOut=["true"|"false"] } | ft_timeout_calls_total{ method="<name>", mp_scope="base", timedOut=["true"|"false"] } | The number of times that the timeout logic was run. This number is usually once per method call, but it might be zero if a circuit breaker prevents execution or more than once per method call if the method call is retried. This metric is available when you use the @Timeout annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.timeout.executionDuration{method="<name>"} / (nanoseconds) | ft_timeout_executionDuration_seconds_max{method="<name>",mp_scope="base"} ft_timeout_executionDuration_seconds_sum{method="<name>",mp_scope="base"} ft_timeout_executionDuration_seconds_count{method="<name>",mp_scope="base"} ft_timeout_executionDuration_seconds{ method="<name>", mp_scope="base", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / (seconds) | A histogram of the execution time for the method. This metric is available when you use the @Timeout annotation. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
gc.time{name=<gcName>} / (seconds) | gc_time_seconds{mp_scope="base",name="<gcType>"} / (seconds) | The approximate accumulated garbage collection elapsed time. This metric displays -1 if the collection elapsed time is undefined for this collector. This metric is a gauge. | Base metric | ||
gc.time.per.cycle | gc_time_per_cycle_seconds{mp_scope="vendor",name="<gcType>"} | The recent average time spent per garbage collection cycle. This metric is a gauge. | Base metric | ||
gc.total{name=<gcName>} | gc_total{mp_scope="base",name="<gcType>"} | The number of garbage collections that occurred. This metric displays -1 if the collection count is undefined for this collector. This metric is a counter. | Base metric | ||
grpc.client.receivedMessages.total{grpc=<method_signature>} | grpc_client_receivedMessages_total{mp_scope="vendor"} | The number of stream messages received from the server. This metric is a counter. | GrpcClient | ||
grpc.client.responseTime.total{grpc=<method_signature>} / (seconds) | grpc_client_responseTime_total_seconds{mp_scope="vendor"} / (seconds) | The response time of completed RPCs. This metric is a gauge. | GrpcClient | ||
grpc.client.rpcCompleted.total{grpc=<method_signature>} | grpc_client_rpcCompleted_total{mp_scope="vendor"} | The number of RPCs completed on the client, regardless of success or failure. This metric is a counter. | GrpcClient | ||
grpc.client.rpcStarted.total{grpc=<method_signature>} | grpc_client_rpcStarted_total{mp_scope="vendor"} | The number of RPCs started on the client. This metric is a counter. | GrpcClient | ||
grpc.client.sentMessages.total{grpc=<method_signature>} | grpc_client_sentMessages_total{mp_scope="vendor"} | The number of stream messages sent by the client. This metric is a counter. | GrpcClient | ||
grpc.server.receivedMessages.total{grpc=<service_name>} | grpc_server_receivedMessages_total{mp_scope="vendor"} | The number of stream messages received from the client. This metric is a counter. | GrpcServer | ||
grpc.server.responseTime.total{grpc=<service_name>} / (seconds) | grpc_server_responseTime_total_seconds{mp_scope="vendor"} / (seconds) | The response time of completed RPCs. This metric is a gauge. | GrpcServer | ||
grpc.server.rpcCompleted.total{grpc=<service_name>} | grpc_server_rpcCompleted_total{mp_scope="vendor"} | The number of RPCs completed on the server, regardless of success or failure. This metric is a counter. | GrpcServer | ||
grpc.server.rpcStarted.total{grpc=<service_name>} | grpc_server_rpcStarted_total{mp_scope="vendor"} | The number of RPCs started on the server. This metric is a counter. | GrpcServer | ||
grpc.server.sentMessages.total{grpc=<service_name>} | grpc_server_sentMessages_total{mp_scope="vendor"} | The number of stream messages sent by the server. This metric is a counter. | GrpcServer | ||
http.server.request.duration / (seconds) | http_server_request_duration{error_type="<error_type>",http_request_method="<request_method>",http_response_status_code="<status_code>",http_route="<http_route>",mp_scope="vendor",network_protocol_version="<network_protocol_version>",server_address="<server_address>",server_port="<server_port>",url_scheme="<url_scheme>"} / (seconds) | The duration of HTTP server requests. This metric is a timer. | | ||
jaxws.client.checkedApplicationFaults.total{endpoint=<endpointName>} | jaxws_client_checkedApplicationFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of checked application faults. This metric is a counter. | N/A, always available | ||
jaxws.client.invocations.total{endpoint=<endpointName>} | jaxws_client_invocations_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of invocations to this endpoint or operation. This metric is a counter. | N/A, always available | ||
jaxws.client.logicalRuntimeFaults.total{endpoint=<endpointName>} | jaxws_client_logicalRuntimeFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of logical runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.client.responseTime.total{endpoint=<endpointName>} / (seconds) | jaxws_client_responseTime_total_seconds{endpoint="<endpointName>",mp_scope="vendor"} / (seconds) | The total response handling time since the start of the server. This metric is a gauge. | N/A, always available | ||
jaxws.client.runtimeFaults.total{endpoint=<endpointName>} | jaxws_client_runtimeFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.client.uncheckedApplicationFaults.total{endpoint=<endpointName>} | jaxws_client_uncheckedApplicationFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of unchecked application faults. This metric is a counter. | N/A, always available | ||
jaxws.server.checkedApplicationFaults.total{endpoint=<endpointName>} | jaxws_server_checkedApplicationFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of checked application faults. This metric is a counter. | N/A, always available | ||
jaxws.server.invocations.total{endpoint=<endpointName>} | jaxws_server_invocations_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of invocations to this endpoint or operation. This metric is a counter. | N/A, always available | ||
jaxws.server.logicalRuntimeFaults.total{endpoint=<endpointName>} | jaxws_server_logicalRuntimeFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of logical runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.server.responseTime.total{endpoint=<endpointName>} / (seconds) | jaxws_server_responseTime_total_seconds{endpoint="<endpointName>",mp_scope="vendor"} / (seconds) | The total response handling time since the start of the server. This metric is a gauge. | N/A, always available | ||
jaxws.server.runtimeFaults.total{endpoint=<endpointName>} | jaxws_server_runtimeFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.server.uncheckedApplicationFaults.total{endpoint=<endpointName>} | jaxws_server_uncheckedApplicationFaults_total{endpoint="<endpointName>",mp_scope="vendor"} | The number of unchecked application faults. This metric is a counter. | N/A, always available | ||
jvm.uptime / (seconds) | jvm_uptime_seconds{mp_scope="base"} / (seconds) | The time elapsed since the start of the JVM. This metric is a gauge. | Base metric | ||
memory.committedHeap / (bytes) | memory_committedHeap_bytes{mp_scope="base"} / (bytes) | The amount of memory that is committed for the JVM to use. This metric is a gauge. | Base metric | ||
memory.heapUtilization | memory_heapUtilization_percent{mp_scope="vendor"} | The portion of the maximum heap memory that is currently in use, expressed as a percentage. This metric is a gauge. | Base metric | ||
memory.maxHeap / (bytes) | memory_maxHeap_bytes{mp_scope="base"} / (bytes) | The maximum amount of heap memory that can be used for memory management. This metric displays -1 if the maximum heap memory size is undefined. This metric is a gauge. | Base metric | ||
memory.usedHeap / (bytes) | memory_usedHeap_bytes{mp_scope="base"} / (bytes) | The amount of used heap memory. This metric is a gauge. | Base metric | ||
mp.messaging.message.count{channel="<channelName>"} | mp_messaging_message_count{channel="<channelName>",mp_scope="base"} | The number of messages sent on the named channel. This metric is a counter. | Base metric, but available only when MP Metrics and MP Reactive Messaging features are enabled. | MicroProfile Metrics and MicroProfile Reactive Messaging 3.0 | |
requestTiming.activeRequestCount | requestTiming_activeRequestCount{mp_scope="vendor"} | The number of servlet requests that are currently running. This metric is a gauge. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
requestTiming.hungRequestCount | requestTiming_hungRequestCount{mp_scope="vendor"} | The number of servlet requests that are currently running but are hung. This metric is a gauge. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
requestTiming.requestCount | requestTiming_requestCount_total{mp_scope="vendor"} | The number of servlet requests since the server started. This metric is a counter. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
requestTiming.slowRequestCount | requestTiming_slowRequestCount{mp_scope="vendor"} | The number of servlet requests that are currently running but are slow. This metric is a gauge. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
REST.request | REST_request_seconds_max{class="<fully_qualified_class_name>",method="<method_signature>",mp_scope="base"} | The number of invocations and total response time of this RESTful resource method since the start of the server. The metric does not record the elapsed time nor count of a REST request if it resulted in an unmapped exception. Also tracks the highest recorded time duration and the 50th, 75th, 95th, 98th, 99th, and 99.9th percentiles. This metric is a timer. | REST | MicroProfile Metrics 5.0 | |
REST.request.elapsedTime.per.request | REST_request_elapsedTime_per_request_seconds{class="<fully_qualified_class_name>",method="<method_signature>",mp_scope="vendor"} | The recent average elapsed response time per RESTful resource method request. This metric is a gauge. | REST | ||
REST.request.unmappedException.total | REST_request_unmappedException_total{class="<fully_qualified_class_name>",method="<method_signature>",mp_scope="base"} | The total number of unmapped exceptions that occur from this RESTful resource method since the server started. This metric is a counter. | REST | ||
servlet.request.elapsedTime.per.request | servlet_request_elapsedTime_per_request_seconds{mp_scope="vendor",servlet="<servletName>"} | The recent average elapsed response time per servlet request. This metric is a gauge. | WebContainer | ||
servlet.request.total{servlet=<servletName>} | servlet_request_total{mp_scope="vendor",servlet="<servletName>"} | The total number of visits to this servlet since the start of the server. This metric is a counter. | WebContainer | ||
servlet.responseTime.total{servlet=<servletName>} / (seconds) | servlet_responseTime_total_seconds{mp_scope="vendor",servlet="<servletName>"} / (seconds) | The total of the servlet response time since the start of the server. This metric is a gauge. | WebContainer | ||
session.activeSessions{appname=<appName>} | session_activeSessions{appname="<appName>",mp_scope="vendor"} | The number of concurrently active sessions. A session is considered active if the application server is processing a request that uses that user session. This metric is a gauge. | Session | ||
session.create.total{appname=<appName>} | session_create_total{appname="<appName>",mp_scope="vendor"} | The number of sessions that logged in since this metric was enabled. This metric is a gauge. | Session | ||
session.invalidated.total{appname=<appName>} | session_invalidated_total{appname="<appName>",mp_scope="vendor"} | The number of sessions that logged out since this metric was enabled. This metric is a counter. | Session | ||
session.invalidatedbyTimeout.total{appname=<appName>} | session_invalidatedbyTimeout_total{appname="<appName>",mp_scope="vendor"} | The number of sessions that logged out because of a timeout since this metric was enabled. This metric is a counter. | Session | ||
session.liveSessions{appname=<appName>} | session_liveSessions{appname="<appName>",mp_scope="vendor"} | The number of users that are currently logged in. This metric is a gauge. | Session | ||
thread.count | thread_count{mp_scope="base"} | The current number of live threads, including both daemon and non-daemon threads. This metric is a gauge. | Base metric | ||
thread.daemon.count | thread_daemon_count{mp_scope="base"} | The current number of live daemon threads. This metric is a gauge. | Base metric | ||
thread.max.count | thread_max_count{mp_scope="base"} | The peak live thread count since the JVM started or the peak was reset. This thread count includes both daemon and non-daemon threads. This metric is a gauge. | Base metric | ||
threadpool.activeThreads{pool=<poolName>} | threadpool_activeThreads{mp_scope="vendor",pool="<poolName>"} | The number of threads that are actively running tasks. This metric is a gauge. | ThreadPool | ||
threadpool.size{pool=<poolName>} | threadpool_size{mp_scope="vendor",pool="<poolName>"} | The size of the thread pool. This metric is a gauge. | ThreadPool | ||
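With MicroProfile Metrics 5.0, you can retrieve a single scope of the metrics that are listed in this table by adding the scope query parameter to the /metrics endpoint. The following command is a minimal sketch that assumes the default HTTPS port of 9443 and that authentication is not required:
curl -k "https://localhost:9443/metrics?scope=vendor"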
MicroProfile Metrics 4.0 and earlier metrics reference
The following table lists and describes the metrics that are available for Open Liberty for MicroProfile Metrics 4.0 and earlier.
MicroProfile Metrics 4.0 name | MicroProfile Metrics 4.0 Prometheus name(s) | Type and description | Monitoring component | Features required | Version introduced |
---|---|---|---|---|---|
classloader.loadedClasses.count | base_classloader_loadedClasses_count | The number of classes that are currently loaded in the JVM. This metric is a gauge. | Base metric | ||
classloader.loadedClasses.total | base_classloader_loadedClasses_total | The total number of classes that were loaded since the JVM started. This metric is a counter. | Base metric | ||
classloader.unloadedClasses.total | base_classloader_unloadedClasses_total | The total number of classes that were unloaded since the JVM started. This metric is a counter. | Base metric | ||
connectionpool.connectionHandles{datasource=<datasourceName>} | vendor_connectionpool_connectionHandles{datasource="<datasourceName>"} | The number of connections that are in use. This number might include multiple connections that are shared from a single managed connection. This metric is a gauge. | ConnectionPool | ||
connectionpool.create.total{datasource=<datasourceName>} | vendor_connectionpool_create_total{datasource="<datasourceName>"} | The total number of managed connections that were created since the pool creation. This metric is a counter. | ConnectionPool | ||
connectionpool.destroy.total{datasource=<datasourceName>} | vendor_connectionpool_destroy_total{datasource="<datasourceName>"} | The total number of managed connections that were destroyed since the pool creation. This metric is a counter. | ConnectionPool | ||
connectionpool.freeConnections{datasource=<datasourceName>} | vendor_connectionpool_freeConnections{datasource="<datasourceName>"} | The number of managed connections in the free pool. This metric is a gauge. | ConnectionPool | ||
connectionpool.inUseTime.per.usedConnection | vendor_connectionpool_inUseTime_per_usedConnection_seconds{datasource="<datasourceName>"} | The recent average time that connections are in use. This metric is a gauge. | ConnectionPool | ||
connectionpool.inUseTime.total{datasource=<datasourceName>} / (milliseconds) | vendor_connectionpool_inUseTime_total_seconds{datasource="<datasourceName>"} / (seconds) | The total time that all connections are in use since the start of the server. This metric is a gauge. | ConnectionPool | ||
connectionpool.managedConnections{datasource=<datasourceName>} | vendor_connectionpool_managedConnections{datasource="<datasourceName>"} | The current sum of managed connections in the free, shared, and unshared pools. This metric is a gauge. | ConnectionPool | ||
connectionpool.queuedRequests.total{datasource=<datasourceName>} | vendor_connectionpool_queuedRequests_total{datasource="<datasourceName>"} | The total number of connection requests that waited for a connection because of a full connection pool since the start of the server. This metric is a counter. | ConnectionPool | ||
connectionpool.usedConnections.total{datasource=<datasourceName>} | vendor_connectionpool_usedConnections_total{datasource="<datasourceName>"} | The total number of connection requests that waited because of a full connection pool or did not wait since the start of the server. Any connections that are currently in use are not included in this total. This metric is a counter. | ConnectionPool | ||
connectionpool.waitTime.per.queuedRequest | vendor_connectionpool_waitTime_per_queuedRequest_seconds{datasource="<datasourceName>"} | The recent average wait time for queued connection requests. This metric is a gauge. | ConnectionPool | ||
connectionpool.waitTime.total{datasource=<datasourceName>} / (milliseconds) | vendor_connectionpool_waitTime_total_seconds{datasource="<datasourceName>"} / (seconds) | The total wait time on all connection requests since the start of the server. This metric is a gauge. | ConnectionPool | ||
cpu.availableProcessors | base_cpu_availableProcessors | The number of processors available to the JVM. This metric is a gauge. | Base metric | ||
cpu.processCpuLoad / (percent) | base_cpu_processCpuLoad_percent / (percent) | The recent CPU usage for the JVM process. This metric is a gauge. | Base metric | ||
cpu.processCpuTime / (nanoseconds) | base_cpu_processCpuTime_seconds / (seconds) | The CPU time for the JVM process. This metric is a gauge. | Base metric | ||
cpu.processCpuUtilization | vendor_cpu_processCpuUtilization_percent | The recent CPU time that is used by the JVM process for all processors that are available to the JVM, expressed as a percentage. This metric is a gauge. | Base metric | ||
cpu.systemLoadAverage | base_cpu_systemLoadAverage | The system load average for the last minute. If the system load average is not available, a negative value is displayed. This metric is a gauge. | Base metric | ||
ft.bulkhead.calls.total{ method="<name>", bulkheadResult=["accepted"|"rejected"] } | base_ft_bulkhead_calls_total{ method="<name>", bulkheadResult=["accepted"|"rejected"] } | The number of times that the bulkhead logic was run. This number is usually once per method call, but it might be zero if a circuit breaker prevents execution or more than once per method call if the method call is retried. This metric is available when you use the @Bulkhead annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.executionsRunning{method="<name>"} | base_ft_bulkhead_executionsRunning{method="<name>"} | The number of currently running executions. This metric is available when you use the @Bulkhead annotation. This metric is a gauge. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.executionsWaiting{method="<name>"} | base_ft_bulkhead_executionsWaiting{method="<name>"} | The number of executions currently waiting in the queue. This metric is available when you use the @Bulkhead annotation with the @Asynchronous annotation. This metric is a gauge. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.runningDuration{method="<name>"} / (nanoseconds) | base_ft_bulkhead_runningDuration_min_seconds{method="<name>"} base_ft_bulkhead_runningDuration_max_seconds{method="<name>"} base_ft_bulkhead_runningDuration_mean_seconds{method="<name>"} base_ft_bulkhead_runningDuration_stddev_seconds{method="<name>"} base_ft_bulkhead_runningDuration_seconds_count{method="<name>"} base_ft_bulkhead_runningDuration_seconds_sum{method="<name>"} base_ft_bulkhead_runningDuration_seconds{ method="<name>", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / seconds | A histogram of the time that method executions spent running. This metric is available when you use the @Bulkhead annotation. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.bulkhead.waitingDuration{method="<name>"} / (nanoseconds) | base_ft_bulkhead_waitingDuration_min_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_max_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_mean_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_stddev_seconds{method="<name>"} base_ft_bulkhead_waitingDuration_seconds_count{method="<name>"} base_ft_bulkhead_waitingDuration_seconds_sum{method="<name>"} base_ft_bulkhead_waitingDuration_seconds{ method="<name>", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / seconds | A histogram of the time that method executions spent waiting in the queue. This metric is available when you use the @Bulkhead annotation with the @Asynchronous annotation. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.circuitbreaker.calls.total{ method="<name>", circuitBreakerResult=["success"|"failure"|"circuitBreakerOpen"] } | base_ft_circuitbreaker_calls_total{ method="<name>", circuitBreakerResult=["success"|"failure"|"circuitBreakerOpen"] } | The number of times that the circuit breaker logic was run. This number is usually once per method call, but might be more if the method call is retried. This metric is available when you use the @CircuitBreaker annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.circuitbreaker.state.total{ method="<name>", state=["open"|"closed"|"halfOpen"] } / (nanoseconds) | base_ft_circuitbreaker_state_total_seconds{ method="<name>", state=["open"|"closed"|"halfOpen"] } / (seconds) | The amount of time that the circuit breaker has spent in each state. These values increase monotonically. This metric is available when you use the @CircuitBreaker annotation. This metric is a gauge. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.circuitbreaker.opened.total{method="<name>"} | base_ft_circuitbreaker_opened_total{method="<name>"} | The number of times that the circuit breaker has moved from closed state to open state. This metric is available when you use the @CircuitBreaker annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.invocations.total{ method="<name>", result=["valueReturned"|"exceptionThrown"], fallback=["applied"|"notApplied"|"notDefined"] } | base_ft_invocations_total{ method="<name>", result=["valueReturned"|"exceptionThrown"], fallback=["applied"|"notApplied"|"notDefined"] } | The number of times that the method was called. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.retry.calls.total{ method="<name>", retried=["true"|"false"], retryResult=["valueReturned" |"exceptionNotRetryable" |"maxRetriesReached" |"maxDurationReached"] } | base_ft_retry_calls_total{ method="<name>", retried=["true"|"false"], retryResult=["valueReturned" |"exceptionNotRetryable" |"maxRetriesReached" |"maxDurationReached"] } | The number of times that the retry logic was run. This number is always once per method call. This metric is available when you use the @Retry annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.retry.retries.total{method="<name>"} | base_ft_retry_retries_total{method="<name>"} | The number of times that the method was retried. This metric is available when you use the @Retry annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.timeout.calls.total{ method="<name>", timedOut=["true"|"false"] } | base_ft_timeout_calls_total{ method="<name>", timedOut=["true"|"false"] } | The number of times that the timeout logic was run. This number is usually once per method call, but it might be zero if a circuit breaker prevents execution or more than once per method call if the method call is retried. This metric is available when you use the @Timeout annotation. This metric is a counter. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
ft.timeout.executionDuration{method="<name>"} / (nanoseconds) | base_ft_timeout_executionDuration_mean_seconds{method="<name>"} base_ft_timeout_executionDuration_max_seconds{method="<name>"} base_ft_timeout_executionDuration_min_seconds{method="<name>"} base_ft_timeout_executionDuration_stddev_seconds{method="<name>"} base_ft_timeout_executionDuration_seconds_count{method="<name>"} base_ft_timeout_executionDuration_seconds{ method="<name>", quantile=["0.5"|"0.75"|"0.95"|"0.98"|"0.99"|"0.999"] } / (seconds) | A histogram of the execution time for the method. This metric is available when you use the @Timeout annotation. | Base metric, but available only when MP Fault Tolerance feature is enabled. | ||
gc.time{name=<gcName>} / (milliseconds) | base_gc_time_seconds{name="<gcType>"} / (seconds) | The approximate accumulated garbage collection elapsed time. This metric displays -1 if the collection elapsed time is undefined for this collector. This metric is a gauge. | Base metric | ||
gc.time.per.cycle | vendor_gc_time_per_cycle_seconds{name="<gcType>"} | The recent average time spent per garbage collection cycle. This metric is a gauge. | Base metric | ||
gc.total{name=<gcName>} | base_gc_total{name="<gcType>"} | The number of garbage collections that occurred. This metric displays -1 if the collection count is undefined for this collector. This metric is a counter. | Base metric | ||
grpc.client.receivedMessages.total{grpc=<method_signature>} | vendor_grpc_client_receivedMessages_total | The number of stream messages received from the server. This metric is a counter. | GrpcClient | ||
grpc.client.responseTime.total{grpc=<method_signature>} / (milliseconds) | vendor_grpc_client_responseTime_total_seconds / (seconds) | The response time of completed RPCs. This metric is a gauge. | GrpcClient | ||
grpc.client.rpcCompleted.total{grpc=<method_signature>} | vendor_grpc_client_rpcCompleted_total | The number of RPCs completed on the client, regardless of success or failure. This metric is a counter. | GrpcClient | ||
grpc.client.rpcStarted.total{grpc=<method_signature>} | vendor_grpc_client_rpcStarted_total | The number of RPCs started on the client. This metric is a counter. | GrpcClient | ||
grpc.client.sentMessages.total{grpc=<method_signature>} | vendor_grpc_client_sentMessages_total | The number of stream messages sent by the client. This metric is a counter. | GrpcClient | ||
grpc.server.receivedMessages.total{grpc=<service_name>} | vendor_grpc_server_receivedMessages_total | The number of stream messages received from the client. This metric is a counter. | GrpcServer | ||
grpc.server.responseTime.total{grpc=<service_name>} / (milliseconds) | vendor_grpc_server_responseTime_total_seconds / (seconds) | The response time of completed RPCs. This metric is a gauge. | GrpcServer | ||
grpc.server.rpcCompleted.total{grpc=<service_name>} | vendor_grpc_server_rpcCompleted_total | The number of RPCs completed on the server, regardless of success or failure. This metric is a counter. | GrpcServer | ||
grpc.server.rpcStarted.total{grpc=<service_name>} | vendor_grpc_server_rpcStarted_total | The number of RPCs started on the server. This metric is a counter. | GrpcServer | ||
grpc.server.sentMessages.total{grpc=<service_name>} | vendor_grpc_server_sentMessages_total | The number of stream messages sent by the server. This metric is a counter. | GrpcServer | ||
jaxws.client.checkedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_checkedApplicationFaults_total{endpoint="<endpointName>"} | The number of checked application faults. This metric is a counter. | N/A, always available | ||
jaxws.client.invocations.total{endpoint=<endpointName>} | vendor_jaxws_client_invocations_total{endpoint="<endpointName>"} | The number of invocations to this endpoint or operation. This metric is a counter. | N/A, always available | ||
jaxws.client.logicalRuntimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_logicalRuntimeFaults_total{endpoint="<endpointName>"} | The number of logical runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.client.responseTime.total{endpoint=<endpointName>} / (milliseconds) | vendor_jaxws_client_responseTime_total_seconds{endpoint="<endpointName>"} / (seconds) | The total response handling time since the start of the server. This metric is a gauge. | N/A, always available | ||
jaxws.client.runtimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_runtimeFaults_total{endpoint="<endpointName>"} | The number of runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.client.uncheckedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_client_uncheckedApplicationFaults_total{endpoint="<endpointName>"} | The number of unchecked application faults. This metric is a counter. | N/A, always available | ||
jaxws.server.checkedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_checkedApplicationFaults_total{endpoint="<endpointName>"} | The number of checked application faults. This metric is a counter. | N/A, always available | ||
jaxws.server.invocations.total{endpoint=<endpointName>} | vendor_jaxws_server_invocations_total{endpoint="<endpointName>"} | The number of invocations to this endpoint or operation. This metric is a counter. | N/A, always available | ||
jaxws.server.logicalRuntimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_logicalRuntimeFaults_total{endpoint="<endpointName>"} | The number of logical runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.server.responseTime.total{endpoint=<endpointName>} / (milliseconds) | vendor_jaxws_server_responseTime_total_seconds{endpoint="<endpointName>"} / (seconds) | The total response handling time since the start of the server. This metric is a gauge. | N/A, always available | ||
jaxws.server.runtimeFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_runtimeFaults_total{endpoint="<endpointName>"} | The number of runtime faults. This metric is a counter. | N/A, always available | ||
jaxws.server.uncheckedApplicationFaults.total{endpoint=<endpointName>} | vendor_jaxws_server_uncheckedApplicationFaults_total{endpoint="<endpointName>"} | The number of unchecked application faults. This metric is a counter. | N/A, always available | ||
jvm.uptime / (milliseconds) | base_jvm_uptime_seconds / (seconds) | The time elapsed since the start of the JVM. This metric is a gauge. | Base metric | ||
memory.committedHeap / (bytes) | base_memory_committedHeap_bytes / (bytes) | The amount of memory that is committed for the JVM to use. This metric is a gauge. | Base metric | ||
memory.heapUtilization | vendor_memory_heapUtilization_percent | The portion of the maximum heap memory that is currently in use, expressed as a percentage. This metric is a gauge. | Base metric | ||
memory.maxHeap / (bytes) | base_memory_maxHeap_bytes / (bytes) | The maximum amount of heap memory that can be used for memory management. This metric displays -1 if the maximum heap memory size is undefined. This metric is a gauge. | Base metric | ||
memory.usedHeap / (bytes) | base_memory_usedHeap_bytes / (bytes) | The amount of used heap memory. This metric is a gauge. | Base metric | ||
mp.messaging.message.count{channel="<channelName>"} | base_mp_messaging_message_count{channel="<channelName>"} | The number of messages sent on the named channel. This metric is a counter. | Base metric, but available only when MP Metrics and MP Reactive Messaging features are enabled. | MicroProfile Metrics and MicroProfile Reactive Messaging 3.0 | |
requestTiming.activeRequestCount | vendor_requestTiming_activeRequestCount | The number of servlet requests that are currently running. This metric is a gauge. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
requestTiming.hungRequestCount | vendor_requestTiming_hungRequestCount | The number of servlet requests that are currently running but are hung. This metric is a gauge. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
requestTiming.requestCount | vendor_requestTiming_requestCount_total | The number of servlet requests since the server started. This metric is a counter. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
requestTiming.slowRequestCount | vendor_requestTiming_slowRequestCount | The number of servlet requests that are currently running but are slow. This metric is a gauge. | RequestTiming | MicroProfile Metrics 2.0 or later and Request timing | |
REST.request | base_REST_request_total{class="<fully_qualified_class_name>",method="<method_signature>"} | The number of invocations and total response time of this RESTful resource method since the server started. The metric does not record the count of invocations nor the elapsed time if an unmapped exception occurs. This metric also tracks the highest recorded time duration within the previous completed full minute and the lowest recorded time duration within the previous completed full minute. This metric is a simple timer. | REST | ||
REST.request.elapsedTime.per.request | vendor_REST_request_elapsedTime_per_request_seconds{class="<fully_qualified_class_name>",method="<method_signature>"} | The recent average elapsed response time per RESTful resource method request. This metric is a gauge. | REST | ||
REST.request.unmappedException.total | base_REST_request_unmappedException_total{class="<fully_qualified_class_name>",method="<method_signature>"} | The total number of unmapped exceptions that occur from this RESTful resource method since the server started. This metric is a counter. | REST | ||
servlet.request.elapsedTime.per.request | vendor_servlet_request_elapsedTime_per_request_seconds{servlet="<servletName>"} | The recent average elapsed response time per servlet request. This metric is a gauge. | WebContainer | ||
servlet.request.total{servlet=<servletName>} | vendor_servlet_request_total{servlet="<servletName>"} | The total number of visits to this servlet since the start of the server. This metric is a counter. | WebContainer | ||
servlet.responseTime.total{servlet=<servletName>} / (nanoseconds) | vendor_servlet_responseTime_total_seconds{servlet="<servletName>"} / (seconds) | The total of the servlet response time since the start of the server. This metric is a gauge. | WebContainer | ||
session.activeSessions{appname=<appName>} | vendor_session_activeSessions{appname="<appName>"} | The number of concurrently active sessions. A session is considered active if the application server is processing a request that uses that user session. This metric is a gauge. | Session | ||
session.create.total{appname=<appName>} | vendor_session_create_total{appname="<appName>"} | The number of sessions that logged in since this metric was enabled. This metric is a gauge. | Session | ||
session.invalidated.total{appname=<appName>} | vendor_session_invalidated_total{appname="<appName>"} | The number of sessions that logged out since this metric was enabled. This metric is a counter. | Session | ||
session.invalidatedbyTimeout.total{appname=<appName>} | vendor_session_invalidatedbyTimeout_total{appname="<appName>"} | The number of sessions that logged out because of a timeout since this metric was enabled. This metric is a counter. | Session | ||
session.liveSessions{appname=<appName>} | vendor_session_liveSessions{appname="<appName>"} | The number of users that are currently logged in. This metric is a gauge. | Session | ||
thread.count | base_thread_count | The current number of live threads, including both daemon and non-daemon threads. This metric is a gauge. | Base metric | ||
thread.daemon.count | base_thread_daemon_count | The current number of live daemon threads. This metric is a gauge. | Base metric | ||
thread.max.count | base_thread_max_count | The peak live thread count since the JVM started or the peak was reset. This thread count includes both daemon and non-daemon threads. This metric is a gauge. | Base metric | ||
threadpool.activeThreads{pool=<poolName>} | vendor_threadpool_activeThreads{pool="<poolName>"} | The number of threads that are actively running tasks. This metric is a gauge. | ThreadPool | ||
threadpool.size{pool=<poolName>} | vendor_threadpool_size{pool="<poolName>"} | The size of the thread pool. This metric is a gauge. | ThreadPool | ||
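With MicroProfile Metrics 4.0 and earlier, each scope is exposed as a subresource of the /metrics endpoint instead of a query parameter. The following command is a minimal sketch that assumes the default HTTPS port of 9443 and that authentication is not required:
curl -k https://localhost:9443/metrics/vendor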