Metrics
In the context of MLOps, tracking and monitoring metrics is critical for assessing the performance and progress of machine learning pipelines. The Craft AI platform provides a comprehensive set of features for defining and recording metrics at each pipeline execution.
With these measurement capabilities, you can efficiently track and retrieve the metrics associated with each execution in your machine learning pipelines. This enables you to gain valuable insights and make informed decisions about your models and deployments.
Pipeline Metrics
The record_metric_value function allows you to create or update a pipeline metric within your pipeline code, storing the name and corresponding value of a particular metric.
You do not need to declare anything outside of the pipeline; simply call record_metric_value() in your pipeline code. Remember, if you want to use the SDK in your pipeline code, you don't need to specify your environment URL or token in the builder parameters.
After the execution is finished, you can find all your metric values in the web interface, on the Metrics tab of the Execution page.
Upload Metrics
Currently, a pipeline metric can only have one name and one numeric value per execution. If multiple metrics are recorded with identical names in the same execution, only the last value is retained.
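For instance, here is a minimal sketch of this overwrite behavior, assuming sdk is a CraftAiSdk instance inside pipeline code (the "accuracy" metric name is illustrative):
sdk.record_metric_value("accuracy", 0.90)
# Same metric name in the same execution: this second call overwrites
# the first, so only 0.95 is retained for this execution.
sdk.record_metric_value("accuracy", 0.95)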
Warning
This function can only be used in the source code of a pipeline running on the platform. When used outside of pipeline code, it doesn't send metrics and displays a warning message.
Parameters
name (str) - The name of the metric to store.
value (float) - The value of the metric to store.
Returns
(bool) - True if the metric was sent, False otherwise.
Example
Here is a very simple example of pipeline code that sends two different metrics.
Note
Don't forget to import the craft-ai-sdk package in the pipeline code and to list the library in your requirements.txt so that it is installed in the pipeline execution context.
from craft_ai_sdk import CraftAiSdk

def metricsPipeline():
    sdk = CraftAiSdk()
    # Some code
    sdk.record_metric_value("accuracy", 0.1409)
    sdk.record_metric_value("loss", 1/3)
    print("Metrics are sent")
Get metrics
The get_metrics function retrieves a list of pipeline metrics. You can filter the metrics based on the name, pipeline name, deployment name, or execution ID. It's important to note that only one of the parameters (name, pipeline_name, deployment_name, execution_id) can be set at a time.
Parameters
name (str, optional) - The name of the metric to retrieve.
pipeline_name (str, optional) - Filter metrics by pipeline. If not specified, all pipelines will be considered.
deployment_name (str, optional) - Filter metrics by deployment. If not specified, all deployments will be considered.
execution_id (str, optional) - Filter metrics by execution. If not specified, all executions will be considered.
Returns
The function returns a list of execution metrics as dictionaries. Each metric entry contains the following keys: name, value, created_at, execution_id, deployment_name, pipeline_name.
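Example
Here is a minimal sketch of retrieving metrics outside of pipeline code. The sdk_token and environment_url constructor parameters and the "accuracy" metric name are assumptions for illustration; when the SDK runs outside a pipeline, credentials must be provided explicitly.
from craft_ai_sdk import CraftAiSdk

# Outside of pipeline code, credentials are provided explicitly
# (parameter names assumed for illustration).
sdk = CraftAiSdk(
    sdk_token="<your-sdk-token>",
    environment_url="<your-environment-url>",
)

# Only one filter can be set at a time; here we filter by metric name.
metrics = sdk.get_metrics(name="accuracy")
for metric in metrics:
    print(metric["name"], metric["value"], metric["execution_id"])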
List Metrics
The Craft AI platform provides robust features for defining and recording list metrics during pipeline execution. This functionality allows you to store the name and corresponding list of values for a specific metric.
To create or update a list metric within a pipeline code, you can utilize the record_list_metric_values() function. Afterwards, you can retrieve your metrics outside the pipeline using the get_list_metrics() function. Additionally, you can access all your metric values in the web interface via the Metrics tab on the Execution page.
As with pipeline metrics, list metric values can only be numbers (integer or float).
Upload list metrics
The record_list_metric_values() function enables you to add values to a metric list by specifying the name of the metric list and the corresponding values. There is no need to declare anything outside of the pipeline; simply use record_list_metric_values() in your pipeline code, as you would for pipeline metrics.
Note that record_list_metric_values() can only be used within the source code of a pipeline running on the platform. When uploading list metrics, you can either pass a Python list directly or upload values individually under the same metric name; successive calls automatically accumulate into a single list.
Warning
This function can only be used in the source code of a pipeline running on the platform. When used outside of pipeline code, it doesn't send metrics and displays a warning message.
Parameters
name (str) - Name of the metric list to add values to.
values (list of float or float) - Values to add to the metric list.
Returns
This function returns nothing (None).
Example
Here is a very simple example of pipeline code that sends two different list metrics.
Note
Don't forget to import the craft-ai-sdk package in the pipeline code and to list the library in your requirements.txt so that it is installed in the pipeline execution context.
from craft_ai_sdk import CraftAiSdk
import math

def metricsPipeline():
    sdk = CraftAiSdk()
    # Some code
    # A single list upload
    sdk.record_list_metric_values("accuracy_list", [0.89, 0.92, 0.95])
    # Two list uploads; the values are concatenated into the loss_list metric list
    sdk.record_list_metric_values("loss_list", [1.4, 1.2])
    sdk.record_list_metric_values("loss_list", [1.1, 1.0])
    # Upload values one at a time; they all accumulate into a single metric list named logx
    for i in range(1, 50):
        sdk.record_list_metric_values("logx", math.log(i))
    print("List metrics are sent")
Warning
A pipeline metric and a list metric can have the same name in the same execution. A metric list is limited to a maximum of 50,000 values per execution.
Get list metrics
To retrieve a list of metric lists, you can use the get_list_metrics() function. This function allows you to filter the metric lists based on the name, pipeline name, deployment name, or execution ID.
It's important to note that only one of the parameters (name, pipeline_name, deployment_name, execution_id) can be set at a time.
Parameters
name (str, optional) - Name of the metric list to retrieve.
pipeline_name (str, optional) - Filter metric lists by pipeline, defaults to all the pipelines.
deployment_name (str, optional) - Filter metric lists by deployment, defaults to all the deployments.
execution_id (str, optional) - Filter metric lists by execution, defaults to all the executions.
Returns
The function returns a list of execution metrics as dictionaries. Each metric entry contains the following keys: name, value, created_at, execution_id, deployment_name, pipeline_name.
Here is an example of how to use the get_list_metrics() function:
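This is a minimal sketch, assuming the SDK is used outside of pipeline code with the same sdk_token and environment_url constructor parameters assumed above; the execution ID is a placeholder.
from craft_ai_sdk import CraftAiSdk

# Outside of pipeline code, credentials are provided explicitly
# (parameter names assumed for illustration).
sdk = CraftAiSdk(
    sdk_token="<your-sdk-token>",
    environment_url="<your-environment-url>",
)

# Only one filter can be set at a time; here we filter by execution.
list_metrics = sdk.get_list_metrics(execution_id="<your-execution-id>")
for metric in list_metrics:
    print(metric["name"], metric["value"])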