Execute a pipeline

Executing a pipeline creates an execution on the platform. Each execution is associated with a pipeline, along with the values of its inputs and outputs. The execution runs the pipeline on one or more Kubernetes containers, using the computational resources available in the environment. All the results and artifacts of an execution can be retrieved in the Execution Tracking tab.

There are two ways to execute a pipeline:

  • by creating a deployment: the execution then depends on the selected execution rule and is performed when the execution condition is met (a call to an endpoint, the periodicity of a CRON, etc.)
  • by running it instantly with the SDK: in this case, you must provide the values for each input of the pipeline.

Summary

  1. Run a pipeline
  2. Trigger a deployment with an endpoint execution rule using the Craft AI SDK
  3. Trigger a deployment with an endpoint execution rule using an HTTP request
  4. Get the result of a past execution
  5. Get all executions of a pipeline
| Function name | Method | Return type | Description |
| --- | --- | --- | --- |
| run_pipeline | run_pipeline(pipeline_name, inputs=None, inputs_mapping=None, outputs_mapping=None) | dict | Executes the pipeline on the platform. |
| trigger_endpoint | trigger_endpoint(endpoint_name, endpoint_token, inputs={}, wait_for_results=True) | dict | Triggers a deployment set up with an endpoint. |
| retrieve_endpoint_results | retrieve_endpoint_results(endpoint_name, execution_id, endpoint_token) | dict | Gets the result of an endpoint execution. |
| list_pipeline_executions | list_pipeline_executions(pipeline_name) | list of dict | Lists the executions of a pipeline. |

Run a pipeline

A run is an execution of a pipeline on the platform. The following SDK function runs a pipeline, creating an execution.

run_pipeline(pipeline_name, inputs=None, inputs_mapping=None, outputs_mapping=None)

Parameters

  • pipeline_name (str) -- Name of an existing pipeline.
  • inputs (dict, optional) -- Dictionary of inputs to pass to the pipeline, with input names as dict keys and corresponding values as dict values. For files, the value should be the path to the file or the file content as an instance of io.IOBase. Defaults to None.
  • inputs_mapping (list of instances of InputSource) -- List of input mappings, to map pipeline inputs to different sources (such as environment variables). See InputSource for more details.
  • outputs_mapping (list of instances of OutputDestination) -- List of output mappings, to map pipeline outputs to different destinations (such as the datastore). See OutputDestination for more details.

Returns

Created pipeline execution, represented as a dict with execution_id and outputs as keys. The outputs value is itself a dict, with output names as keys and the corresponding output values as values.

Example of return object:

{
  "execution_id": "my-pipeline-8iud6",
  "outputs": {
      "output_number": 0.117,
      "output_text": "This is working fine"
  }
}
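
For instance, the execution above could be created with a call like the following. This is a minimal sketch: the credentials, the pipeline name "my-pipeline" and its input names are hypothetical placeholders.

from craft_ai_sdk import CraftAiSdk

# Placeholders for your own SDK token and environment URL.
sdk = CraftAiSdk(sdk_token="...", environment_url="https://your_environment_url")

# "my-pipeline" and its inputs are hypothetical names; replace them with
# the name and inputs of your own pipeline.
execution = sdk.run_pipeline(
    pipeline_name="my-pipeline",
    inputs={
        "input_number": 42,
        "input_text": "hello",
    },
)

print(execution["execution_id"])
print(execution["outputs"])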

Trigger a deployment with an endpoint execution rule using the Craft AI SDK

The following SDK function triggers an endpoint deployment of a pipeline.

sdk.trigger_endpoint(endpoint_name, endpoint_token, inputs={}, wait_for_results=True)

Parameters

  • endpoint_name (str) -- Name of the endpoint.
  • endpoint_token (str) -- Token to access the endpoint.
  • inputs (dict) -- Input values for the endpoint call.
  • wait_for_results (bool, optional) -- Whether to automatically call retrieve_endpoint_results once the execution is triggered. Defaults to True.

Returns

Created pipeline execution represented as a dict.
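
As an illustration, a call could look like the following sketch; the endpoint name and token are placeholders for your own deployment.

from craft_ai_sdk import CraftAiSdk

sdk = CraftAiSdk(sdk_token="...", environment_url="https://your_environment_url")

# "my_endpoint" and ENDPOINT_TOKEN are placeholders for the name and token
# of your own endpoint deployment.
ENDPOINT_TOKEN = "..."

result = sdk.trigger_endpoint(
    endpoint_name="my_endpoint",
    endpoint_token=ENDPOINT_TOKEN,
    inputs={"input1": "value1"},
    wait_for_results=True,  # also retrieves the execution outputs
)
print(result)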

Trigger a deployment with an endpoint execution rule using an HTTP request

To trigger a deployment that is set up with an endpoint, you can also send an HTTP request containing the values of the inputs defined in the pipeline.

Example in Python for variable (non-file) inputs:

import requests

r = requests.post(
    "https://your_environment_url/my_endpoint",
    json={
        "input1": "value1",
        "input2": [1, 2, 3],
        "input3": False,
    },
    headers={"Authorization": "EndpointToken " + ENDPOINT_TOKEN},
)

Example in Python for a file input (not available with auto mapping):

import requests

r = requests.post(
    "https://your_environment_url/my_multistep_endpoint",
    files={"data": open("my_file.txt", "rb")},
    headers={"Authorization": "EndpointToken " + ENDPOINT_TOKEN},
)
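
In both cases you can then inspect the HTTP response. A short sketch; the exact shape of the JSON payload depends on your deployment configuration:

# r comes from either of the requests above.
print(r.status_code)
print(r.json())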

Note

We have explained in this documentation how to trigger the endpoint with Python, but you can obviously send the request from any tool (curl, Postman, JavaScript, ...).

Warning

Inputs and outputs have size limits: 0.06 MB for the cumulative inputs and 0.06 MB for the cumulative outputs. This limit applies to all trigger/deployment types (run, endpoint or CRON), regardless of the source or destination of the input/output.

File inputs/outputs are the only ones not affected by this limit, so we recommend using them when transferring large amounts of data.
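
For instance, with run_pipeline a file input can simply be passed as a path to the file or as a file object, as described in the run_pipeline parameters above. A minimal sketch with hypothetical pipeline and input names:

from craft_ai_sdk import CraftAiSdk

sdk = CraftAiSdk(sdk_token="...", environment_url="https://your_environment_url")

# Hypothetical pipeline with a file input named "data". Per the run_pipeline
# parameters above, a file value can be a path or an io.IOBase instance;
# file inputs/outputs are exempt from the 0.06 MB limit.
execution = sdk.run_pipeline(
    pipeline_name="my-pipeline",
    inputs={"data": "path/to/my_file.txt"},
)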

Get the result of a past execution

Get the results of an endpoint execution.

CraftAiSdk.retrieve_endpoint_results(endpoint_name, execution_id, endpoint_token)

Parameters

  • endpoint_name (str) -- Name of the endpoint.
  • execution_id (str) -- ID of the execution, as returned by trigger_endpoint.
  • endpoint_token (str) -- Token to access the endpoint.

Returns

Results of the pipeline execution, represented as a dict with the following key:

  • outputs (dict): Dictionary of outputs of the pipeline with output names as keys and corresponding values as values.
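
For example, you can trigger an endpoint without waiting and fetch its results later. A sketch reusing the hypothetical endpoint name and token from above:

from craft_ai_sdk import CraftAiSdk

sdk = CraftAiSdk(sdk_token="...", environment_url="https://your_environment_url")
ENDPOINT_TOKEN = "..."  # placeholder for your endpoint token

# Trigger the endpoint without waiting for the execution to finish.
execution = sdk.trigger_endpoint(
    endpoint_name="my_endpoint",
    endpoint_token=ENDPOINT_TOKEN,
    inputs={"input1": "value1"},
    wait_for_results=False,
)

# Later, retrieve the outputs of that execution.
results = sdk.retrieve_endpoint_results(
    endpoint_name="my_endpoint",
    execution_id=execution["execution_id"],
    endpoint_token=ENDPOINT_TOKEN,
)
print(results["outputs"])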

Get all executions of a pipeline

Get a list of executions for the given pipeline.

CraftAiSdk.list_pipeline_executions(pipeline_name)

Parameters

  • pipeline_name (str) -- Name of an existing pipeline.

Returns

A list of the pipeline executions, each represented as a dict with the following keys:

  • execution_id (str): Name of the pipeline execution.
  • status (str): Status of the pipeline execution.
  • created_at (str): Date of creation of the pipeline execution.
  • created_by (str): ID of the user who created the pipeline execution. In the case of a pipeline run, this is the user who triggered the run. In the case of an execution via a deployment, this is the user who created the deployment.
  • end_date (str): Date of completion of the pipeline execution.
  • pipeline_name (str): Name of the pipeline used for the execution.
  • deployment_name (str): Name of the deployment used for the execution.
  • steps (list of obj): List of the step executions represented as dict with the following keys:
    • name (str): Name of the step.
    • status (str): Status of the step.
    • start_date (str): Date of start of the step execution.
    • end_date (str): Date of completion of the step execution.
    • commit_id (str): ID of the commit used to build the step.
    • repository_url (str): URL of the repository used to build the step.
    • repository_branch (str): Branch of the repository used to build the step.
    • requirements_path (str): Path of the requirements.txt file.
    • origin (str): Origin of the step; either git_repository or local.
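
A sketch of listing executions, again with a hypothetical pipeline name:

from craft_ai_sdk import CraftAiSdk

sdk = CraftAiSdk(sdk_token="...", environment_url="https://your_environment_url")

# List every execution of the (hypothetical) pipeline and print a summary
# line per execution using the keys described above.
executions = sdk.list_pipeline_executions(pipeline_name="my-pipeline")
for execution in executions:
    print(execution["execution_id"], execution["status"], execution["created_at"])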