200
Pipeline successfully updated. Schema (optional):
- created_at (date-time)
- deployment_desired_status (string): Possible values: [Shutdown, Paused, Running]
- deployment_error (object, optional): Information returned by REST API endpoints on error.
  - details (object): Detailed error metadata. The contents of this field are determined by error_code.
  - error_code (string): Error code is a string that specifies this error type.
  - message (string): Human-readable error message.
- deployment_status (string): Possible values: [Shutdown, Provisioning, Initializing, Paused, Running, Unavailable, Failed, ShuttingDown]. Pipeline status. This type represents the state of the pipeline tracked by the pipeline runner and observed by the API client via the GET /v0/pipelines/{name} endpoint.
The lifecycle of a pipeline: The following automaton captures the lifecycle of the pipeline. Individual states and transitions of the automaton are described below.
- States labeled with the hourglass symbol (⌛) are timed states. The automaton stays in a timed state until the corresponding operation completes or until it transitions to the Failed state after the pre-defined timeout period expires.
- State transitions labeled with API endpoint names (/start, /pause, /shutdown) are triggered by invoking the corresponding endpoint, e.g., POST /v0/pipelines/{name}/start. Note that these only express the desired state and are applied asynchronously by the automaton.
Lifecycle automaton (summary):
  Shutdown ──/start or /pause──► ⌛Provisioning ──► ⌛Initializing ──► Paused
  Paused ──/start──► Running; Running ──/pause──► Paused
  Paused ◄──► Unavailable; Running ◄──► Unavailable
  All states except ShuttingDown can transition to Failed
  Provisioning, Initializing, Paused, Running, Unavailable, and Failed ──/shutdown──► ShuttingDown ──► Shutdown
Desired and actual status: We use the desired state model to manage the lifecycle of a pipeline.
In this model, the pipeline has two status attributes associated with
it at runtime: the desired status, which represents what the user
would like the pipeline to do, and the current status, which
represents the actual state of the pipeline. The pipeline runner
service continuously monitors both fields and steers the pipeline
towards the desired state specified by the user.
Only three of the states in the pipeline automaton above can be
used as desired statuses: Paused , Running , and Shutdown .
These statuses are selected by invoking REST endpoints shown
in the diagram. The user can monitor the current state of the pipeline via the
GET /v0/pipelines/{name} endpoint. In a typical scenario,
the user first sets the desired state, e.g., by invoking the
/start endpoint, and then polls the GET /v0/pipelines/{name}
endpoint to monitor the actual status of the pipeline until its
deployment_status attribute changes to Running, indicating that the pipeline has been successfully initialized and is processing data, or to Failed, indicating an error (a short polling sketch follows this schema listing).
- deployment_status_since (date-time)
- description (string)
- id (uuid)
- name (string)
- platform_version (string)
- program_code (string)
- program_config (object)
  - cache (boolean, optional): If true (default), when a prior compilation with the same checksum already exists, the output of that compilation (i.e., the binary) is reused. Set to false to always trigger a new compilation, which might take longer and can also overwrite an existing binary.
  - profile (string, optional): Possible values: [dev, unoptimized, optimized]. Enumeration of possible compilation profiles that can be passed to the Rust compiler as an argument via cargo build --profile <>. A compilation profile affects, among other things, the compilation speed (how long until the program is ready to run) and the runtime speed (its performance while running).
- program_info (object, optional): Program information is the output of the SQL compiler. It includes information needed for Rust compilation (e.g., generated Rust code) as well as information needed only at runtime (e.g., schema, input/output connectors).
  - input_connectors (object): Input connectors derived from the schema.
  - main_rust (string, optional): Generated main program Rust code: main.rs
  - output_connectors (object): Output connectors derived from the schema.
  - schema (object): A struct containing the tables (inputs) and views for a program. Parsed from the JSON data type of the DDL generated by the SQL compiler.
    - inputs (object[]): case_sensitive (boolean), name (string), fields (object[]: case_sensitive (boolean), name (string), columntype (circular), default (string, optional), lateness (string, optional), watermark (string, optional)), materialized (boolean, optional), properties (object, optional)
    - outputs (object[]): case_sensitive (boolean), name (string), fields (object[]: case_sensitive (boolean), name (string), columntype (circular), default (string, optional), lateness (string, optional), watermark (string, optional)), materialized (boolean, optional), properties (object, optional)
  - udf_stubs (string, optional): Generated user-defined function (UDF) stubs Rust code: stubs.rs
- program_status: Program compilation status.
- program_status_since (date-time)
- program_version (int64)
- runtime_config (object): Global pipeline configuration settings. This is the publicly exposed type for users to configure pipelines.
  - clock_resolution_usecs (int64, optional): Real-time clock resolution in microseconds. This parameter controls the execution of queries that use the NOW() function. The output of such queries depends on the real-time clock and can change over time without any external inputs. The pipeline will update the clock value and trigger incremental recomputation at most once every clock_resolution_usecs microseconds. It is set to 100 milliseconds (100,000 microseconds) by default. Set to null to disable periodic clock updates.
  - cpu_profiler (boolean, optional): Enable the CPU profiler. The default value is true.
  - fault_tolerance (object, optional): Fault-tolerance configuration for runtime startup.
    - checkpoint_interval_secs (int64, optional): Interval between automatic checkpoints, in seconds. The default is 60 seconds. A value of 0 disables automatic checkpointing.
  - max_buffering_delay_usecs (int64, optional): Maximal delay in microseconds to wait for min_batch_size_records records to be buffered by the controller. Defaults to 0.
  - min_batch_size_records (int64, optional): Minimal input batch size. The controller delays pushing input records to the circuit until at least min_batch_size_records records have been received (total across all endpoints) or max_buffering_delay_usecs microseconds have passed since at least one input record has been buffered. Defaults to 0.
  - min_storage_bytes (integer, optional): The minimum estimated number of bytes in a batch of data required to write it to storage. This is provided for debugging and fine-tuning and should ordinarily be left unset. It only has an effect when storage is set to true. A value of 0 will write even empty batches to storage, and nonzero values provide a threshold. usize::MAX would effectively disable storage.
  - resources (object, optional):
    - cpu_cores_max (int64, optional): The maximum number of CPU cores to reserve for an instance of this pipeline.
    - cpu_cores_min (int64, optional): The minimum number of CPU cores to reserve for an instance of this pipeline.
    - memory_mb_max (int64, optional): The maximum memory in megabytes to reserve for an instance of this pipeline.
    - memory_mb_min (int64, optional): The minimum memory in megabytes to reserve for an instance of this pipeline.
    - storage_class (string, optional): Storage class to use for an instance of this pipeline. The class determines storage performance such as IOPS and throughput.
    - storage_mb_max (int64, optional): The total storage in megabytes to reserve for an instance of this pipeline.
  - storage (boolean, optional): Should storage be enabled for this pipeline?
    - If false (default), the pipeline's state is kept in in-memory data structures. This is useful if the pipeline's state will fit in memory and if the pipeline is ephemeral and does not need to be recovered after a restart. The pipeline will most likely run faster since it does not need to access storage.
    - If true, the pipeline's state is kept on storage. This allows the pipeline to work with state that will not fit into memory. It also allows the state to be checkpointed and recovered across restarts. This feature is currently experimental.
  - tracing (boolean, optional)
  - tracing_endpoint_jaeger (string, optional): Jaeger tracing endpoint to send tracing information to.
  - workers (int32, optional): Number of DBSP worker threads.
- udf_rust (string)
- udf_toml (string)
- version (int64)
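The desired-versus-actual-status workflow described under deployment_status can be exercised with a few HTTP calls. The following is a minimal sketch, not a definitive client: the server address and pipeline name are placeholder assumptions, and only GET /v0/pipelines/{name} and POST /v0/pipelines/{name}/start are taken from this page.

```python
import time
import requests

BASE = "http://localhost:8080"   # assumption: local API server
NAME = "my-pipeline"             # assumption: an existing pipeline name

# 1. Express the desired state (asynchronous: the endpoint only records intent).
requests.post(f"{BASE}/v0/pipelines/{NAME}/start").raise_for_status()

# 2. Poll the actual state until the runner reaches Running or Failed.
while True:
    resp = requests.get(f"{BASE}/v0/pipelines/{NAME}")
    resp.raise_for_status()
    pipeline = resp.json()
    status = pipeline["deployment_status"]
    if status == "Running":
        print("pipeline is initialized and processing data")
        break
    if status == "Failed":
        raise RuntimeError(pipeline.get("deployment_error"))
    time.sleep(1)  # Provisioning / Initializing are timed states; keep polling
```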
201
Pipeline successfully created. Schema (optional):
- created_at (date-time)
- deployment_desired_status (string): Possible values: [Shutdown, Paused, Running]
- deployment_error (object, optional): Information returned by REST API endpoints on error.
  - details (object): Detailed error metadata. The contents of this field are determined by error_code.
  - error_code (string): Error code is a string that specifies this error type.
  - message (string): Human-readable error message.
- deployment_status (string): Possible values: [Shutdown, Provisioning, Initializing, Paused, Running, Unavailable, Failed, ShuttingDown]. Pipeline status. This type represents the state of the pipeline tracked by the pipeline runner and observed by the API client via the GET /v0/pipelines/{name} endpoint.
The lifecycle of a pipeline: The following automaton captures the lifecycle of the pipeline. Individual states and transitions of the automaton are described below.
- States labeled with the hourglass symbol (⌛) are timed states. The automaton stays in a timed state until the corresponding operation completes or until it transitions to the Failed state after the pre-defined timeout period expires.
- State transitions labeled with API endpoint names (/start, /pause, /shutdown) are triggered by invoking the corresponding endpoint, e.g., POST /v0/pipelines/{name}/start. Note that these only express the desired state and are applied asynchronously by the automaton.
Lifecycle automaton (summary):
  Shutdown ──/start or /pause──► ⌛Provisioning ──► ⌛Initializing ──► Paused
  Paused ──/start──► Running; Running ──/pause──► Paused
  Paused ◄──► Unavailable; Running ◄──► Unavailable
  All states except ShuttingDown can transition to Failed
  Provisioning, Initializing, Paused, Running, Unavailable, and Failed ──/shutdown──► ShuttingDown ──► Shutdown
Desired and actual status: We use the desired state model to manage the lifecycle of a pipeline.
In this model, the pipeline has two status attributes associated with
it at runtime: the desired status, which represents what the user
would like the pipeline to do, and the current status, which
represents the actual state of the pipeline. The pipeline runner
service continuously monitors both fields and steers the pipeline
towards the desired state specified by the user.
Only three of the states in the pipeline automaton above can be
used as desired statuses: Paused , Running , and Shutdown .
These statuses are selected by invoking REST endpoints shown
in the diagram. The user can monitor the current state of the pipeline via the
GET /v0/pipelines/{name} endpoint. In a typical scenario,
the user first sets the desired state, e.g., by invoking the
/start endpoint, and then polls the GET /v0/pipelines/{name}
endpoint to monitor the actual status of the pipeline until its
deployment_status attribute changes to Running, indicating that the pipeline has been successfully initialized and is processing data, or to Failed, indicating an error.
- deployment_status_since (date-time)
- description (string)
- id (uuid)
- name (string)
- platform_version (string)
- program_code (string)
- program_config (object)
  - cache (boolean, optional): If true (default), when a prior compilation with the same checksum already exists, the output of that compilation (i.e., the binary) is reused. Set to false to always trigger a new compilation, which might take longer and can also overwrite an existing binary.
  - profile (string, optional): Possible values: [dev, unoptimized, optimized]. Enumeration of possible compilation profiles that can be passed to the Rust compiler as an argument via cargo build --profile <>. A compilation profile affects, among other things, the compilation speed (how long until the program is ready to run) and the runtime speed (its performance while running).
- program_info (object, optional): Program information is the output of the SQL compiler. It includes information needed for Rust compilation (e.g., generated Rust code) as well as information needed only at runtime (e.g., schema, input/output connectors).
  - input_connectors (object): Input connectors derived from the schema.
  - main_rust (string, optional): Generated main program Rust code: main.rs
  - output_connectors (object): Output connectors derived from the schema.
  - schema (object): A struct containing the tables (inputs) and views for a program. Parsed from the JSON data type of the DDL generated by the SQL compiler.
    - inputs (object[]): case_sensitive (boolean), name (string), fields (object[]: case_sensitive (boolean), name (string), columntype (circular), default (string, optional), lateness (string, optional), watermark (string, optional)), materialized (boolean, optional), properties (object, optional)
    - outputs (object[]): case_sensitive (boolean), name (string), fields (object[]: case_sensitive (boolean), name (string), columntype (circular), default (string, optional), lateness (string, optional), watermark (string, optional)), materialized (boolean, optional), properties (object, optional)
  - udf_stubs (string, optional): Generated user-defined function (UDF) stubs Rust code: stubs.rs
- program_status: Program compilation status.
- program_status_since (date-time)
- program_version (int64)
- runtime_config (object): Global pipeline configuration settings. This is the publicly exposed type for users to configure pipelines.
  - clock_resolution_usecs (int64, optional): Real-time clock resolution in microseconds. This parameter controls the execution of queries that use the NOW() function. The output of such queries depends on the real-time clock and can change over time without any external inputs. The pipeline will update the clock value and trigger incremental recomputation at most once every clock_resolution_usecs microseconds. It is set to 100 milliseconds (100,000 microseconds) by default. Set to null to disable periodic clock updates.
  - cpu_profiler (boolean, optional): Enable the CPU profiler. The default value is true.
  - fault_tolerance (object, optional): Fault-tolerance configuration for runtime startup.
    - checkpoint_interval_secs (int64, optional): Interval between automatic checkpoints, in seconds. The default is 60 seconds. A value of 0 disables automatic checkpointing.
  - max_buffering_delay_usecs (int64, optional): Maximal delay in microseconds to wait for min_batch_size_records records to be buffered by the controller. Defaults to 0.
  - min_batch_size_records (int64, optional): Minimal input batch size. The controller delays pushing input records to the circuit until at least min_batch_size_records records have been received (total across all endpoints) or max_buffering_delay_usecs microseconds have passed since at least one input record has been buffered. Defaults to 0.
  - min_storage_bytes (integer, optional): The minimum estimated number of bytes in a batch of data required to write it to storage. This is provided for debugging and fine-tuning and should ordinarily be left unset. It only has an effect when storage is set to true. A value of 0 will write even empty batches to storage, and nonzero values provide a threshold. usize::MAX would effectively disable storage.
  - resources (object, optional):
    - cpu_cores_max (int64, optional): The maximum number of CPU cores to reserve for an instance of this pipeline.
    - cpu_cores_min (int64, optional): The minimum number of CPU cores to reserve for an instance of this pipeline.
    - memory_mb_max (int64, optional): The maximum memory in megabytes to reserve for an instance of this pipeline.
    - memory_mb_min (int64, optional): The minimum memory in megabytes to reserve for an instance of this pipeline.
    - storage_class (string, optional): Storage class to use for an instance of this pipeline. The class determines storage performance such as IOPS and throughput.
    - storage_mb_max (int64, optional): The total storage in megabytes to reserve for an instance of this pipeline.
  - storage (boolean, optional): Should storage be enabled for this pipeline?
    - If false (default), the pipeline's state is kept in in-memory data structures. This is useful if the pipeline's state will fit in memory and if the pipeline is ephemeral and does not need to be recovered after a restart. The pipeline will most likely run faster since it does not need to access storage.
    - If true, the pipeline's state is kept on storage. This allows the pipeline to work with state that will not fit into memory. It also allows the state to be checkpointed and recovered across restarts. This feature is currently experimental.
  - tracing (boolean, optional)
  - tracing_endpoint_jaeger (string, optional): Jaeger tracing endpoint to send tracing information to.
  - workers (int32, optional): Number of DBSP worker threads.
- udf_rust (string)
- udf_toml (string)
- version (int64)
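The program_config and runtime_config fields listed above are supplied when a pipeline is created or updated. The sketch below is illustrative only: it assumes the create-or-update endpoint is PUT /v0/pipelines/{name} (the exact path is not stated on this page), and the server address, pipeline name, and SQL are placeholders; field names follow the schema above.

```python
import requests

BASE = "http://localhost:8080"   # assumption: local API server
NAME = "my-pipeline"             # placeholder pipeline name

body = {
    "description": "example pipeline",
    "program_code": "CREATE TABLE t (x INT);",  # placeholder SQL program
    "program_config": {"cache": True, "profile": "optimized"},
    "runtime_config": {
        "workers": 4,                      # number of DBSP worker threads
        "storage": False,                  # keep state in memory (default)
        "min_batch_size_records": 0,       # push input records immediately
        "max_buffering_delay_usecs": 0,
        "clock_resolution_usecs": 100_000, # default: 100 ms
    },
}

resp = requests.put(f"{BASE}/v0/pipelines/{NAME}", json=body)
if resp.status_code == 201:
    print("pipeline created")
elif resp.status_code == 200:
    print("pipeline updated")
else:
    resp.raise_for_status()
```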
400
Pipeline needs to be shut down to be modified. Schema (optional):
- details (object): Detailed error metadata. The contents of this field are determined by error_code.
- error_code (string): Error code is a string that specifies this error type.
- message (string): Human-readable error message.
409
Cannot rename pipeline as the name already exists. Schema (optional):
- details (object): Detailed error metadata. The contents of this field are determined by error_code.
- error_code (string): Error code is a string that specifies this error type.
- message (string): Human-readable error message.
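Both the 400 and 409 responses share the same error envelope (details, error_code, message). A minimal handling sketch, using the same placeholder server address and the assumed PUT endpoint from the sketch above:

```python
import requests

BASE = "http://localhost:8080"   # assumption: local API server

# Attempt a modification the server may reject with 400 (not shut down) or 409 (name taken).
resp = requests.put(f"{BASE}/v0/pipelines/old-name", json={"name": "already-taken"})

if resp.status_code in (400, 409):
    err = resp.json()
    # error_code identifies the error type; the contents of details depend on it.
    print(f"{err['error_code']}: {err['message']}")
    print("details:", err.get("details"))
else:
    resp.raise_for_status()
```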