**200: Pipeline successfully updated**

Response body schema:

- `created_at` (date-time)
- `deployment_desired_status` (string): Possible values: `Stopped`, `Unavailable`, `Standby`, `Paused`, `Running`, `Suspended`
- `deployment_desired_status_since` (date-time)
- `deployment_error` (object, optional): Information returned by REST API endpoints on error.
  - `details` (object): Detailed error metadata. The contents of this field are determined by `error_code`.
  - `error_code` (string): A string that identifies this error type.
  - `message` (string): Human-readable error message.
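  For illustration, a `deployment_error` value might deserialize to something like the following Python dict. The `error_code` and `details` contents shown here are hypothetical placeholders, not values documented on this page.

  ```python
  # Hypothetical deployment_error object; the concrete error_code values and
  # the keys inside details depend on the error type that occurred.
  deployment_error = {
      "error_code": "SomeErrorCode",  # placeholder, not a documented code
      "message": "Human-readable description of what went wrong",
      "details": {},  # contents are determined by error_code
  }
  ```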
- `deployment_id` (uuid, optional)
- `deployment_initial` (string, optional): Possible values: `Unavailable`, `Standby`, `Paused`, `Running`, `Suspended`
- `deployment_resources_desired_status` (string): Possible values: `Stopped`, `Provisioned`
- `deployment_resources_desired_status_since` (date-time)
- `deployment_resources_status` (string): Possible values: `Stopped`, `Provisioning`, `Provisioned`, `Stopping`. Pipeline resources status.

  ```
         /start (early start failed)
     ┌────────────────────────┐
     │                        ▼
  Stopped ◄──────────────── Stopping
     │                        ▲
     │ /start                 │ /stop?force=true
     │                        │ OR: timeout (from Provisioning)
     ▼                        │ OR: fatal runtime or resource error
  Provisioning ───────────────┤ OR: runtime status is Suspended
     │                        │
     ▼                        │
  Provisioned ────────────────┘
  ```

**Desired and actual status.** We use the desired state model to manage the lifecycle of a pipeline. In this model, the pipeline has two status attributes associated with it: the desired status, which represents what the user would like the pipeline to do, and the current status, which represents the actual (last observed) status of the pipeline. The pipeline runner service continuously monitors the desired status field to decide where to steer the pipeline. There are two desired statuses:

- `Provisioned` (set by invoking `/start`)
- `Stopped` (set by invoking `/stop?force=true`)

The user can monitor the current status of the pipeline via the `GET /v0/pipelines/{name}` endpoint. In a typical scenario, the user first sets the desired status, e.g., by invoking the `/start` endpoint, and then polls the `GET /v0/pipelines/{name}` endpoint until the `deployment_resources_status` attribute changes either to `Provisioned`, indicating that the pipeline has been successfully provisioned, or to `Stopped` with `deployment_error` set.
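As a concrete sketch of this polling pattern, the Python snippet below starts a pipeline and waits for provisioning to finish. It assumes a Feldera instance at `http://localhost:8080`, a pipeline named `my-pipeline`, that the `/start` action is invoked as a POST request, and that any authentication is handled separately; adapt these details to your deployment.

```python
import time
import requests  # third-party HTTP client, used here for brevity

BASE = "http://localhost:8080"   # assumed API base URL
NAME = "my-pipeline"             # assumed pipeline name

# Set the desired status to Provisioned by invoking /start
# (assuming the action endpoint accepts POST).
requests.post(f"{BASE}/v0/pipelines/{NAME}/start").raise_for_status()

# Poll the current status until provisioning succeeds or the pipeline
# falls back to Stopped with a deployment_error.
while True:
    pipeline = requests.get(f"{BASE}/v0/pipelines/{NAME}").json()
    status = pipeline["deployment_resources_status"]
    if status == "Provisioned":
        print("pipeline provisioned")
        break
    if status == "Stopped" and pipeline.get("deployment_error"):
        raise RuntimeError(pipeline["deployment_error"]["message"])
    time.sleep(2)  # poll interval; tune as needed
```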
- `deployment_resources_status_since` (date-time)
- `deployment_runtime_desired_status` (string, optional): Possible values: `Unavailable`, `Standby`, `Paused`, `Running`, `Suspended`
- `deployment_runtime_desired_status_since` (date-time, optional)
- `deployment_runtime_status` (string, optional): Possible values: `Unavailable`, `Standby`, `Initializing`, `AwaitingApproval`, `Bootstrapping`, `Replaying`, `Paused`, `Running`, `Suspended`. Runtime status of the pipeline. Of these statuses, only `Unavailable` is determined by the runner; all other statuses are determined by the pipeline and taken over by the runner.
- `deployment_runtime_status_since` (date-time, optional)
- `deployment_status` (string): Possible values: `Stopped`, `Provisioning`, `Unavailable`, `Standby`, `AwaitingApproval`, `Initializing`, `Bootstrapping`, `Replaying`, `Paused`, `Running`, `Suspended`, `Stopping`
- `deployment_status_since` (date-time)
- `description` (string)
- `id` (uuid)
- `name` (string)
- `platform_version` (string)
- `program_code` (string)
- `program_config` (object):
  - `cache` (boolean, optional): If `true` (the default), when a prior compilation with the same checksum already exists, the output of that compilation (i.e., the binary) is reused. Set to `false` to always trigger a new compilation, which might take longer and can also overwrite an existing binary.
  - `profile` (string, optional): Possible values: `dev`, `unoptimized`, `optimized`, `optimized_symbols`. Compilation profile passed to the Rust compiler via `cargo build --profile <>`. A compilation profile affects, among other things, the compilation speed (how long until the program is ready to run) and runtime speed (the performance while running).
  - `runtime_version` (string, optional): Override the runtime version of the pipeline being executed. Warning: this setting is experimental and may change in the future. It requires the platform to run with the unstable feature `runtime_version` enabled, should only be used for testing purposes, and requires network access. A runtime version can be specified as a version tag or a SHA taken from the `feldera/feldera` repository main branch, for example `v0.96.0` or `f4dcac0989ca0fda7d2eb93602a49d007cb3b0ae`. A platform of version `0.x.y` may be capable of running future and past runtimes with versions `>=0.x.y` and `<=0.x.y` until breaking API changes happen; the exact bounds for each platform version are unspecified until a stable version is reached. Compatibility is only guaranteed if platform and runtime version are exact matches. Note that any enterprise features are currently considered part of the platform. If not set (null), the runtime version is the same as the platform version.
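  As an illustration of the `program_config` object, the following Python dict forces a fresh compilation with the `optimized` profile; the values are examples, not defaults.

  ```python
  # Illustrative program_config payload.
  program_config = {
      "cache": False,          # always recompile instead of reusing a cached binary
      "profile": "optimized",  # one of: dev, unoptimized, optimized, optimized_symbols
      # "runtime_version": "v0.96.0",  # experimental override; normally left unset
  }
  ```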
- `program_error` (object): Log, warning, and error information about the program compilation.
  - `rust_compilation` (object, optional): Rust compilation information.
    - `exit_code` (int32): Exit code of the `cargo` compilation command.
    - `stderr` (string): Output printed to stderr by the `cargo` compilation command.
    - `stdout` (string): Output printed to stdout by the `cargo` compilation command.
  - `sql_compilation` (object, optional): SQL compilation information.
    - `exit_code` (int32): Exit code of the SQL compiler.
    - `messages` (object[]): Messages (warnings and errors) generated by the SQL compiler.
      - `end_column` (integer)
      - `end_line_number` (integer)
      - `error_type` (string)
      - `message` (string)
      - `snippet` (string, optional)
      - `start_column` (integer)
      - `start_line_number` (integer)
      - `warning` (boolean)
  - `system_error` (string, optional): System error that occurred. Set to `Some(...)` upon transition to `SystemError`; set to `None` upon transition to `Pending`.
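  As a small sketch of how these compilation fields might be consumed (same assumed base URL and pipeline name as in the earlier snippet), the following prints SQL compiler diagnostics when the program fails to compile:

  ```python
  import requests

  BASE = "http://localhost:8080"  # assumed API base URL
  NAME = "my-pipeline"            # assumed pipeline name

  pipeline = requests.get(f"{BASE}/v0/pipelines/{NAME}").json()
  if pipeline["program_status"] == "SqlError":
      sql = pipeline["program_error"]["sql_compilation"]
      for msg in sql["messages"]:
          kind = "warning" if msg["warning"] else msg["error_type"]
          print(f"{kind} at line {msg['start_line_number']}, "
                f"column {msg['start_column']}: {msg['message']}")
  ```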
- `program_info` (object, optional): Program information, the result of the SQL compilation.
  - `input_connectors` (object): Input connectors derived from the schema.
  - `output_connectors` (object): Output connectors derived from the schema.
  - `schema` (object): A struct containing the tables (inputs) and views for a program, parsed from the JSON data type of the DDL generated by the SQL compiler.
    - `inputs` (object[]):
      - `case_sensitive` (boolean)
      - `name` (string)
      - `fields` (object[]): each with `case_sensitive` (boolean), `name` (string), `columntype` (circular), `default` (string, optional), `lateness` (string, optional), `unused` (boolean), `watermark` (string, optional)
      - `materialized` (boolean, optional)
      - `properties` (object, optional)
    - `outputs` (object[]):
      - `case_sensitive` (boolean)
      - `name` (string)
      - `fields` (object[]): each with `case_sensitive` (boolean), `name` (string), `columntype` (circular), `default` (string, optional), `lateness` (string, optional), `unused` (boolean), `watermark` (string, optional)
      - `materialized` (boolean, optional)
      - `properties` (object, optional)
  - `udf_stubs` (string): Generated user-defined function (UDF) stub Rust code (`stubs.rs`).
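  For example, once compilation has produced `program_info`, the table and view names can be read from its `schema` object (same assumptions as the earlier snippets):

  ```python
  import requests

  BASE = "http://localhost:8080"  # assumed API base URL
  NAME = "my-pipeline"            # assumed pipeline name

  info = requests.get(f"{BASE}/v0/pipelines/{NAME}").json().get("program_info")
  if info:
      schema = info["schema"]
      tables = [t["name"] for t in schema["inputs"]]
      views = [v["name"] for v in schema["outputs"]]
      print("tables:", tables)
      print("views:", views)
  ```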
- `program_status` (string): Possible values: `Pending`, `CompilingSql`, `SqlCompiled`, `CompilingRust`, `Success`, `SqlError`, `RustError`, `SystemError`. Program compilation status.
- `program_status_since` (date-time)
- `program_version` (int64)
- `refresh_version` (int64)
- `runtime_config` (object): Global pipeline configuration settings. This is the publicly exposed type for users to configure pipelines.
  - `checkpoint_during_suspend` (boolean, optional): Deprecated; setting this to true or false no longer has any effect.
  - `clock_resolution_usecs` (int64, optional): Real-time clock resolution in microseconds. This parameter controls the execution of queries that use the `NOW()` function. The output of such queries depends on the real-time clock and can change over time without any external inputs. If the query uses `NOW()`, the pipeline updates the clock value and triggers incremental recomputation at most every `clock_resolution_usecs` microseconds. If the query does not use `NOW()`, clock value updates are suppressed and the pipeline ignores this setting. The default is 1 second (1,000,000 microseconds).
  - `cpu_profiler` (boolean, optional): Enable the CPU profiler. The default value is `true`.
  - `dev_tweaks` (object, optional): Optional settings for tweaking Feldera internals. The available key-value pairs change from one version of Feldera to another, so users should not depend on particular settings being available, or on their behavior.
  - `fault_tolerance` (object, optional): Fault-tolerance configuration. The default `FtConfig` (via `FtConfig::default`) disables fault tolerance, which is the configuration one gets if `RuntimeConfig` omits the fault-tolerance configuration. The default value for `FtConfig::model` enables fault tolerance, as `Some(FtModel::default())`; this is the configuration one gets if `RuntimeConfig` includes a fault-tolerance configuration but does not specify a particular model.
    - `checkpoint_interval_secs` (int64, optional): Interval between automatic checkpoints, in seconds. The default is 60 seconds. Values less than 1 or greater than 3600 will be forced into that range.
    - `model` (optional)
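  As a sketch, a `runtime_config` fragment that enables fault tolerance with a custom checkpoint interval could be expressed as the Python dict below. The accepted values for `model` are not listed on this page, so it is left unset here to use the default.

  ```python
  # Illustrative runtime_config fragment enabling fault tolerance.
  runtime_config = {
      "clock_resolution_usecs": 1_000_000,  # 1 second, the documented default
      "fault_tolerance": {
          # Presence of this object enables fault tolerance with the default model.
          "checkpoint_interval_secs": 120,  # clamped to the 1..3600 range
      },
  }
  ```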
  - `http_workers` (int64, optional): Sets the number of available runtime threads for the HTTP server. In most cases this does not need to be set explicitly and the default is sufficient. Can be increased if the pipeline HTTP API operations are a bottleneck. If not specified, the default is set to `workers`.
  - `init_containers` (optional): Specification of additional (sidecar) containers.
  - `io_workers` (int64, optional): Sets the number of available runtime threads for async I/O tasks. This affects some networking and file I/O operations, especially adapters and ad-hoc queries. In most cases this does not need to be set explicitly and the default is sufficient. Can be increased if ingress, egress, or ad-hoc queries are a bottleneck. If not specified, the default is set to `workers`.
  - `logging` (string, optional): Log filtering directives. If set to a valid `tracing-subscriber` filter, this controls the log messages emitted by the pipeline process. Otherwise, or if the filter has invalid syntax, messages at "info" severity and higher are written to the log and all others are discarded.
  - `max_buffering_delay_usecs` (int64, optional): Maximal delay in microseconds to wait for `min_batch_size_records` to be buffered by the controller. Defaults to 0.
  - `max_parallel_connector_init` (int64, optional): The maximum number of connectors initialized in parallel during pipeline startup. At startup, the pipeline must initialize all of its input and output connectors. Depending on the number and types of connectors, this can take a long time. To accelerate the process, multiple connectors are initialized concurrently. This option controls the maximum number of connectors that can be initialized in parallel. The default is 10.
  - `min_batch_size_records` (int64, optional): Minimal input batch size. The controller delays pushing input records to the circuit until at least `min_batch_size_records` records have been received (total across all endpoints) or `max_buffering_delay_usecs` microseconds have passed since at least one input record has been buffered. Defaults to 0.
  - `pin_cpus` (integer[], optional): Optionally, a list of CPU numbers for CPUs to which the pipeline may pin its worker threads. Specify at least twice as many CPU numbers as `workers`. CPUs are generally numbered starting from 0. The pipeline might not be able to honor CPU pinning requests. CPU pinning can make pipelines run faster and perform more consistently, as long as different pipelines running on the same machine are pinned to different CPUs.
  - `provisioning_timeout_secs` (int64, optional): Timeout in seconds for the `Provisioning` phase of the pipeline. Setting this value overrides the runner's default.
  - `resources` (object, optional):
    - `cpu_cores_max` (double, optional): The maximum number of CPU cores to reserve for an instance of this pipeline.
    - `cpu_cores_min` (double, optional): The minimum number of CPU cores to reserve for an instance of this pipeline.
    - `memory_mb_max` (int64, optional): The maximum memory in megabytes to reserve for an instance of this pipeline.
    - `memory_mb_min` (int64, optional): The minimum memory in megabytes to reserve for an instance of this pipeline.
    - `namespace` (string, optional): Kubernetes namespace to use for an instance of this pipeline. The namespace determines the scope of names for resources created for the pipeline. If not set, the pipeline is deployed in the same namespace as the control plane.
    - `service_account_name` (string, optional): Kubernetes service account name to use for an instance of this pipeline. The account determines permissions and access controls.
    - `storage_class` (string, optional): Storage class to use for an instance of this pipeline. The class determines storage performance such as IOPS and throughput.
    - `storage_mb_max` (int64, optional): The total storage in megabytes to reserve for an instance of this pipeline.
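  A sketch of a `resources` block that bounds the pipeline's footprint; the values are illustrative, not recommendations.

  ```python
  # Illustrative resources fragment inside runtime_config.
  resources = {
      "cpu_cores_min": 2.0,
      "cpu_cores_max": 4.0,
      "memory_mb_min": 2048,
      "memory_mb_max": 8192,
      "storage_mb_max": 51200,
      # "namespace" and "service_account_name" apply to Kubernetes deployments.
  }
  ```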
  - `storage` (object, optional): Storage configuration for a pipeline.
    - `backend` (optional): Backend storage configuration.
    - `cache_mib` (integer, optional): The maximum size of the in-memory storage cache, in MiB. If set, the specified cache size is spread across all the foreground and background threads. If unset, each foreground or background thread cache is limited to 256 MiB.
    - `compression` (string, optional): Possible values: `default`, `none`, `snappy`. Storage compression algorithm.
    - `min_step_storage_bytes` (integer, optional): For a batch of data passed through the pipeline during a single step, the minimum estimated number of bytes to write it to storage. This is provided for debugging and fine-tuning and should ordinarily be left unset. A value of 0 will write even empty batches to storage, and nonzero values provide a threshold. `usize::MAX`, the default, effectively disables storage for such batches. If it is set to another value, it should ordinarily be greater than or equal to `min_storage_bytes`.
    - `min_storage_bytes` (integer, optional): For a batch of data maintained as part of a persistent index during a pipeline run, the minimum estimated number of bytes to write it to storage. This is provided for debugging and fine-tuning and should ordinarily be left unset. A value of 0 will write even empty batches to storage, and nonzero values provide a threshold. `usize::MAX` would effectively disable storage for such batches. The default is 1,048,576 (1 MiB).
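  A minimal `storage` configuration sketch; `backend` is left unset here because its accepted values are not listed on this page.

  ```python
  # Illustrative storage fragment inside runtime_config.
  storage = {
      "cache_mib": 4096,        # shared across foreground and background threads
      "compression": "snappy",  # one of: default, none, snappy
      # min_step_storage_bytes / min_storage_bytes are debugging knobs;
      # they should ordinarily be left unset.
  }
  ```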
  - `tracing` (boolean, optional)
  - `tracing_endpoint_jaeger` (string, optional): Jaeger tracing endpoint to send tracing information to.
  - `workers` (int32, optional): Number of DBSP worker threads. Each DBSP "foreground" worker thread is paired with a "background" thread for LSM merging, making the total number of threads twice the specified number. The typical sweet spot for the number of workers is between 4 and 16. Each worker increases overall memory consumption for data structures used during a step.
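  Putting the threading knobs together, a sketch of a `runtime_config` fragment that uses 8 workers and pins their threads; `pin_cpus` lists at least twice as many CPUs as `workers` because each foreground worker is paired with a background thread.

  ```python
  # Illustrative runtime_config fragment for worker threading.
  runtime_config = {
      "workers": 8,
      # 16 CPU numbers: at least 2 x workers, one per foreground/background thread.
      "pin_cpus": list(range(16)),
  }
  ```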
- `storage_status` (string): Possible values: `Cleared`, `InUse`, `Clearing`. Storage status. The storage status can only transition when the resources status is `Stopped`.

  ```
      Cleared ───┐
         ▲       │
  /clear │       │
         │       │
      Clearing   │
         ▲       │
         │       │
       InUse ◄───┘
  ```
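  As a sketch of how these storage transitions are driven (assuming the `/stop` and `/clear` actions are POST requests, as with `/start` earlier, and the same base URL and pipeline name):

  ```python
  import time
  import requests

  BASE = "http://localhost:8080"  # assumed API base URL
  NAME = "my-pipeline"            # assumed pipeline name

  def wait_for(field, value):
      # Poll GET /v0/pipelines/{name} until the given field reaches the value.
      while requests.get(f"{BASE}/v0/pipelines/{NAME}").json()[field] != value:
          time.sleep(1)

  # Storage can only transition while the resources status is Stopped.
  requests.post(f"{BASE}/v0/pipelines/{NAME}/stop",
                params={"force": "true"}).raise_for_status()
  wait_for("deployment_resources_status", "Stopped")

  # Request clearing; the status moves InUse -> Clearing -> Cleared.
  requests.post(f"{BASE}/v0/pipelines/{NAME}/clear").raise_for_status()
  wait_for("storage_status", "Cleared")
  ```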
 
- `udf_rust` (string)
- `udf_toml` (string)
- `version` (int64)
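Finally, tying the pieces together: this 200 response is returned when the pipeline is updated, so a sketch of issuing such an update and inspecting a few of the fields described above might look like the following. The use of PATCH and the exact request body shape are assumptions based on the field names on this page; consult the endpoint description for the authoritative request format.

```python
import requests

BASE = "http://localhost:8080"  # assumed API base URL
NAME = "my-pipeline"            # assumed pipeline name

resp = requests.patch(
    f"{BASE}/v0/pipelines/{NAME}",
    json={
        "description": "example pipeline",
        "program_config": {"profile": "optimized"},
        "runtime_config": {"workers": 8},
    },
)
resp.raise_for_status()  # 200: Pipeline successfully updated
pipeline = resp.json()
print(pipeline["name"], pipeline["version"], pipeline["program_status"])
```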