feldera package
feldera.pipeline_builder module
- class feldera.pipeline_builder.PipelineBuilder(client: ~feldera.rest.feldera_client.FelderaClient, name: str, sql: str, udf_rust: str = '', udf_toml: str = '', description: str = '', compilation_profile: ~feldera.enums.CompilationProfile = CompilationProfile.OPTIMIZED, runtime_config: ~feldera.runtime_config.RuntimeConfig = <feldera.runtime_config.RuntimeConfig object>)[source]
Bases:
object
A builder for creating a Feldera Pipeline.
- Parameters:
client – The FelderaClient instance
name – The name of the pipeline
description – The description of the pipeline
sql – The SQL code of the pipeline
udf_rust – Rust code for UDFs
udf_toml – Rust dependencies required by UDFs (in the TOML format)
compilation_profile – The compilation profile to use
runtime_config – The runtime config to use
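Example (illustrative sketch): build and deploy a simple pipeline against a local Feldera instance. FelderaClient.localhost() is listed under feldera.rest.feldera_client; the create_or_replace() call on the builder is an assumption and may differ in your SDK version.

    from feldera.rest.feldera_client import FelderaClient
    from feldera.pipeline_builder import PipelineBuilder

    client = FelderaClient.localhost()            # client for a local Feldera deployment
    sql = "CREATE TABLE t (x INT); CREATE MATERIALIZED VIEW my_view AS SELECT x FROM t;"
    builder = PipelineBuilder(client, name="example", sql=sql, description="demo pipeline")
    pipeline = builder.create_or_replace()        # assumed builder method returning a Pipeline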
feldera.pipeline module
- class feldera.pipeline.Pipeline(client: FelderaClient)[source]
Bases:
object
- checkpoint()[source]
Checkpoints this pipeline if fault tolerance is enabled. See Fault Tolerance in Feldera: <https://docs.feldera.com/fault-tolerance/>
- Raises:
FelderaAPIError – If checkpointing is not enabled.
- delete()[source]
Deletes the pipeline.
The pipeline must be shut down before it can be deleted.
- Raises:
FelderaAPIError – If the pipeline is not in SHUTDOWN state.
- deployment_desired_status() PipelineStatus [source]
Return the desired deployment status of the pipeline. This is the next state that the pipeline should transition to.
- deployment_error() Mapping[str, Any] [source]
Return the deployment error of the pipeline. Returns an empty string if there is no error.
- deployment_location() str [source]
Return the deployment location of the pipeline. Deployment location is the location where the pipeline can be reached at runtime (a TCP port number or a URI).
- deployment_status_since() datetime [source]
Return the timestamp when the current deployment status of the pipeline was set.
- execute(query: str)[source]
Executes an ad-hoc SQL query on the current pipeline, discarding its result. Unlike the query() method, which returns a generator for retrieving query results lazily, this method processes the query eagerly and fully before returning.
This method is suitable for SQL operations like INSERT and DELETE, where the user needs confirmation of successful query execution but does not require the query result. If the query fails, an exception will be raised.
- Important:
If you try to INSERT or DELETE data from a table while the pipeline is paused, the call will block until the pipeline is resumed.
- Parameters:
query – The SQL query to be executed.
- Raises:
FelderaAPIError – If the pipeline is not in a RUNNING state.
FelderaAPIError – If the query is invalid.
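Example (illustrative; assumes a running pipeline with a table t):

    pipeline.execute("INSERT INTO t VALUES (1), (2), (3)")   # blocks until the insert is acknowledged
    pipeline.execute("DELETE FROM t WHERE x = 2")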
- foreach_chunk(view_name: str, callback: Callable[[DataFrame, int], None])[source]
Run the given callback on each chunk of the output of the specified view.
You must call this method before starting the pipeline to operate on the entire output. You can call this method after the pipeline has started, but you will only get the output from that point onwards.
- Parameters:
view_name – The name of the view.
callback –
The callback to run on each chunk. The callback should take two arguments:
chunk -> The chunk as a pandas DataFrame.
seq_no -> The sequence number: a monotonically increasing integer starting from 0. Sequence numbers are unique for each chunk, but not necessarily contiguous.
The callback runs in a separate thread, so it must be thread-safe. It should also not block for long: backpressure is enabled by default, and a slow callback will stall the pipeline.
Note
The callback must be thread-safe as it will be run in a separate thread.
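Example (illustrative; the view name my_view is a placeholder):

    import pandas as pd

    def on_chunk(chunk: pd.DataFrame, seq_no: int) -> None:
        # Runs on a separate thread: keep it thread-safe and fast.
        print(f"chunk {seq_no}: {len(chunk)} rows")

    pipeline.foreach_chunk("my_view", on_chunk)   # register before start() to see all output
    pipeline.start()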
- static get(name: str, client: FelderaClient) Pipeline [source]
Get the pipeline if it exists.
- Parameters:
name – The name of the pipeline.
client – The FelderaClient instance.
- input_json(table_name: str, data: Dict | list, update_format: str = 'raw', force: bool = False)[source]
Push this JSON data to the specified table of the pipeline.
The pipeline must be in the RUNNING or PAUSED state to push data. An error is raised if the pipeline is in any other state.
- Parameters:
table_name – The name of the table to push data into.
data – The JSON encoded data to be pushed to the pipeline. The data should be in the form: {'col1': 'val1', 'col2': 'val2'} or [{'col1': 'val1', 'col2': 'val2'}, {'col1': 'val1', 'col2': 'val2'}]
update_format – The update format of the JSON data to be pushed to the pipeline. Must be one of: “raw”, “insert_delete”. <https://docs.feldera.com/formats/json#the-insertdelete-format>
force – True to push data even if the pipeline is paused. False by default.
- Raises:
ValueError – If the update format is invalid.
FelderaAPIError – If the pipeline is not in a valid state to push data.
RuntimeError – If the pipeline is paused and force is not set to True.
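Example (illustrative; the insert_delete envelope follows the format documented at the link above):

    # "raw" format: plain records.
    pipeline.input_json("t", {"x": 1})

    # "insert_delete" format: each record is wrapped in an insert or delete envelope.
    pipeline.input_json(
        "t",
        [{"insert": {"x": 2}}, {"delete": {"x": 1}}],
        update_format="insert_delete",
    )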
- input_pandas(table_name: str, df: DataFrame, force: bool = False)[source]
Push all rows in a pandas DataFrame to the pipeline.
The pipeline must be in the RUNNING or PAUSED state to push data. An error is raised if the pipeline is in any other state.
The dataframe must have the same columns as the table in the pipeline.
- Parameters:
table_name – The name of the table to insert data into.
df – The pandas DataFrame to be pushed to the pipeline.
force – True to push data even if the pipeline is paused. False by default.
- Raises:
ValueError – If the table does not exist in the pipeline.
RuntimeError – If the pipeline is not in a valid state to push data.
RuntimeError – If the pipeline is paused and force is not set to True.
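Example (illustrative):

    import pandas as pd

    df = pd.DataFrame({"x": [1, 2, 3]})   # column names must match the table's columns
    pipeline.input_pandas("t", df)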
- listen(view_name: str) OutputHandler [source]
Follow the change stream (i.e., the output) of the provided view. Returns an output handler to read the changes.
When the pipeline is shut down, these listeners are dropped.
You must call this method before starting the pipeline to get the entire output of the view. If this method is called once the pipeline has started, you will only get the output from that point onwards.
- Parameters:
view_name – The name of the view to listen to.
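Example (illustrative): register a listener before starting the pipeline, then read the accumulated output once processing has finished. The to_pandas() accessor on the returned OutputHandler is an assumption; consult feldera.output_handler for the actual reader API.

    handle = pipeline.listen("my_view")     # register before start() to capture all output
    pipeline.start()
    pipeline.wait_for_completion()
    df = handle.to_pandas()                 # assumed OutputHandler accessor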
- property name: str
Return the name of the pipeline.
- pause(timeout_s: float | None = None)[source]
Pause the pipeline.
The pipeline can only transition to the PAUSED state from the RUNNING state. If the pipeline is already paused, it will remain in the PAUSED state.
- Parameters:
timeout_s – The maximum time (in seconds) to wait for the pipeline to pause.
- Raises:
FelderaAPIError – If the pipeline is in FAILED state.
- pause_connector(table_name: str, connector_name: str)[source]
Pause the specified input connector.
Connectors allow Feldera to fetch data from a source or write data to a sink. This method allows users to PAUSE a specific INPUT connector. All connectors are RUNNING by default.
Refer to the connector documentation for more information: <https://docs.feldera.com/connectors/#input-connector-orchestration>
- Parameters:
table_name – The name of the table that the connector is attached to.
connector_name – The name of the connector to pause.
- Raises:
FelderaAPIError – If the connector is not found, or if the pipeline is not running.
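Example (illustrative; the table and connector names are placeholders):

    # Temporarily stop ingesting from the input connector "kafka_in" attached to table "t".
    pipeline.pause_connector("t", "kafka_in")
    # ... later, restart ingestion with resume_connector() (documented below).
    pipeline.resume_connector("t", "kafka_in")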
- program_binary_url() str [source]
Return the program binary URL of the pipeline. This is the URL where the compiled program binary can be downloaded from.
- program_info() Mapping[str, Any] [source]
Return the program info of the pipeline. This is the output returned by the SQL compiler, including: the list of input and output connectors, the generated Rust code for the pipeline, and the SQL program schema.
- program_status() ProgramStatus [source]
Return the program status of the pipeline.
Program status is the status of compilation of this SQL program. We first compile the SQL program to Rust code, and then compile the Rust code to a binary.
- program_status_since() datetime [source]
Return the timestamp when the current program status was set.
- query(query: str) Generator[Mapping[str, Any], None, None] [source]
Executes an ad-hoc SQL query on this pipeline and returns a generator that yields the rows of the result as Python dictionaries. For INSERT and DELETE queries, consider using execute() instead. All floating-point numbers are deserialized as Decimal objects to avoid precision loss.
- Note:
You can only SELECT from materialized tables and views.
- Important:
This method is lazy. It returns a generator and is not evaluated until you consume the result.
- Parameters:
query – The SQL query to be executed.
- Returns:
A generator that yields the rows of the result as Python dictionaries.
- Raises:
FelderaAPIError – If the pipeline is not in a RUNNING or PAUSED state.
FelderaAPIError – If querying a non-materialized table or view.
FelderaAPIError – If the query is invalid.
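Example (illustrative; my_view must be a materialized view):

    for row in pipeline.query("SELECT * FROM my_view"):
        print(row["x"])    # each row is a dict; floating-point columns arrive as Decimal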
- query_parquet(query: str, path: str)[source]
Executes an ad-hoc SQL query on this pipeline and saves the result to the specified path as a Parquet file. If the path does not end in the parquet extension, it is appended automatically.
- Note:
You can only SELECT from materialized tables and views.
- Parameters:
query – The SQL query to be executed.
path – The path of the parquet file.
- Raises:
FelderaAPIError – If the pipeline is not in a RUNNING or PAUSED state.
FelderaAPIError – If querying a non-materialized table or view.
FelderaAPIError – If the query is invalid.
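Example (illustrative):

    # Writes the result to my_view.parquet (the extension is appended if missing).
    pipeline.query_parquet("SELECT * FROM my_view", "my_view")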
- query_tabular(query: str) Generator[str, None, None] [source]
Executes an ad-hoc SQL query on this pipeline and returns a generator that yields the result as formatted, human-readable text.
- Note:
You can only SELECT from materialized tables and views.
- Important:
This method is lazy. It returns a generator and is not evaluated until you consume the result.
- Parameters:
query – The SQL query to be executed.
- Returns:
A generator that yields a string representing the query result in a human-readable, tabular format.
- Raises:
FelderaAPIError – If the pipeline is not in a RUNNING or PAUSED state.
FelderaAPIError – If querying a non-materialized table or view.
FelderaAPIError – If the query is invalid.
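Example (illustrative):

    for text in pipeline.query_tabular("SELECT COUNT(*) AS n FROM my_view"):
        print(text, end="")    # human-readable tabular output, yielded incrementally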
- refresh()[source]
Calls the backend to fetch the latest state of this pipeline.
- Raises:
FelderaConnectionError – If there is an issue connecting to the backend.
- restart(timeout_s: float | None = None)[source]
Restarts the pipeline.
This method SHUTS DOWN the pipeline regardless of its current state and then starts it again.
- Parameters:
timeout_s – The maximum time (in seconds) to wait for the pipeline to restart.
- resume(timeout_s: float | None = None)[source]
Resumes the pipeline from the PAUSED state. If the pipeline is already running, it will remain in the RUNNING state.
- Parameters:
timeout_s – The maximum time (in seconds) to wait for the pipeline to resume.
- Raises:
FelderaAPIError – If the pipeline is in FAILED state.
- resume_connector(table_name: str, connector_name: str)[source]
Resume the specified connector.
Connectors allow Feldera to fetch data from a source or write data to a sink. This method allows users to RESUME / START a specific INPUT connector. All connectors are RUNNING by default.
Refer to the connector documentation for more information: <https://docs.feldera.com/connectors/#input-connector-orchestration>
- Parameters:
table_name – The name of the table that the connector is attached to.
connector_name – The name of the connector to resume.
- Raises:
FelderaAPIError – If the connector is not found, or if the pipeline is not running.
- shutdown(timeout_s: float | None = None)[source]
Shut down the pipeline.
Shuts down the pipeline regardless of its current state.
- Parameters:
timeout_s – The maximum time (in seconds) to wait for the pipeline to shut down.
- start(timeout_s: float | None = None)[source]
Starts this pipeline.
The pipeline must be in the SHUTDOWN state to start. If the pipeline is in any other state, an error will be raised. If the pipeline is in the PAUSED state, use resume() instead. If the pipeline is in the FAILED state, it must be shut down before it can be started again.
- Parameters:
timeout_s – The maximum time (in seconds) to wait for the pipeline to start.
- Raises:
RuntimeError – If the pipeline is not in SHUTDOWN state.
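Example (illustrative): a typical lifecycle driven from Python, where client is a FelderaClient as constructed earlier.

    from feldera.pipeline import Pipeline

    pipeline = Pipeline.get("example", client)
    pipeline.start(timeout_s=60)    # only valid from the SHUTDOWN state
    pipeline.pause()
    pipeline.resume()
    pipeline.shutdown()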
- status() PipelineStatus [source]
Return the current status of the pipeline.
- wait_for_completion(shutdown: bool = False, timeout_s: float | None = None)[source]
Block until the pipeline has completed processing all input records.
This method blocks until (1) all input connectors attached to the pipeline have finished reading their input data sources and issued end-of-input notifications to the pipeline, and (2) all inputs received from these connectors have been fully processed and corresponding outputs have been sent out through the output connectors.
This method will block indefinitely if at least one of the input connectors attached to the pipeline is a streaming connector, such as Kafka, that does not issue the end-of-input notification.
- Parameters:
shutdown – If True, the pipeline will be shut down after completion. False by default.
timeout_s – Optional. The maximum time (in seconds) to wait for the pipeline to complete. The default is None, which means wait indefinitely.
- Raises:
RuntimeError – If the pipeline returns unknown metrics.
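Example (illustrative): suitable for a pipeline whose tables read only from finite sources (e.g. file or URL connectors declared in the SQL), so that every input connector eventually reports end-of-input.

    pipeline.start()
    # Blocks until all connectors hit end-of-input and all outputs are produced,
    # then shuts the pipeline down.
    pipeline.wait_for_completion(shutdown=True, timeout_s=600)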
- wait_for_idle(idle_interval_s: float = 5.0, timeout_s: float = 600.0, poll_interval_s: float = 0.2)[source]
Wait for the pipeline to become idle, then return.
Idle is defined as a sufficiently long interval in which the number of input and processed records reported by the pipeline do not change, and they equal each other (thus, all input records present at the pipeline have been processed).
- Parameters:
idle_interval_s – Idle interval duration (default is 5.0 seconds).
timeout_s – Timeout waiting for idle (default is 600.0 seconds).
poll_interval_s – Polling interval, should be set substantially smaller than the idle interval (default is 0.2 seconds).
- Raises:
ValueError – If idle interval is larger than timeout, poll interval is larger than timeout, or poll interval is larger than idle interval.
RuntimeError – If the metrics are missing or the timeout was reached.
feldera.enums module
- class feldera.enums.BuildMode(value)[source]
Bases:
Enum
An enumeration.
- CREATE = 1
- GET = 2
- GET_OR_CREATE = 3
- class feldera.enums.CompilationProfile(value)[source]
Bases:
Enum
The compilation profile to use when compiling the program.
- DEV = 'dev'
The development compilation profile.
- OPTIMIZED = 'optimized'
The optimized compilation profile, the default for this API.
- SERVER_DEFAULT = None
The compiler server default compilation profile.
- UNOPTIMIZED = 'unoptimized'
The unoptimized compilation profile.
- class feldera.enums.PipelineStatus(value)[source]
Bases:
Enum
Represents the state that this pipeline is currently in.
State transitions (summary of the state diagram): SHUTDOWN -(/deploy)-> PROVISIONING -> INITIALIZING -> PAUSED; PAUSED -(/start)-> RUNNING; RUNNING -(/pause)-> PAUSED; PAUSED or RUNNING -(/shutdown)-> SHUTTING_DOWN -> SHUTDOWN; a runtime error in any deployed state leads to FAILED, which returns to SHUTDOWN via /shutdown.
- FAILED = 8
The pipeline has failed. It remains in this state until the user acknowledges the failure by issuing a call to shut down the pipeline; it then transitions to the PipelineStatus.SHUTDOWN state.
- INITIALIZING = 4
The pipeline is initializing its internal state and connectors.
This state is part of the pipeline’s deployment process. In this state, the pipeline’s HTTP server is up and running, but its query engine and input and output connectors are still initializing.
The pipeline remains in this state until:
Initialization completes successfully; the pipeline transitions to the PipelineStatus.PAUSED state.
Initialization fails; transitions to the PipelineStatus.FAILED state.
A pre-defined timeout has passed. The runner performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.
The user cancels the pipeline by invoking the /shutdown endpoint. The manager performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.
- NOT_FOUND = 1
The pipeline has not been created yet.
- PAUSED = 5
The pipeline is fully initialized, but data processing has been paused.
The pipeline remains in this state until:
The user starts the pipeline by invoking the /start endpoint. The manager passes the request to the pipeline; transitions to the PipelineStatus.RUNNING state.
The user cancels the pipeline by invoking the /shutdown endpoint. The manager passes the shutdown request to the pipeline to perform a graceful shutdown; transitions to the PipelineStatus.SHUTTING_DOWN state.
An unexpected runtime error renders the pipeline PipelineStatus.FAILED.
- PROVISIONING = 3
The runner triggered a deployment of the pipeline and is waiting for the pipeline HTTP server to come up.
In this state, the runner provisions a runtime for the pipeline, starts the pipeline within this runtime and waits for it to start accepting HTTP requests.
The user is unable to communicate with the pipeline during this time. The pipeline remains in this state until:
Its HTTP server is up and running; the pipeline transitions to the PipelineStatus.INITIALIZING state.
A pre-defined timeout has passed. The runner performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.
The user cancels the pipeline by invoking the /shutdown endpoint. The manager performs forced shutdown of the pipeline, returns to the PipelineStatus.SHUTDOWN state.
- RUNNING = 6
The pipeline is processing data.
The pipeline remains in this state until:
The user pauses the pipeline by invoking the /pause endpoint. The manager passes the request to the pipeline; transitions to the PipelineStatus.PAUSED state.
The user cancels the pipeline by invoking the /shutdown endpoint. The runner passes the shutdown request to the pipeline to perform a graceful shutdown; transitions to the PipelineStatus.SHUTTING_DOWN state.
An unexpected runtime error renders the pipeline PipelineStatus.FAILED.
- SHUTDOWN = 2
Pipeline has not been started or has been shut down.
The pipeline remains in this state until the user triggers a deployment by invoking the /deploy endpoint.
- SHUTTING_DOWN = 7
Graceful shutdown in progress.
In this state, the pipeline finishes any ongoing data processing, produces final outputs, shuts down input/output connectors and terminates.
The pipeline remains in this state until:
Shutdown completes successfully; transitions to the PipelineStatus.SHUTDOWN state.
A pre-defined timeout has passed. The manager performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.
- UNAVAILABLE = 9
The pipeline was initialized at least once, but in the most recent status check it either could not be reached or reported that it is not yet ready.
feldera.output_handler module
- class feldera.output_handler.OutputHandler(client: FelderaClient, pipeline_name: str, view_name: str, queue: Queue | None)[source]
Bases:
object
feldera.runtime_config module
- class feldera.runtime_config.Resources(config: Mapping[str, Any] | None = None, cpu_cores_max: int | None = None, cpu_cores_min: int | None = None, memory_mb_max: int | None = None, memory_mb_min: int | None = None, storage_class: str | None = None, storage_mb_max: int | None = None)[source]
Bases:
object
Class used to specify the resource configuration for a pipeline.
- Parameters:
config – A dictionary containing all the configuration values.
cpu_cores_max – The maximum number of CPU cores to reserve for an instance of the pipeline.
cpu_cores_min – The minimum number of CPU cores to reserve for an instance of the pipeline.
memory_mb_max – The maximum memory in Megabytes to reserve for an instance of the pipeline.
memory_mb_min – The minimum memory in Megabytes to reserve for an instance of the pipeline.
storage_class – The storage class to use for the pipeline. The class determines storage performance such as IOPS and throughput.
storage_mb_max – The storage in Megabytes to reserve for an instance of the pipeline.
- class feldera.runtime_config.RuntimeConfig(workers: int | None = None, storage: bool | None = False, tracing: bool | None = False, tracing_endpoint_jaeger: str | None = '', cpu_profiler: bool = True, max_buffering_delay_usecs: int = 0, min_batch_size_records: int = 0, min_storage_bytes: int | None = None, clock_resolution_usecs: int | None = None, resources: Resources | None = None)[source]
Bases:
object
Runtime configuration class to define the configuration for a pipeline.
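Example (illustrative): cap resources and enable storage for a pipeline, then hand the configuration to PipelineBuilder (client and sql as in the earlier sketch).

    from feldera.runtime_config import Resources, RuntimeConfig

    resources = Resources(cpu_cores_max=4, memory_mb_max=4096, storage_mb_max=10240)
    runtime_config = RuntimeConfig(workers=8, storage=True, resources=resources)
    builder = PipelineBuilder(client, name="example", sql=sql, runtime_config=runtime_config)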
Subpackages
- feldera.rest package
- Submodules
- feldera.rest.config module
- feldera.rest.errors module
- feldera.rest.feldera_client module
FelderaClient
FelderaClient.checkpoint_pipeline()
FelderaClient.create_or_update_pipeline()
FelderaClient.create_pipeline()
FelderaClient.delete_pipeline()
FelderaClient.get_pipeline()
FelderaClient.get_pipeline_stats()
FelderaClient.get_runtime_config()
FelderaClient.listen_to_pipeline()
FelderaClient.localhost()
FelderaClient.patch_pipeline()
FelderaClient.pause_connector()
FelderaClient.pause_pipeline()
FelderaClient.pipelines()
FelderaClient.push_to_pipeline()
FelderaClient.query_as_json()
FelderaClient.query_as_parquet()
FelderaClient.query_as_text()
FelderaClient.resume_connector()
FelderaClient.shutdown_pipeline()
FelderaClient.start_pipeline()
- feldera.rest.pipeline module
- feldera.rest.sql_table module
- feldera.rest.sql_view module
- Module contents