feldera.rest package
Submodules
feldera.rest.config module
- class feldera.rest.config.Config(url: str, api_key: str | None = None, version: str | None = None, timeout: float | None = None)[source]
Bases: object
FelderaClient’s credentials and configuration parameters.
feldera.rest.errors module
- exception feldera.rest.errors.FelderaAPIError(error: str, request: Response)[source]
Bases: FelderaError
Error returned by the Feldera API.
- exception feldera.rest.errors.FelderaCommunicationError(err: str)[source]
Bases: FelderaError
Error raised when the connection to Feldera fails.
- exception feldera.rest.errors.FelderaError(message: str)[source]
Bases: Exception
Generic base class for Feldera error handling.
- exception feldera.rest.errors.FelderaTimeoutError(err: str)[source]
Bases: FelderaError
Error raised when a Feldera operation takes longer than expected.
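The hierarchy above can be sketched with stand-in classes. This is an illustration of the documented inheritance and the resulting catch order, not the actual definitions in feldera.rest.errors (the real classes carry extra fields, e.g. the HTTP response on FelderaAPIError):

```python
# Stand-in classes mirroring the documented hierarchy.
class FelderaError(Exception):
    """Generic base class for Feldera errors."""

class FelderaAPIError(FelderaError):
    """Error returned by the Feldera API."""

class FelderaCommunicationError(FelderaError):
    """Error raised when the connection to Feldera fails."""

class FelderaTimeoutError(FelderaError):
    """Error raised when an operation takes longer than expected."""

# Catch the most specific class first; FelderaError catches all of them.
try:
    raise FelderaTimeoutError("pipeline start timed out")
except FelderaAPIError:
    handled = "api"
except FelderaError:
    handled = "generic"

print(handled)  # -> generic
```

Because every error subclasses FelderaError, a single `except FelderaError:` is enough when you do not need to distinguish the failure modes.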
feldera.rest.feldera_client module
- class feldera.rest.feldera_client.FelderaClient(url: str, api_key: str | None = None, timeout: int | None = None)[source]
Bases: object
A client for the Feldera HTTP API
A client instance is needed for every Feldera API method to know the location of Feldera and its permissions.
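A minimal usage sketch of the call pattern, assuming the feldera package is installed and a Feldera instance is reachable at the given URL. The pipeline name is hypothetical and the function is illustrative, not executed here:

```python
def run_pipeline_example(url, api_key=None):
    # Illustrative only: requires the feldera package and a running
    # Feldera instance at `url`.
    from feldera.rest.feldera_client import FelderaClient

    client = FelderaClient(url, api_key=api_key, timeout=60)
    # "my-pipeline" is a hypothetical pipeline name.
    pipeline = client.get_pipeline("my-pipeline")
    stats = client.get_pipeline_stats("my-pipeline")
    return pipeline, stats
```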
- checkpoint_pipeline(pipeline_name: str)[source]
Checkpoint a fault-tolerant pipeline
- Parameters:
pipeline_name – The name of the pipeline to checkpoint
- create_or_update_pipeline(pipeline: Pipeline) → Pipeline[source]
Create the pipeline if it doesn’t exist, or update it if it does, and wait for it to compile
- create_pipeline(pipeline: Pipeline) → Pipeline[source]
Create a pipeline if it doesn’t exist and wait for it to compile
- Parameters:
pipeline – The pipeline to create
- delete_pipeline(name: str)[source]
Delete a pipeline by name
- Parameters:
name – The name of the pipeline
- get_pipeline(pipeline_name) → Pipeline[source]
Get a pipeline by name
- Parameters:
pipeline_name – The name of the pipeline
- get_pipeline_stats(name: str) → dict[source]
Get the pipeline metrics and performance counters
- Parameters:
name – The name of the pipeline
- get_runtime_config(pipeline_name) → dict[source]
Get the runtime config of a pipeline by name
- Parameters:
pipeline_name – The name of the pipeline
- listen_to_pipeline(pipeline_name: str, table_name: str, format: str, backpressure: bool = True, array: bool = False, timeout: float | None = None)[source]
Listen for updates to a table or view of the pipeline, yielding chunks of data
- Parameters:
pipeline_name – The name of the pipeline
table_name – The name of the table to listen to
format – The format of the data, either “json” or “csv”
backpressure – When the flag is True (the default), this method waits for the consumer to receive each chunk and blocks the pipeline if the consumer cannot keep up. When this flag is False, the pipeline drops data chunks if the consumer is not keeping up with its output. This prevents a slow consumer from slowing down the entire pipeline.
array – When True, group updates in this stream into JSON arrays; used in conjunction with the “json” format. The default value is False
timeout – The amount of time in seconds to listen to the stream for
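A consumer loop over the yielded chunks might look like the sketch below. The generator here is a stand-in for the real stream, and the chunk shape shown (with format="json" and array=True, each chunk groups several updates into one JSON array) is an assumption for illustration:

```python
import json

def fake_listen(chunks):
    # Stand-in for listen_to_pipeline(...): yields raw chunk strings.
    for chunk in chunks:
        yield chunk

# Hypothetical chunk contents: one JSON array of updates per chunk.
raw_chunks = [
    '[{"insert": {"id": 1}}, {"insert": {"id": 2}}]',
    '[{"delete": {"id": 1}}]',
]

updates = []
for chunk in fake_listen(raw_chunks):
    updates.extend(json.loads(chunk))

print(len(updates))  # -> 3
```

With backpressure=True the real stream blocks the pipeline while the loop body runs; with backpressure=False slow iterations instead cause chunks to be dropped.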
- patch_pipeline(name: str, sql: str)[source]
Incrementally update the pipeline SQL
- Parameters:
name – The name of the pipeline
sql – The SQL snippet. Replaces the existing SQL code with this one.
- pause_pipeline(pipeline_name: str)[source]
Pause a pipeline
- Parameters:
pipeline_name – The name of the pipeline to pause
- push_to_pipeline(pipeline_name: str, table_name: str, format: str, data: list[list | str | dict], array: bool = False, force: bool = False, update_format: str = 'raw', json_flavor: str | None = None, serialize: bool = True)[source]
Insert data into a pipeline
- Parameters:
pipeline_name – The name of the pipeline
table_name – The name of the table
format – The format of the data, either “json” or “csv”
array – True if updates in this stream are packed into JSON arrays, used in conjunction with the “json” format
force – If True, the data will be inserted even if the pipeline is paused
update_format – JSON data change event format, used in conjunction with the “json” format. The default value is “raw”; other supported formats: “insert_delete”, “weighted”, “debezium”, “snowflake”
json_flavor – JSON encoding used for individual table records, the default value is “default”, other supported encodings: “debezium_mysql”, “snowflake”, “kafka_connect_json_converter”, “pandas”
data – The data to insert
serialize – If True, the data will be serialized to JSON. True by default
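For the “insert_delete” update format, each element of `data` is a change event wrapping the record. A sketch of building such a payload; the table and column names are hypothetical, and the event shape follows the insert/delete convention described above:

```python
import json

# Hypothetical change events for update_format="insert_delete":
# each element wraps a record in an "insert" or "delete" key.
data = [
    {"insert": {"id": 1, "name": "alice"}},
    {"insert": {"id": 2, "name": "bob"}},
    {"delete": {"id": 1, "name": "alice"}},
]

# With serialize=True (the default) the client serializes to JSON itself;
# with serialize=False you would pass pre-encoded strings like these:
encoded = [json.dumps(event) for event in data]
print(encoded[0])  # -> {"insert": {"id": 1, "name": "alice"}}

# Hypothetical call (requires a running Feldera instance):
# client.push_to_pipeline("my-pipeline", "users", "json", data,
#                         update_format="insert_delete")
```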
- query_as_json(pipeline_name: str, query: str) → Generator[dict, None, None][source]
Executes an ad-hoc query on the specified pipeline and returns the result as a generator that yields rows of the query as Python dictionaries.
- Parameters:
pipeline_name – The name of the pipeline to query.
query – The SQL query to be executed.
- Returns:
A generator that yields each row of the result as a Python dictionary, deserialized from JSON.
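Conceptually, the generator deserializes one JSON document per result row. A self-contained sketch of that yield pattern, not the client’s actual implementation:

```python
import json

def rows_from_json_lines(lines):
    # Sketch: yield one dict per non-empty JSON line, as query_as_json
    # is documented to do for its result rows.
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

result = list(rows_from_json_lines(['{"id": 1}', "", '{"id": 2}']))
print(result)  # -> [{'id': 1}, {'id': 2}]
```

Because the result is a generator, rows can be consumed incrementally without materializing the full result set in memory.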
- query_as_parquet(pipeline_name: str, query: str, path: str)[source]
Executes an ad-hoc query on the specified pipeline and saves the result to a parquet file. If the file extension isn’t .parquet, it is appended to the path automatically.
- Parameters:
pipeline_name – The name of the pipeline to query.
query – The SQL query to be executed.
path – The path including the file name to save the resulting parquet file in.
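The extension handling described above can be sketched with pathlib. This illustrates the documented behaviour, not the client’s actual code:

```python
from pathlib import Path

def ensure_parquet_suffix(path):
    # Append ".parquet" when the given path does not already end in it,
    # mirroring the documented behaviour of query_as_parquet.
    p = Path(path)
    if p.suffix != ".parquet":
        p = p.with_name(p.name + ".parquet")
    return str(p)

print(ensure_parquet_suffix("result"))          # -> result.parquet
print(ensure_parquet_suffix("result.parquet"))  # -> result.parquet
```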
- query_as_text(pipeline_name: str, query: str) → Generator[str, None, None][source]
Executes an ad-hoc query on the specified pipeline and returns a generator that yields lines of the result table.
- Parameters:
pipeline_name – The name of the pipeline to query.
query – The SQL query to be executed.
- Returns:
A generator yielding the query result in tabular format, one line at a time.
feldera.rest.pipeline module
feldera.rest.sql_table module
feldera.rest.sql_view module
Module contents
This is the lower-level REST client for Feldera, a thin wrapper around the Feldera REST API.
It is recommended to use the higher-level abstractions in the feldera package instead of using the REST client directly.