
Pipes API

New to Pipes? Read the Concepts > Pipes docs.

All Tinybird API Endpoints require authentication using a Token with the appropriate scope.

Example: Create a Tinybird Pipe

Imagine you have an events Data Source named app_events and you want to expose an API Endpoint containing the aggregation of events per day.

First, create a Pipe. Let's call this one events_per_day:

creating a pipe from a datasource
curl \
-H "Authorization: Bearer <token>" \
-d "name=events_per_day&sql=select * from app_events" \
https://api.tinybird.co/v0/pipes

As shown above, creating the Pipe also defines its first transformation Node SQL query, which in this case selects all the data from the Data Source. A successful request returns a response like the following:

Pipe successfully created
{
  "id": "t_1b501ccf34764a69aaf886bac9a7a6d8",
  "name": "events_per_day",
  "published_version": "t_878a81f0e1344a2ca990c8c1aa7dd69f",
  "published_date": "2019-06-14 09:46:07.884855",
  "nodes": [
    {
      "name": "events_per_day_0",
      "sql": "select * from app_events",
      "id": "t_878a81f0e1344a2ca990c8c1aa7dd69f",
      "dependencies": [
        "app_events"
      ],
      "materialized": false,
      "created_at": "2019-06-14 09:46:07.884849"
    }
  ]
}

Now, you're going to add a new transformation Node to perform the aggregation per date, using the previously-created Node events_per_day_0. Let's use the "append" endpoint.

appending a new node to transform your data
curl -d "select toDate(timestamp) date, count() event_count from events_per_day_0 group by date" \
  -H "Authorization: Bearer <token>" \
  https://api.tinybird.co/v0/pipes/events_per_day/nodes

Whenever a transformation Node is added to a Pipe, you receive a response like the one below, containing the autogenerated name for the Node as well as metadata such as its dependencies on other transformation Nodes:

successful response
{
  "name": "events_per_day_1",
  "sql": "select toDate(timestamp) date, count() event_count from events_per_day_0 group by date",
  "id": "t_5164622050b244338ea2b79c19bd1e57",
  "dependencies": [
    "events_per_day_0"
  ],
  "materialized": false,
  "created_at": "2019-06-14 09:58:08.311812"
}

In order to make a Pipe publicly accessible through the API, you need to enable your desired transformation Node as an API Endpoint. Remember, Pipes only support one enabled Node each, so enabling one will make previously-enabled Nodes inaccessible.

Enabling a transformation node as an API endpoint
curl \
  -X PUT \
  -H "Authorization: Bearer <token>" \
  -d t_5164622050b244338ea2b79c19bd1e57 \
  https://api.tinybird.co/v0/pipes/events_per_day/endpoint

When you enable a transformation Node as an API Endpoint, the response contains the full Pipe description as JSON.

Successful response
{
  "id": "t_1b501ccf34764a69aaf886bac9a7a6d8",
  "name": "events_per_day",
  "published_version": "t_5164622050b244338ea2b79c19bd1e57",
  "published_date": "2019-06-14 10:17:01.201962",
  "nodes": [
    {
      "name": "events_per_day_0",
      "sql": "select * from app_events",
      "id": "t_878a81f0e1344a2ca990c8c1aa7dd69f",
      "dependencies": [
        "app_events"
      ],
      "materialized": false,
      "created_at": "2019-06-14 10:17:01.201784"
    },
    {
      "name": "events_per_day_1",
      "sql": "select toDate(date) date, count() event_count from events_per_day_0 group by date",
      "id": "t_5164622050b244338ea2b79c19bd1e57",
      "dependencies": [
        "events_per_day_0"
      ],
      "materialized": false,
      "created_at": "2019-06-14 10:17:01.201943"
    }
  ]
}

Once the Pipe is created and you've enabled a transformation Node as an Endpoint, you can easily integrate it into any 3rd party application.

Using the Query API, you can query the Pipe using its name, just like a regular table in a SQL query: SELECT * FROM events_per_day where date > yesterday()

querying a pipe using SQL
curl \
-H "Authorization: Bearer <token>" \
-d 'SELECT * FROM events_per_day where date > yesterday()' \
'https://api.tinybird.co/v0/sql'

If you don't need to run any special operations against your Pipe, you can just use the Pipe data Endpoint, accessible at https://api.tinybird.co/v0/pipes/events_per_day.json. It's an alias for SELECT * FROM events_per_day.
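
For example, a minimal sketch that fetches the Endpoint data directly, assuming a Token with read access to the Pipe:

getting data from the pipe data endpoint
curl \
-H "Authorization: Bearer <token>" \
"https://api.tinybird.co/v0/pipes/events_per_day.json"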

Pipes are updated in real time: as you insert new data into the app_events Data Source, every Pipe that uses it, such as events_per_day, is updated.

To share this Endpoint, you can create a READ token. You can add a new token for the Pipe with:

adding a READ token to an endpoint pipe
curl -X POST "https://api.tinybird.co/v0/tokens/?name=events_per_day_token&scope=PIPES:READ:events_per_day"
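
The response includes the value of the new token. A minimal sketch of reading the Endpoint as CSV with that READ token (the placeholder below stands for the returned value):

reading the endpoint with the READ token
curl \
  -H "Authorization: Bearer <events_per_day READ token>" \
  "https://api.tinybird.co/v0/pipes/events_per_day.csv"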

Pipes

GET /v0/pipes/?

Get a list of pipes in your account.

getting a list of your pipes
curl -X GET \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes"

The Pipes in the response are the ones accessible with the provided token, that is, those it has read permissions for.

Successful response
{
    "pipes": [{
        "id": "t_55c39255e6b548dd98cb6da4b7d62c1c",
        "name": "my_pipe",
        "description": "This is a description",
        "endpoint": "t_h65c788b42ce4095a4789c0d6b0156c3",
        "created_at": "2022-11-10 12:39:38.106380",
        "updated_at": "2022-11-29 13:33:40.850186",
        "parent": null,
        "nodes": [{
            "id": "t_h65c788b42ce4095a4789c0d6b0156c3",
            "name": "my_node",
            "sql": "SELECT col_a, col_b FROM my_data_source",
            "description": null,
            "materialized": null,
            "cluster": null,
            "tags": {},
            "created_at": "2022-11-10 12:39:47.852303",
            "updated_at": "2022-11-10 12:46:54.066133",
            "version": 0,
            "project": null,
            "result": null,
            "ignore_sql_errors": false
            "node_type": "default"
        }],
        "url": "https://api.tinybird.co/v0/pipes/my_pipe.json"
    }]
}
Request parameters

  • dependencies (boolean): The response will include the nodes' dependent Data Sources and Pipes. Default: false.

  • attrs (String): Comma-separated list of the pipe attributes to return in the response. Example: attrs=name,description

  • node_attrs (String): Comma-separated list of the node attributes to return in the response. Example: node_attrs=id,name

Pipe ids are immutable, so you can always refer to a Pipe by its id in your 3rd party applications; the reference keeps working even if the Pipe is renamed.

For lighter JSON responses, consider using the attrs and node_attrs params to return exactly the attributes you need to consume.
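
For example, a sketch of a trimmed listing that returns only a few attributes:

listing pipes returning only some attributes
curl -X GET \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes?attrs=name,description&node_attrs=id,name"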

POST /v0/pipes/?

Creates a new Pipe. There are three ways to create a Pipe:

Creating a Pipe providing full JSON
curl -X POST \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    -H "Content-Type: application/json" \
    "https://api.tinybird.co/v0/pipes" \
    -d '{
        "name":"pipe_name",
        "description": "my first pipe",
        "nodes": [
            {"sql": "select * from my_datasource limit 10", "name": "node_00", "description": "sampled data" },
            {"sql": "select count() from node_00", "name": "node_01" }
        ]
    }'

If you prefer to create a minimal Pipe and then append your transformation nodes, you can set the name and the first transformation node's SQL in your POST request:

Creating a pipe with a name and a SQL query
curl -X POST \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes?name=pipename&sql=select%20*%20from%20events"

Pipes can also be created as copies of other Pipes. Just use the from argument:

Creating a pipe from another pipe
curl -X POST \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes?name=pipename&from=src_pipe"

Bear in mind that if you use this method to overwrite an existing Pipe, the endpoint will only be maintained if the node name is the same.

POST /v0/pipes/(.+)/nodes

Appends a new node to a Pipe.

adding a new node to a pipe
curl \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    -d 'select * from node_0' "https://api.tinybird.co/v0/pipes/:name/nodes?name=node_name&description=explanation"

Appends a new node that creates a Materialized View

adding a Materialized View using a materialized node
curl \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    -d 'select id, sum(amount) as amount, date from my_datasource' "https://api.tinybird.co/v0/pipes/:name/nodes?name=node_name&description=explanation&type=materialized&datasource=new_datasource&engine=AggregatingMergeTree"
Request parameters

  • name (String): The referenceable name for the node.

  • description (String): Use it to store a more detailed explanation of the node.

  • token (String): Auth token. Ensure it has the PIPE:CREATE scope on it.

  • type (String): Optional. Available options are {standard (default), materialized, endpoint}. Use materialized to create a Materialized View from your new node.

  • datasource (String): Required with type=materialized. Specifies the name of the destination Data Source where the Materialized View schema is defined.

  • override_datasource (Boolean): Optional. Default: false. When the target Data Source of the Materialized View already exists in the Workspace, it will be overridden by the datasource specified in the request.

  • populate (Boolean): Optional. Default: false. When true, a job is triggered to populate the destination Data Source (see the sketch below this table).

  • populate_subset (Float): Optional. Populate with a subset percent of the data (limited to a maximum of 2M rows); this is useful to quickly test a materialized node with some data. The subset must be greater than 0 and lower than 0.1. A subset of 0.1 means 10 percent of the data in the source Data Source will be used to populate the Materialized View. Use it together with populate=true; it takes precedence over populate_condition.

  • populate_condition (String): Optional. Populate with a SQL condition to be applied to the trigger Data Source of the Materialized View. For instance, populate_condition='date == toYYYYMM(now())' populates with all the rows from the trigger Data Source whose date is in the current month. Use it together with populate=true. populate_condition is not taken into account if the populate_subset param is present. Including in the populate_condition any column present in the Data Source engine_sorting_key will make the populate job process less data.

  • unlink_on_populate_error (String): Optional. Default: false. If the populate job fails, the Materialized View is unlinked and new data won't be ingested into it.

  • engine (String): Optional. Engine for the destination Materialized View. Requires type=materialized.

  • engine_* (String): Optional. Engine parameters and options. Requires type=materialized and the engine parameter. Check Engine Parameters and Options for more details.

The SQL query for the transformation node must be sent in the request body, encoded in UTF-8.
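
To illustrate the populate options above, here is a hedged sketch (the Data Source, columns, and sorting key are placeholders) that creates a materialized node and triggers a population restricted by a SQL condition:

adding a materialized node and populating it with a condition
curl \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    -d 'select id, date, sum(amount) as amount from my_datasource group by id, date' \
    "https://api.tinybird.co/v0/pipes/:name/nodes?name=node_name&type=materialized&datasource=new_datasource&engine=AggregatingMergeTree&engine_sorting_key=id,date&populate=true&populate_condition=date%20%3E%3D%20toStartOfMonth(now())"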

Response codes

  • 200: No error

  • 400: Empty or wrong SQL or API param value

  • 403: Forbidden. Provided token doesn't have permissions to append a node to the pipe, it needs ADMIN or PIPE:CREATE

  • 404: Pipe not found

  • 409: There's another resource with the same name, names must be unique | The Materialized View already exists | override_datasource cannot be performed

DELETE /v0/pipes/(.+)/nodes/(.+)

Drops a particular transformation node in the Pipe. It doesn't remove related nodes, so this could leave the Pipe in an inconsistent state. For security reasons, enabled nodes can't be removed.

removing a node from a pipe
curl -X DELETE "https://api.tinybird.co/v0/pipes/:name/nodes/:node_id"
Response codes

  • 204: No error, removed node

  • 400: The node is published. Published nodes can't be removed

  • 403: Forbidden. Provided token doesn't have permissions to change the last node of the pipe, it needs ADMIN or IMPORT

  • 404: Pipe not found

PUT /v0/pipes/(.+)/nodes/(.+)

Changes a particular transformation node in the Pipe.

Editing a Pipe’s transformation node
curl -X PUT \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    -d 'select * from node_0' "https://api.tinybird.co/v0/pipes/:name/nodes/:node_id?name=new_name&description=updated_explanation"
Request parameters

  • name (String): new name for the node

  • description (String): new description for the node

  • token (String): Auth token. Ensure it has the PIPE:CREATE scope on it

Note that the desired SQL query should be sent in the request body, encoded in UTF-8.

Response codes

  • 200: No error

  • 400: Empty or wrong SQL

  • 403: Forbidden. Provided token doesn't have permissions to change the last node of the pipe, it needs ADMIN or PIPE:CREATE

  • 404: Pipe not found

  • 409: There's another resource with the same name, names must be unique

GET /v0/pipes/(.+)\.(json|csv|ndjson|parquet)

Returns the published node data in a particular format.

Getting data for a pipe
curl -X GET \
    -H "Authorization: Bearer <PIPE:READ token>" \
    "https://api.tinybird.co/v0/pipes/:name.format"
Request parameters

  • q (String): Optional. Query to execute; see the API Query endpoint.

  • output_format_json_quote_64bit_integers (int): Optional. Controls quoting of 64-bit or bigger integers (like UInt64 or Int128) when they are output in a JSON format. Such integers are enclosed in quotes by default; this behavior is compatible with most JavaScript implementations. Possible values: 0 (integers are output without quotes), 1 (integers are enclosed in quotes). Default: 0.

  • output_format_json_quote_denormals (int): Optional. Controls the representation of inf and nan in the output instead of null, e.g. inf when dividing by 0, and nan when there is no representation of a number in JavaScript. Possible values: 0 (disabled), 1 (enabled). Default: 0.

  • output_format_parquet_string_as_string (int): Optional. Use Parquet String type instead of Binary for String columns. Possible values: 0 (disabled), 1 (enabled). Default: 0.

The q parameter is a SQL query (see the Query API). When using this endpoint to query your Pipes, you can use the _ shortcut, which refers to your Pipe name.
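
For example, a sketch that reuses the events_per_day Endpoint from earlier and filters it through the _ shortcut:

querying a pipe endpoint with the _ shortcut
curl -G \
    -H "Authorization: Bearer <PIPE:READ token>" \
    --data-urlencode "q=SELECT * FROM _ WHERE date > yesterday() LIMIT 10" \
    "https://api.tinybird.co/v0/pipes/events_per_day.json"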

Available formats

  • csv: CSV with header

  • json: JSON including data, statistics and schema information

  • ndjson: One JSON object per row

  • parquet: A Parquet file. Some libraries might not properly process UInt* data types; if that's your case, cast those columns to signed integers with toInt* functions. String columns are exported as Binary; take that into account when reading the resulting Parquet file. Most libraries convert from Binary to String (e.g. Spark has this configuration param: spark.sql.parquet.binaryAsString).
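
As an illustration, a sketch that fetches the events_per_day Endpoint as NDJSON with 64-bit integers quoted:

getting NDJSON output with quoted 64-bit integers
curl -X GET \
    -H "Authorization: Bearer <PIPE:READ token>" \
    "https://api.tinybird.co/v0/pipes/events_per_day.ndjson?output_format_json_quote_64bit_integers=1"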

POST /v0/pipes/(.+)\.(json|csv|ndjson|parquet)

Returns the published node data in a particular format, passing the parameters in the request body. Use this endpoint when the query is too long to be passed as a query string parameter.

When using the post endpoint, there are no traces of the parameters in the pipe_stats_rt Data Source.

See the GET endpoint for more information.
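
A sketch of the POST variant, assuming the q parameter described for the GET endpoint is accepted as a form field in the request body:

querying a pipe endpoint with parameters in the body
curl -X POST \
    -H "Authorization: Bearer <PIPE:READ token>" \
    --data-urlencode "q=SELECT date, event_count FROM _ WHERE date > yesterday()" \
    "https://api.tinybird.co/v0/pipes/events_per_day.json"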

GET /v0/pipes/(.+\.pipe)

Get pipe information. Provided Auth Token must have read access to the Pipe.

Getting information about a particular pipe
curl -X GET \
    -H "Authorization: Bearer <PIPE:READ token>" \
    "https://api.tinybird.co/v0/pipes/:name"

pipe_id and pipe_name are two ways to refer to the pipe in SQL queries and API endpoints. The only difference is that pipe_id never changes, so it keeps working even if you change the pipe_name (which is the name used to display the pipe). In general you can use pipe_id or pipe_name interchangeably:

Successful response
{
    "id": "t_bd1c62b5e67142bd9bf9a7f113a2b6ea",
    "name": "events_pipe",
    "pipeline": {
        "nodes": [{
            "name": "events_ds_0"
            "sql": "select * from events_ds_log__raw",
            "materialized": false
        }, {
            "name": "events_ds",
            "sql": "select * from events_ds_0 where valid = 1",
            "materialized": false
        }]
    }
}

You can make your Pipe’s id more descriptive by prepending information such as t_my_events_table.bd1c62b5e67142bd9bf9a7f113a2b6ea
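
For instance, a sketch that requests the same Pipe by its pipe_id (taken from the response above) instead of its name:

getting a pipe by its id
curl -X GET \
    -H "Authorization: Bearer <PIPE:READ token>" \
    "https://api.tinybird.co/v0/pipes/t_bd1c62b5e67142bd9bf9a7f113a2b6ea"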

DELETE /v0/pipes/(.+\.pipe)

Drops a Pipe from your account. Auth token in use must have the DROP:NAME scope.

Dropping a pipe
curl -X DELETE \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes/:name"
PUT /v0/pipes/(.+\.pipe)

Changes a Pipe's metadata. If another Pipe with the same name already exists, an error is raised.

editing a pipe
curl -X PUT \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes/:name?name=new_name"
Request parameters

  • name (String): new name for the pipe

  • description (String): new Markdown description for the pipe

  • token (String): Auth token. Ensure it has the PIPE:CREATE scope on it
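
For example, a sketch that updates only the description (value URL-encoded):

updating a pipe's description
curl -X PUT \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes/:name?description=My%20updated%20description"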

GET /v0/pipes/(.+)

Get pipe information. Provided Auth Token must have read access to the Pipe.

Getting information about a particular pipe
curl -X GET \
    -H "Authorization: Bearer <PIPE:READ token>" \
    "https://api.tinybird.co/v0/pipes/:name"

pipe_id and pipe_name are two ways to refer to the pipe in SQL queries and API endpoints. The only difference is that pipe_id never changes, so it keeps working even if you change the pipe_name (which is the name used to display the pipe). In general you can use pipe_id or pipe_name interchangeably:

Successful response
{
    "id": "t_bd1c62b5e67142bd9bf9a7f113a2b6ea",
    "name": "events_pipe",
    "pipeline": {
        "nodes": [{
            "name": "events_ds_0"
            "sql": "select * from events_ds_log__raw",
            "materialized": false
        }, {
            "name": "events_ds",
            "sql": "select * from events_ds_0 where valid = 1",
            "materialized": false
        }]
    }
}

You can make your Pipe’s id more descriptive by prepending information such as t_my_events_table.bd1c62b5e67142bd9bf9a7f113a2b6ea

DELETE /v0/pipes/(.+)

Drops a Pipe from your account. Auth token in use must have the DROP:NAME scope.

Dropping a pipe
curl -X DELETE \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes/:name"
PUT /v0/pipes/(.+)

Changes a Pipe's metadata. If another Pipe with the same name already exists, an error is raised.

editing a pipe
curl -X PUT \
    -H "Authorization: Bearer <PIPE:CREATE token>" \
    "https://api.tinybird.co/v0/pipes/:name?name=new_name"
Request parameters

  • name (String): new name for the pipe

  • description (String): new Markdown description for the pipe

  • token (String): Auth token. Ensure it has the PIPE:CREATE scope on it

GET /v0/pipes/(.+)/nodes/(.+)/explain

Returns the explain plan and the whole debug query for the node.

If no node is specified, the most relevant node of the pipe will be used:

  • The endpoint node for endpoints.

  • The node that materializes for materialized views.

  • The copy node for Copy pipes.

  • The last node for general pipes.

It accepts query parameters to test the query with different values.

Getting the explain plan
curl -X GET \
    -H "Authorization: Bearer <PIPE:READ and DATASOURCE:READ token>" \
    "https://api.tinybird.co/v0/pipes/:pipe_name/nodes/:node_name/explain?department=Engineering"

or

curl -X GET \
    -H "Authorization: Bearer <PIPE:READ and DATASOURCE:READ token>" \
    "https://api.tinybird.co/v0/pipes/:pipe_name/explain?department=Engineering"
Successful response
{
    "debug_query": "SELECT country, department FROM (SELECT * FROM employees AS employees) AS an_endpoint_0 WHERE department = 'Engineering'",
    "query_explain": "Expression ((Projection + Before ORDER BY)) Filter ((WHERE + (Projection + Before ORDER BY))) ReadFromMergeTree (employees) Indexes: MinMax Condition: true Parts: 6/6 Granules: 6/6 Partition Condition: true Parts: 6/6 Granules: 6/6 PrimaryKey Condition: true Parts: 6/6 Granules: 6/6"
}
Request parameters

  • token (String): Auth token. Ensure it has the PIPE:READ and the proper DATASOURCE:READ scopes on it.

  • pipe_name (String): The name or id of the pipe.

  • node_name (String): Optional. The name or id of the node to explain. If not provided, the most relevant node of the pipe will be used.

  • params (String): Optional. The value of the parameters to test the query with. They are regular URL query parameters.

Response codes

  • 200: No error

  • 400: Could not get a node to run the explain plan

  • 403: Forbidden. Provided token doesn't have permissions to run the explain plan, it needs ADMIN or PIPE:READ and DATASOURCE:READ

  • 404: Pipe not found, Node not found

GET /v0/pipes/(.+)/explain

Returns the explain plan and the whole debug query for the node.

If no node is specified, the most relevant node of the pipe will be used:

  • The endpoint node for endpoints.

  • The node that materializes for materialized views.

  • The copy node for Copy pipes.

  • The last node for general pipes.

It accepts query parameters to test the query with different values.

Getting the explain plan
curl -X GET \
    -H "Authorization: Bearer <PIPE:READ and DATASOURCE:READ token>" \
    "https://api.tinybird.co/v0/pipes/:pipe_name/nodes/:node_name/explain?department=Engineering"

or

curl -X GET \
    -H "Authorization: Bearer <PIPE:READ and DATASOURCE:READ token>" \
    "https://api.tinybird.co/v0/pipes/:pipe_name/explain?department=Engineering"
Successful response
{
    "debug_query": "SELECT country, department FROM (SELECT * FROM employees AS employees) AS an_endpoint_0 WHERE department = 'Engineering'",
    "query_explain": "Expression ((Projection + Before ORDER BY)) Filter ((WHERE + (Projection + Before ORDER BY))) ReadFromMergeTree (employees) Indexes: MinMax Condition: true Parts: 6/6 Granules: 6/6 Partition Condition: true Parts: 6/6 Granules: 6/6 PrimaryKey Condition: true Parts: 6/6 Granules: 6/6"
}
Request parameters

  • token (String): Auth token. Ensure it has the PIPE:READ and the proper DATASOURCE:READ scopes on it.

  • pipe_name (String): The name or id of the pipe.

  • node_name (String): Optional. The name or id of the node to explain. If not provided, the most relevant node of the pipe will be used.

  • params (String): Optional. The value of the parameters to test the query with. They are regular URL query parameters.

Response codes

  • 200: No error

  • 400: Could not get a node to run the explain plan

  • 403: Forbidden. Provided token doesn't have permissions to run the explain plan, it needs ADMIN or PIPE:READ and DATASOURCE:READ

  • 404: Pipe not found, Node not found