Conceptual overview of Tinybird

Tinybird is built around a few core concepts that abstract away the underlying infrastructure and data storage. Understanding them helps you get the most out of the platform.

Develop locally, deploy remotely

Tinybird is a platform for building analytics features in your application. The typical workflow is as follows:

  1. You develop your data project locally using Tinybird Local and version control.
  2. You iterate on and test changes locally before deploying anything to Tinybird Cloud.
  3. When you're ready, you deploy your data project to Tinybird Cloud, which enables endpoints and sinks.
  4. In the Tinybird UI you can browse and query your data, check observability, and more.
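
A minimal sketch of this loop with the Tinybird CLI might look like the following. The commands are current at the time of writing, but flags and names can change between CLI versions:

    # Develop against a local Tinybird instance
    tb local start        # start the Tinybird Local container
    tb dev                # watch datafiles and rebuild the project on change

    # Validate before promoting anything
    tb build              # check that the project's datafiles build cleanly
    tb test run           # run the project's tests, if you have any

    # Deploy to Tinybird Cloud when ready
    tb --cloud deploy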

The following diagram illustrates the development and usage flow, from local development to deployment in Tinybird Cloud.

Your projects live in workspaces

A workspace contains the resources, data, and state of your Tinybird project. You can have as many workspaces as you need, share data sources between workspaces, and invite users with defined roles.

Each workspace contains at least a source of data, a processing resource, and an output destination. The following diagram illustrates the relationship between resources in a workspace.
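
On disk, a workspace maps to a project directory of datafiles. A minimal layout, with hypothetical file names, could look like this; the folder names follow common Tinybird project conventions and may differ in your setup:

    my_project/
    ├── datasources/
    │   └── events.datasource     # where ingested data lands
    ├── endpoints/
    │   └── top_sessions.pipe     # SQL published as an API endpoint
    └── copies/
        └── daily_rollup.pipe     # a scheduled copy of processed data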

Data enters through data sources

Data sources are how you ingest and store data in Tinybird. All your data lives inside a data source, and you write SQL queries against data sources. You can bulk upload or stream data into a data source.

You can bring data in from the following sources:

  • Files in your local file system.
  • Files in your cloud storage bucket.
  • Events sent to the Events API (see the example after this list).
  • Events streamed from Kafka (coming soon).
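
For example, streaming a single JSON event through the Events API is one HTTPS call. The sketch below assumes a data source named events and a token with append permissions, both hypothetical:

    curl \
      -X POST 'https://api.tinybird.co/v0/events?name=events' \
      -H "Authorization: Bearer $TB_TOKEN" \
      -d '{"timestamp": "2025-01-01 00:00:00", "session_id": "abc", "action": "page_view"}'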

Data sources are defined in .datasource files. See Datasource files for more information.
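
As a sketch, the .datasource file for the hypothetical events data source above could look like this; the schema, JSONPaths, and sorting key are illustrative:

    SCHEMA >
        `timestamp` DateTime `json:$.timestamp`,
        `session_id` String `json:$.session_id`,
        `action` String `json:$.action`

    ENGINE "MergeTree"
    ENGINE_SORTING_KEY "timestamp, session_id"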

Tinybird's managed connectors aren't available in v2 yet; support for them is coming soon.

Use pipes to process your data

Pipes are how you write SQL logic in Tinybird. A pipe is a chain of one or more SQL queries, called nodes, where each node can select from the results of the previous one. Pipes let you break a large query into smaller steps that are easier to read.

With pipes you can:

  • Publish API endpoints.
  • Create materialized views.
  • Create copies of data.

Pipes are defined in .pipe files. See Pipe files for more information.
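
As a sketch, a .pipe file that chains two nodes and publishes the result as an endpoint could look like the following. The node names and the action parameter are hypothetical, and the % marker enables Tinybird's query templating:

    NODE filtered_events
    SQL >
        %
        SELECT timestamp, session_id, action
        FROM events
        WHERE action = {{ String(action, 'page_view') }}

    NODE top_sessions
    SQL >
        SELECT session_id, count() AS hits
        FROM filtered_events
        GROUP BY session_id
        ORDER BY hits DESC
        LIMIT 10

    TYPE endpoint

Each node queries the output of the one before it, which is what keeps the final query readable.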

Outputs are where your data goes

When your processed data is ready to be consumed by your application, you can publish it through API endpoints or export it to external systems through sinks.

The following outputs are available:

  • API endpoints, which you can call using custom parameters from any application (see the example after this list).
  • Sinks, which export data to external systems on a scheduled or on-demand basis.
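
Once deployed, an endpoint is a plain HTTPS API. Calling the hypothetical top_sessions endpoint from the previous section, with its action parameter, could look like this:

    curl "https://api.tinybird.co/v0/pipes/top_sessions.json?action=purchase" \
      -H "Authorization: Bearer $TB_TOKEN"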

Endpoints and sinks are defined in .pipe files. See Pipe files for more information.
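
For sinks, the same .pipe file format applies with a sink node type. The sketch below is an assumption based on Tinybird's sink pipe conventions; the EXPORT_* settings and the S3 connection name are illustrative, so check the current reference before relying on them:

    NODE daily_export
    SQL >
        SELECT * FROM events

    TYPE sink
    EXPORT_CONNECTION_NAME "my_s3"          # assumed connection, created separately
    EXPORT_BUCKET_URI "s3://my-bucket/exports"
    EXPORT_SCHEDULE "0 6 * * *"             # daily at 06:00 UTC
    EXPORT_FORMAT "csv"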
