Data Source descriptions and beta testing of Parquet ingestion

Data Source descriptions

In a large Workspace that contains many Data Sources, you may want more information than just the name of each Data Source. Documentation matters. Now you can add a description to a Data Source, just as you already can with Pipes, Nodes, and API Endpoints.

This feature is available through both the UI and the CLI. Any new descriptions will propagate to shared Data Sources.
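As a sketch of how this might look in a datafile (the Data Source name, schema, and description below are hypothetical, and assume the .datasource syntax mirrors the DESCRIPTION instruction already used in Pipes):

```
DESCRIPTION >
    Raw page view events from the web tracker. One row per event.

SCHEMA >
    `timestamp` DateTime,
    `session_id` String,
    `url` String
```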

Early beta support for Parquet files

At Tinybird we aim to capture and transform large amounts of data, whatever its origin or format. In addition to CSV and NDJSON, we're working on accepting files in Parquet format. Parquet is an open-source, column-oriented data file format designed for efficient data storage and retrieval, and it is commonly used as an interchange format between data tools.

Our team is now testing ingestion of Parquet data into Tinybird. After further testing, this new ingestion format will be documented in our docs. If you'd like to join the beta, reach out to us on Slack.

CLI updates!

tb push --subset - We've added the tb push --subset option, to be used with --populate, so you can populate using only a subset of the data, between 0% and 10%. Now you can quickly validate a Materialized View with just a fraction of your total dataset: you can check that everything is working with a single month's worth of data even if you have several years' worth.
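For example, a quick validation run might look like this (the pipe name is hypothetical, and this assumes --subset takes a decimal fraction, so 0.1 means 10% of the data):

```shell
# Push the Pipe and populate its Materialized View
# using only 10% of the source data (hypothetical pipe name).
tb push pipes/sales_by_month.pipe --populate --subset 0.1
```

Once the sampled population looks correct, you can re-run the populate over the full dataset.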

Data Source description - We've added the option to add a description to a Data Source, as you already could for Pipes, improving the documentation of your data project.

Endpoints from materialized Data Sources - We've fixed an issue so that Nodes whose type is materialized can no longer be published as API Endpoints. Logically, an API Endpoint should depend on the target Data Source of the materialized Node, not on the Node itself.

Check out the latest command-line updates in the changelog.