CLI command reference¶
The following list shows all available commands in the Tinybird command-line interface, their options, and their arguments.
For examples on how to use them, see the Quick start guide, Data projects, and Common use cases.
tb auth¶
Configure your Tinybird authentication.
auth commands
Command | Description |
---|---|
info OPTIONS | Get information about the authentication that is currently being used |
ls OPTIONS | List available regions to authenticate |
use OPTIONS REGION_NAME_OR_HOST_OR_ID | Switch to a different region. You can pass the region name, the region host url, or the region index after listing available regions with tb auth ls |
The previous commands accept the following options:
--token INTEGER
: Use auth Token, defaults to the TB_TOKEN envvar, then to the .tinyb file
--host TEXT
: Set custom host if it's different than https://api.tinybird.co. Check this page for the available list of regions
--region TEXT
: Set region. Run 'tb auth ls' to show available regions
--connector [bigquery|snowflake]
: Set credentials for one of the supported connectors
--interactive, -i
: Show available regions and select where to authenticate to
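As an illustrative flow (the region name below is an example; pick one from tb auth ls):

```shell
# List the regions you can authenticate to
tb auth ls

# Authenticate interactively, choosing a region from the list
tb auth -i

# Or switch to a region directly by name, host URL, or index
tb auth use us-east

# Confirm which authentication is currently in use
tb auth info
```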
tb branch¶
Manage your Workspace branches.
Branch commands
Command | Description | Options |
---|---|---|
create BRANCH_NAME | Create a new Branch in the current 'main' Workspace | |
current | Show the Branch you're currently authenticated to | |
data | Perform a data branch operation to bring data into the current Branch | |
datasource copy DATA_SOURCE_NAME | Copy a Data Source from Main | |
ls | List all the Branches available | --sort / --no-sort : Sort the list of Branches by name. Disabled by default |
regression-tests | Regression test commands | |
regression-tests coverage PIPE_NAME | Run regression tests using coverage requests for Branch vs. Main Workspace. It creates a regression-tests job. The argument supports regular expressions; '.*' is used if no Pipe name is provided | |
regression-tests last PIPE_NAME | Run regression tests using the last requests for Branch vs. Main Workspace. It creates a regression-tests job. The argument supports regular expressions; '.*' is used if no Pipe name is provided | |
regression-tests manual PIPE_NAME | Run regression tests using manual requests for Branch vs. Main Workspace. It creates a regression-tests job. The argument supports regular expressions; '.*' is used if no Pipe name is provided | |
rm [BRANCH_NAME_OR_ID] | Remove a Branch from the Workspace (not Main). It can't be recovered | --yes : Do not ask for confirmation |
use [BRANCH_NAME_OR_ID] | Switch to another Branch | |
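For instance, a typical Branch workflow might look like this (the Branch, Data Source, and Pipe name pattern are illustrative):

```shell
# Create and switch to a new Branch
tb branch create my_feature

# Bring data from Main into the Branch
tb branch datasource copy my_datasource

# Run coverage regression tests for Pipes matching a pattern
tb branch regression-tests coverage 'top_.*'

# Clean up when done
tb branch rm my_feature --yes
```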
tb check¶
Check file syntax.
It accepts a single option, --debug, which prints the internal representation.
tb datasource¶
Data Sources commands.
Command | Description | Options |
---|---|---|
analyze OPTIONS URL_OR_FILE | Analyze a URL or a file before creating a new Data Source | |
append OPTIONS DATASOURCE_NAME URL | Append data to an existing Data Source from a URL, local file, or connector | |
connect OPTIONS CONNECTION DATASOURCE_NAME | Create a new Data Source from an existing connection | |
copy OPTIONS DATASOURCE_NAME | Copy a Data Source from Main | |
delete OPTIONS DATASOURCE_NAME | Delete rows from a Data Source | |
generate OPTIONS FILENAMES | Generate a Data Source file based on a sample CSV file from local disk or URL | --force : Override existing files |
ls OPTIONS | List Data Sources | |
replace OPTIONS DATASOURCE_NAME URL | Replace the data in a Data Source from a URL, local file, or connector | |
rm OPTIONS DATASOURCE_NAME | Delete a Data Source | --yes : Do not ask for confirmation |
share OPTIONS DATASOURCE_NAME WORKSPACE_NAME_OR_ID | Share a Data Source | |
sync OPTIONS DATASOURCE_NAME | Sync from the connector defined in the .datasource file | |
truncate OPTIONS DATASOURCE_NAME | Truncate a Data Source | |
unshare OPTIONS DATASOURCE_NAME WORKSPACE_NAME_OR_ID | Unshare a Data Source | |
scheduling resume DATASOURCE_NAME | Resume the scheduling of a Data Source | |
scheduling pause DATASOURCE_NAME | Pause the scheduling of a Data Source | |
scheduling status DATASOURCE_NAME | Get the scheduling status of a Data Source (paused or running) | |
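As a sketch of a common ingestion flow (the URL and Data Source name are illustrative):

```shell
# Inspect a file before creating a Data Source from it
tb datasource analyze https://example.com/events.csv

# Generate a .datasource file from a local sample
tb datasource generate events.csv

# Append more data to the existing Data Source later
tb datasource append events https://example.com/events_2024.csv
```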
tb dependencies¶
Print all Data Sources dependencies.
Its options:
--no-deps
: Print only Data Sources with no Pipes using them
--match TEXT
: Retrieve any resource matching the pattern
--pipe TEXT
: Retrieve any resource used by the Pipe
--datasource TEXT
: Retrieve resources depending on this Data Source
--check-for-partial-replace
: Retrieve dependent Data Sources that will have their data replaced if a partial replace is executed in the selected Data Source
--recursive
: Calculate recursive dependencies
tb deploy¶
Deploys to Tinybird, pushing the resources changed from the previous release using Git.
These are the options available for the deploy command:
--dry-run
: Run the command with static checks, without creating resources on the Tinybird account or causing any other side effect. Doesn't check for runtime errors.
-f, --force
: Override Pipes when they already exist.
--override-datasource
: When pushing a Pipe with a materialized Node, if the target Data Source exists, try to override it.
--populate
: Populate materialized Nodes when pushing them.
--subset FLOAT
: Populate with a subset percent of the data (limited to a maximum of 2M rows). This is useful to quickly test a materialized Node with some data. The subset must be greater than 0 and lower than 0.1. A subset of 0.1 means 10% of the data in the source Data Source will be used to populate the Materialized View. Use it together with --populate; it takes precedence over --sql-condition.
--sql-condition TEXT
: Populate with a SQL condition applied to the trigger Data Source of the Materialized View. For instance, --sql-condition='date == toYYYYMM(now())' populates with all the rows from the trigger Data Source whose date is in the current month. Use it together with --populate. --sql-condition is not taken into account if the --subset param is present. Including in the sql_condition any column present in the Data Source engine_sorting_key makes the populate job process less data.
--unlink-on-populate-error
: If the populate job fails, the Materialized View is unlinked and new data won't be ingested there. The first time a populate job fails, the Materialized View is always unlinked.
--wait
: To be used along with --populate. Waits for populate jobs to finish, showing a progress bar. Disabled by default.
--yes
: Do not ask for confirmation.
--workspace_map TEXT..., --workspace TEXT...
: Add a Workspace path to the list of external Workspaces. Usage: --workspace name path/to/folder.
--timeout FLOAT
: Timeout to use for the populate job.
--user_token TOKEN
: The user Token, required for sharing a Data Source that contains the SHARED_WITH entry.
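For example, a cautious deployment might validate first and then populate materialized Nodes while waiting for the jobs to finish:

```shell
# Static checks only: no resources are created
tb deploy --dry-run

# Deploy, populating materialized Nodes and waiting for populate jobs
tb deploy --populate --wait --yes
```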
tb diff¶
Diffs local datafiles against the corresponding remote files in the Workspace.
It works like a regular diff command and is useful to know whether the remote resources have changed. Some caveats:
- Resources in the Workspace might mismatch due to having slightly different SQL syntax, for instance a parenthesis mismatch, INTERVAL expressions, or changes in the schema definitions.
- If you didn't specify an ENGINE_PARTITION_KEY and ENGINE_SORTING_KEY, resources in the Workspace might have default ones.
The recommendation in these cases is to use tb pull to keep your local files in sync.
Remote files are downloaded and stored locally in a .diff_tmp directory; if working with Git, you can add it to .gitignore.
The options for this command:
--fmt / --no-fmt
: Format files before doing the diff. Default is True, so both files match the format
--no-color
: Don't colorize the diff
--no-verbose
: List the resources changed, not the content of the diff
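For example, to get a quick summary of which resources differ without the full diff body:

```shell
# List only the names of changed resources
tb diff --no-verbose

# Full diff without formatting the files first
tb diff --no-fmt
```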
tb fmt¶
Formats a .datasource, .pipe or .incl file.
The implementation is based on the ClickHouse® dialect of shandy-sqlfmt, adapted to Tinybird datafiles.
These are the options available for the fmt command:
--line-length INTEGER
: The maximum number of characters per line in the Node SQL; lines are split based on the SQL syntax and the number of characters passed as a parameter
--dry-run
: Don't ask to override the local file
--yes
: Do not ask for confirmation to overwrite the local file
--diff
: Format the local file, print the diff, and exit 1 if different, 0 if equal
This command removes comments starting with # from the file, so use DESCRIPTION or a comment block instead:
Example comment block
%
{% comment this is a comment and fmt will keep it %}
SELECT
    {% comment this is another comment and fmt will keep it %}
    count() c
FROM stock_prices_1m
You can add tb fmt to your git pre-commit hook to keep your files properly formatted. If the SQL formatting results are not what you expect, you can disable it just for the blocks that need it. Read how to disable fmt.
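For example, in a CI check or pre-commit hook (the file path is illustrative):

```shell
# Exit 1 if the file isn't formatted, without touching it
tb fmt --diff pipes/top_products.pipe

# Format the file in place without a confirmation prompt
tb fmt --yes pipes/top_products.pipe
```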
tb init¶
Initializes folder layout.
It comes with these options:
--generate-datasources
: Generate Data Sources based on the CSV, NDJSON, and Parquet files in this folder
--folder DIRECTORY
: Folder where datafiles will be placed
-f, --force
: Override existing files
-ir, --ignore-remote
: Ignore remote files not present in the local data project on tb init --git
--git
: Init the Workspace with Git commits
--override-commit TEXT
: Manually override the reference commit of your Workspace. This is useful if a commit is not recognized in your Git log, such as after a force push (git push -f)
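As an illustrative example (the folder name is hypothetical):

```shell
# Bootstrap a data project under ./analytics, generating .datasource
# files from any CSV, NDJSON, or Parquet files found there
tb init --folder analytics --generate-datasources
```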
tb materialize¶
Analyzes the node_name SQL query to generate the .datasource and .pipe files needed to push a new Materialized View.
This command guides you through generating the Materialized View with the name TARGET_DATASOURCE; the only requirement is having a valid Pipe datafile locally. Use tb pull to download resources from your Workspace when needed.
It allows these options:
--push-deps
: Push dependencies, disabled by default
--workspace TEXT...
: Add a Workspace path to the list of external Workspaces. Usage: --workspace name path/to/folder
--no-versions
: When set, resource dependency versions are not used; dependencies are pushed as-is
--verbose
: Print more logs
--unlink-on-populate-error
: If the populate job fails, the Materialized View is unlinked and new data won't be ingested in it. The first time a populate job fails, the Materialized View is always unlinked
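A minimal sketch of the flow, assuming a Pipe datafile named pipes/sales_by_day.pipe exists locally (all names are illustrative):

```shell
# Make sure the Pipe datafile is available locally
tb pull --match sales_by_day

# Generate the .datasource and .pipe files for the Materialized View
tb materialize pipes/sales_by_day.pipe
```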
tb pipe¶
Use the following commands to manage Pipes.
Command | Description | Options |
---|---|---|
append OPTIONS PIPE_NAME_OR_UID SQL | Append a Node to a Pipe | |
copy pause OPTIONS PIPE_NAME_OR_UID | Pause a running Copy Pipe | |
copy resume OPTIONS PIPE_NAME_OR_UID | Resume a paused Copy Pipe | |
copy run OPTIONS PIPE_NAME_OR_UID | Run an on-demand copy job | |
data OPTIONS PIPE_NAME_OR_UID PARAMETERS | Print the data returned by a Pipe. You can pass query parameters to the command | |
generate OPTIONS NAME QUERY | Generate a Pipe file based on a SQL query. Example: tb pipe generate my_pipe 'select * from existing_datasource' | --force : Override existing files |
ls OPTIONS | List Pipes | |
populate OPTIONS PIPE_NAME | Populate the result of a Materialized Node into the target Materialized View | |
publish OPTIONS PIPE_NAME_OR_ID NODE_UID | Change the published Node of a Pipe | |
regression-test OPTIONS FILENAMES | Run regression tests using the last requests | |
rm OPTIONS PIPE_NAME_OR_ID | Delete a Pipe. PIPE_NAME_OR_ID can be either a Pipe name or id in the Workspace or a local path to a .pipe file | --yes : Do not ask for confirmation |
set_endpoint OPTIONS PIPE_NAME_OR_ID NODE_UID | Same as 'publish': change the published Node of a Pipe | |
sink run OPTIONS PIPE_NAME_OR_UID | Run an on-demand sink job | |
stats OPTIONS PIPES | Print Pipe stats for the last 7 days | --format [json] : Force the output format. To parse the output, use the tb --no-version-warning pipe stats option |
token_read OPTIONS PIPE_NAME | Retrieve a Token to read a Pipe | |
unlink OPTIONS PIPE_NAME NODE_UID | Unlink the output of a Pipe, whatever its type: Materialized View, Copy Pipe, or Sink | |
unpublish OPTIONS PIPE_NAME NODE_UID | Unpublish the endpoint of a Pipe | |
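For instance, to scaffold a Pipe from a query and inspect its output (the names are illustrative):

```shell
# Generate a Pipe file from a SQL query
tb pipe generate top_products 'select * from existing_datasource'

# Push it to the Workspace, then print the data it returns
tb push pipes/top_products.pipe
tb pipe data top_products
```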
tb pull¶
Retrieve the latest version for project files from your Workspace.
With these options:
--folder DIRECTORY
: Folder where files will be placed
--auto / --no-auto
: Save datafiles automatically into their default directories (/datasources or /pipes). Default is True
--match TEXT
: Retrieve any resource matching the pattern, e.g. --match _test
-f, --force
: Override existing files
--fmt
: Format files, following the same format as tb fmt
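For example, to sync test resources into their default directories, formatted consistently:

```shell
tb pull --match _test --auto --fmt
```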
tb push¶
Push files to your Workspace.
You can use this command with these options:
--dry-run
: Run the command with static checks, without creating resources on the Tinybird account or causing any other side effect. Doesn't check for runtime errors
--check / --no-check
: Enable/disable output checking, enabled by default
--push-deps
: Push dependencies, disabled by default
--only-changes
: Push only the resources that have changed compared to the destination Workspace
--debug
: Print the internal representation; can be combined with any command to get more information
-f, --force
: Override Pipes when they already exist
--override-datasource
: When pushing a Pipe with a materialized Node, if the target Data Source exists, try to override it
--populate
: Populate materialized Nodes when pushing them
--subset FLOAT
: Populate with a subset percent of the data (limited to a maximum of 2M rows). This is useful to quickly test a materialized Node with some data. The subset must be greater than 0 and lower than 0.1. A subset of 0.1 means 10 percent of the data in the source Data Source will be used to populate the Materialized View. Use it together with --populate; it takes precedence over --sql-condition
--sql-condition TEXT
: Populate with a SQL condition applied to the trigger Data Source of the Materialized View. For instance, --sql-condition='date == toYYYYMM(now())' populates with all the rows from the trigger Data Source whose date is in the current month. Use it together with --populate. --sql-condition is not taken into account if the --subset param is present. Including in the sql_condition any column present in the Data Source engine_sorting_key makes the populate job process less data
--unlink-on-populate-error
: If the populate job fails, the Materialized View is unlinked and new data won't be ingested in it. The first time a populate job fails, the Materialized View is always unlinked
--fixtures
: Append fixtures to Data Sources
--wait
: To be used along with --populate. Waits for populate jobs to finish, showing a progress bar. Disabled by default
--yes
: Do not ask for confirmation
--only-response-times
: Check only response times when force-pushing a Pipe
--workspace TEXT..., --workspace_map TEXT...
: Add a Workspace path to the list of external Workspaces. Usage: --workspace name path/to/folder
--no-versions
: When set, resource dependency versions are not used; dependencies are pushed as-is
--timeout FLOAT
: Timeout to use for the populate job
-l, --limit INTEGER RANGE
: Number of requests to validate [0<=x<=100]
--sample-by-params INTEGER RANGE
: When set, the pipe_stats_rt requests are aggregated by extractURLParameterNames(assumeNotNull(url)), and for each combination a sample of N requests is taken [1<=x<=100]
-ff, --failfast
: When set, the checker exits as soon as one test fails
--ignore-order
: When set, the checker ignores the order of list properties
--validate-processed-bytes
: When set, the checker validates that the new version doesn't process more than 25% more than the current version
--user_token TEXT
: The User Token, required for sharing a Data Source that contains the SHARED_WITH entry
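Two illustrative invocations (the Pipe file name is hypothetical):

```shell
# Validate first, then push only what changed plus its dependencies
tb push --dry-run --only-changes --push-deps
tb push --only-changes --push-deps

# Force-push a Pipe and smoke-test its Materialized View with 1% of the data
tb push pipes/sales_by_day.pipe --force --populate --subset 0.01 --wait
```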
tb sql¶
Run SQL queries over your Data Sources and Pipes. Its options:
--rows_limit INTEGER
: Max number of rows retrieved
--pipeline TEXT
: The name of the Pipe to run the SQL query in
--pipe TEXT
: The path to the .pipe file to run the SQL query of a specific Node
--node TEXT
: The Node name
--format [json|csv|human]
: Output format
--stats / --no-stats
: Show query stats
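For example (the Data Source name is illustrative):

```shell
# Ad-hoc query with JSON output and query stats
tb sql --format json --stats "select count() from events"
```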
tb token¶
Manage your Workspace Tokens.
Command | Description | Options |
---|---|---|
copy OPTIONS TOKEN_ID | Copy a Token | |
ls OPTIONS | List Tokens | --match TEXT : Retrieve any Token matching the pattern, e.g. --match _test |
refresh OPTIONS TOKEN_ID | Refresh a Token | --yes : Do not ask for confirmation |
rm OPTIONS TOKEN_ID | Remove a Token | --yes : Do not ask for confirmation |
scopes OPTIONS TOKEN_ID | List Token scopes | |
create static OPTIONS TOKEN_NAME | Create a static Token that never expires | |
create jwt OPTIONS TOKEN_NAME | Create a JWT Token with a fixed expiration time | |
tb workspace¶
Manage your Workspaces.
Command | Description | Options |
---|---|---|
clear OPTIONS | Drop all the resources inside a project. This command is dangerous because it removes everything; use with care | |
create OPTIONS WORKSPACE_NAME | Create a new Workspace for your Tinybird user | |
current OPTIONS | Show the Workspace you're currently authenticated to | |
delete OPTIONS WORKSPACE_NAME_OR_ID | Delete a Workspace where you are an admin | |
ls OPTIONS | List all the Workspaces you have access to in the account you're currently authenticated to | |
members add OPTIONS MEMBERS_EMAILS | Add members to the current Workspace | --user_token TEXT : When passed, the CLI doesn't prompt for it |
members ls OPTIONS | List members in the current Workspace | |
members rm OPTIONS | Remove members from the current Workspace | --user_token TEXT : When passed, the CLI doesn't prompt for it |
members set-role OPTIONS [guest|viewer|admin] MEMBERS_EMAILS | Set the role for existing Workspace members | --user_token TEXT : When passed, the CLI doesn't prompt for it |
use OPTIONS WORKSPACE_NAME_OR_ID | Switch to another Workspace. Use 'tb workspace ls' to list the Workspaces you have access to | |
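For instance (the Workspace name and email are illustrative):

```shell
# Create a Workspace, switch to it, and add a member
tb workspace create analytics_prod
tb workspace use analytics_prod
tb workspace members add alice@example.com
```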
tb tag¶
Manage your Workspace tags.
Command | Description | Options |
---|---|---|
create TAG_NAME | Creates a tag in the current Workspace. | |
ls | List all the tags of the current Workspace. | |
ls TAG_NAME | List all the resources tagged with the given tag. | |
rm TAG_NAME | Remove a tag from the current Workspace. Resources tagged with it will no longer be tagged | --yes : Do not ask for confirmation |
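For example (the tag name is illustrative):

```shell
# Create a tag, then list the resources tagged with it
tb tag create marketing
tb tag ls marketing
```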