Configure local testing

Testing your data project locally ensures that your resources work as expected before you deploy to Tinybird.

There are several ways of generating test data for your local project. You can:

  • Add fixture files containing sample data.
  • Generate mock data using tb mock.
  • Call the local ingest APIs.

Check deployment

After you finish developing your project, run tb deploy --check to validate the deployment before creating it. This is a good way of catching potential breaking changes. See tb deploy for more information.

tb deploy --check

Running against Tinybird Local

» Validating deployment...

» Changes to be deployed...

-----------------------------------------------------------------
| status   | name         | path                                |
-----------------------------------------------------------------
| modified | user_actions | datasources/user_actions.datasource |
-----------------------------------------------------------------

✓ Deployment is valid

Fixture files

Fixtures are NDJSON files that contain sample data for your data sources. They're stored in the fixtures folder of your project.

my-app/
├─ datasources/
│  ├─ user_actions.datasource
│  └─ ...
├─ fixtures/
│  ├─ user_actions.ndjson
│  └─ ...

Every time you run tb build, the CLI checks for fixture files and includes them in the build. Fixture files must have the same name as the associated .datasource file.
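
For example, assuming user_actions has action, user_id, and timestamp columns (the same shape used in the test example later in this section), fixtures/user_actions.ndjson might contain:

{"action":"CLICKED","user_id":1,"timestamp":"2025-03-19T01:58:31Z"}
{"action":"VIEWED","user_id":2,"timestamp":"2025-03-20T05:34:22Z"}
{"action":"CLICKED","user_id":3,"timestamp":"2025-03-21T19:21:34Z"}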

Generate mock data

The tb mock command creates fixtures based on your data sources. See tb mock for more information.

For example, the following command creates a fixture for the user_actions data source.

tb mock user_actions

» Creating fixture for user_actions...
✓ /fixtures/user_actions.ndjson created
...

You can use the --prompt flag to add more context to the data that is generated. For example:

tb mock user_actions --prompt "Create mock data for 23 users from the US"

Call the ingest APIs

Another way of testing your project is to call the local ingest APIs:

Obtain a token using tb token ls and call the local endpoint:

curl \
      -X POST 'http://localhost:7181/v0/events?name=<your_datasource>' \
      -H "Authorization: Bearer <your_token>" \
      -d $'<your_data>'
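
For example, assuming the user_actions data source described earlier and a token stored in a TB_TOKEN environment variable (both are assumptions for this sketch), a call to the Events API might look like this:

curl \
      -X POST 'http://localhost:7181/v0/events?name=user_actions' \
      -H "Authorization: Bearer $TB_TOKEN" \
      -d $'{"action":"CLICKED","user_id":1,"timestamp":"2025-03-19T01:58:31Z"}'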

As you call the APIs, you can see errors and warnings in the console. Use this information to debug your datafiles.

Create a test suite

Once your project builds correctly, you can generate a test suite using tb test create.

For example, the following command creates a test suite for the user_action_insights_widget pipe.

# Pass a pipe name to create a test
tb test create user_action_insights_widget

Then, customize the tests to fit your needs.

You can use the --prompt flag to add more context to the tests that are generated. For example:

tb test create user_action_insights_widget --prompt "return user actions filtering by CLICKED"

The output of the command is a test suite file that you can find in the tests folder of your project.

- name: user_action_insights_widget_clicked
  description: Test the endpoint that returns user actions filtering by CLICKED
  parameters: action=CLICKED
  expected_result: |
    {"action":"CLICKED", "user_id":1, "timestamp":"2025-03-19T01:58:31Z"}
    {"action":"CLICKED", "user_id":2, "timestamp":"2025-03-20T05:34:22Z"}
    {"action":"CLICKED", "user_id":3, "timestamp":"2025-03-21T19:21:34Z"}

When creating tests, follow these guidelines:

  • Give each test a meaningful name and description that explains its purpose.
  • Define query parameters without quotes.
  • The expected_result should match the data object from your endpoint's response.
  • An empty string ('') in the expected_result means the endpoint returns no data. If an empty result is unexpected, verify your endpoint's output and update the test by running:

tb test update user_action_insights_widget

Once you have your test suite, you can run it using the tb test run command.

tb test run

» Running tests

* user_action_insights_widget_clicked.yaml
✓ user_action_insights_widget_clicked passed

✓ 1/1 passed
