Include files (.incl)

Include files (.incl) let you separate connector settings and node templates from your datafiles and reuse them across multiple .datasource and .pipe files.

Include files are referenced using the INCLUDE instruction.
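
For example, this is what a reference looks like inside a .datasource or .pipe file, using a path relative to the datafile:

INCLUDE "connections/kafka_connection.incl"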

Connector settings

Use .incl files to separate connector settings from .datasource files.

For example, the following .incl file contains Kafka Connector settings:

tinybird/datasources/connections/kafka_connection.incl
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS my_server:9092
KAFKA_KEY my_username
KAFKA_SECRET my_password

The .datasource file then only needs a reference to the .incl file using INCLUDE:

tinybird/datasources/kafka_ds.datasource
SCHEMA >
    `value` String,
    `topic` LowCardinality(String),
    `partition` Int16,
    `offset` Int64,
    `timestamp` DateTime,
    `key` String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp"

INCLUDE "connections/kafka_connection.incl"

KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id
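
When the CLI processes this file, the INCLUDE line is expanded in place, so the Kafka section of the Data Source above is equivalent to writing the settings inline:

KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS my_server:9092
KAFKA_KEY my_username
KAFKA_SECRET my_password
KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id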

Pipe nodes

You can use .incl datafiles to reuse node templates.

For example, the following .incl file contains a node template:

tinybird/includes/only_buy_events.incl
NODE only_buy_events
SQL >
    SELECT
        toDate(timestamp) date,
        product,
        color,
        JSONExtractFloat(json, 'price') as price
    FROM events
    WHERE action = 'buy'

The .pipe file starts with the INCLUDE reference to the template:

tinybird/endpoints/sales.pipe
INCLUDE "../includes/only_buy_events.incl"

NODE endpoint
DESCRIPTION >
    Returns sales per date, filtered by color
SQL >
    %
    select date, sum(price) total_sales
    from only_buy_events
    where color in {{Array(colors, 'black')}}
    group by date
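
Once the pipe is published as an API Endpoint, the colors parameter defined in the template can be passed in the request. A hypothetical call, assuming the Workspace lives on api.tinybird.co and <TOKEN> has read access to the pipe:

curl "https://api.tinybird.co/v0/pipes/sales.json?colors=black,white&token=<TOKEN>"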

A different .pipe file can reuse the same template:

tinybird/pipes/top_per_day.pipe
INCLUDE "../includes/only_buy_events.incl"

NODE top_per_day
SQL >
    SELECT
        date,
        topKState(10)(product) top_10,
        sumState(price) total_sales
    FROM only_buy_events
    GROUP BY date

TYPE MATERIALIZED
DATASOURCE mv_top_per_day
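
The Materialized View writes into the Data Source named by DATASOURCE, which needs its own .datasource file. A minimal sketch of what it could look like, assuming product is a String and price a Float64 (the result of JSONExtractFloat):

tinybird/datasources/mv_top_per_day.datasource
SCHEMA >
    `date` Date,
    `top_10` AggregateFunction(topK(10), String),
    `total_sales` AggregateFunction(sum, Float64)

ENGINE "AggregatingMergeTree"
ENGINE_SORTING_KEY "date"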

Include with variables

You can templatize .incl files. For instance, you can reuse the same .incl template with different variable values:

tinybird/includes/top_products.incl
NODE endpoint

DESCRIPTION >
    returns top 10 products for the last week

SQL >
    %
    select
        date,
        topKMerge(10)(top_10) as top_10
    from top_product_per_day

    {% if '$DATE_FILTER' == 'last_week' %}
        where date > today() - interval 7 day
    {% else %}
        where date between {{Date(start)}} and {{Date(end)}}
    {% end %}

    group by date

The $DATE_FILTER parameter is a variable in the .incl file. The following examples show how to create two separate endpoints by injecting different values for the DATE_FILTER variable.

The following .pipe file references the template using a last_week value for DATE_FILTER:

tinybird/endpoints/top_products_last_week.pipe
INCLUDE "../includes/top_products.incl" "DATE_FILTER=last_week"

In contrast, the following .pipe file references the template using a between_dates value for DATE_FILTER:

tinybird/endpoints/top_products_between_dates.pipe
INCLUDE "../includes/top_products.incl" "DATE_FILTER=between_dates"

Include with environment variables

Because INCLUDE files are expanded by the Tinybird CLI, you can use environment variables inside them.

For example, if you have configured the KAFKA_BOOTSTRAP_SERVERS, KAFKA_KEY, and KAFKA_SECRET environment variables, you can create an .incl file as follows:

tinybird/datasources/connections/kafka_connection.incl
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS ${KAFKA_BOOTSTRAP_SERVERS}
KAFKA_KEY ${KAFKA_KEY}
KAFKA_SECRET ${KAFKA_SECRET}
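
The values are taken from the shell environment when the CLI expands the datafile. A hypothetical session, assuming the classic tb push workflow:

export KAFKA_BOOTSTRAP_SERVERS=my_server:9092
export KAFKA_KEY=my_username
export KAFKA_SECRET=my_password
tb push datasources/kafka_ds.datasource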

You can then use the values in your .datasource datafiles:

tinybird/datasources/kafka_ds.datasource
SCHEMA >
    `value` String,
    `topic` LowCardinality(String),
    `partition` Int16,
    `offset` Int64,
    `timestamp` DateTime,
    `key` String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp"

INCLUDE "connections/kafka_connection.incl"

KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id

Alternatively, you can create a separate .incl file per environment:

tinybird/datasources/connections/kafka_connection_prod.incl
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS production_servers
KAFKA_KEY the_kafka_key
KAFKA_SECRET ${KAFKA_SECRET}

tinybird/datasources/connections/kafka_connection_stg.incl
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS staging_servers
KAFKA_KEY the_kafka_key
KAFKA_SECRET ${KAFKA_SECRET}

Then include one or the other depending on the environment:

tinybird/datasources/kafka_ds.datasource
SCHEMA >
    `value` String,
    `topic` LowCardinality(String),
    `partition` Int16,
    `offset` Int64,
    `timestamp` DateTime,
    `key` String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp"

INCLUDE "connections/kafka_connection_${TB_ENV}.incl"

KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id

Here, $TB_ENV is either stg or prod.
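
For example, a hypothetical deployment step could set TB_ENV before pushing, again assuming the classic tb push workflow:

TB_ENV=stg tb push datasources/kafka_ds.datasource     # expands kafka_connection_stg.incl
TB_ENV=prod tb push datasources/kafka_ds.datasource    # expands kafka_connection_prod.incl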

See deploy to staging and production environments to learn how to leverage environment variables.
