If you have a real-time dashboard in your application or plan on building one, you can improve it with LLMs. There are a ton of ways to add AI features to your real-time dashboards, but here, I'm going to focus on filtering.
You know what a dashboard filter is: the little pills, checkboxes, and dropdowns you can click to filter the results. A proper real-time dashboard will update to show filtered results almost immediately.
But let's say you have a lot of filter dimensions. Sidebars and filter drawers get clunky in this case. Better to just have a single text input. Pass the input to an LLM, and have it generate the filters, like this:
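For example, the user types a request in plain English and the LLM translates it into the filter parameters your dashboard already understands. Something like this (the exact shape is up to you):

```json
{
  "prompt": "gpt-4o errors in production",
  "filters": { "model": "gpt-4o", "environment": "production" }
}
```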
Here's how you build that, step-by-step:
Context, data, and prerequisites
Before I dive into the implementation, let's set the context. We are going to build a dashboard filter component that:
- Uses an LLM to parse a free text user input and apply filters to a real-time dashboard
- Refreshes the dashboard very quickly
- Keeps filtering fast even when the underlying dataset grows very large
- Can handle large sets of dimensions with high cardinality
For this tutorial, I'm riffing on this open source LLM Performance Tracker template by Tinybird, which includes a natural language filter feature (see the video above).
The underlying data for this dashboard has the following schema:
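Roughly, in Tinybird datasource syntax (a sketch; see the template for the exact definition):

```
SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `organization` String `json:$.organization`,
    `project` String `json:$.project`,
    `environment` LowCardinality(String) `json:$.environment`,
    `provider` LowCardinality(String) `json:$.provider`,
    `model` LowCardinality(String) `json:$.model`,
    `duration` Float64 `json:$.duration`,
    `prompt_tokens` UInt64 `json:$.prompt_tokens`,
    `completion_tokens` UInt64 `json:$.completion_tokens`,
    `total_tokens` UInt64 `json:$.total_tokens`,
    `cost` Float64 `json:$.cost`,
    `exception` String `json:$.exception`

ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp, organization, project"
```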
You can see it stores a bunch of performance metrics and metadata for LLM call events.
The live demo allows you to select values for the following filter dimensions:
- model
- provider
- organization
- project
- environment
When you click a specific model, for example, the dashboard will update to only show metrics for that model.
Prerequisites
I'm going to assume that you already have a dashboard you want to filter, so you can apply these steps generally to your use case. If you want to create a quick data project to follow along, use these commands to bootstrap something quick with Tinybird:
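Roughly like this (assuming the current Tinybird Forward CLI; check the docs for exact flags):

```sh
# Install the CLI and start a local Tinybird instance
curl https://tinybird.co | sh
tb login
tb local start

# Scaffold a project with an LLM events datasource and endpoint pipes
tb create --prompt "LLM call events with model, provider, organization, project, and environment dimensions, plus token, duration, and cost metrics"

# Generate 100,000 rows of fake data and build locally
tb mock llm_events --rows 100000
tb dev
```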
That will deploy a basic Tinybird datasource and API endpoint on your local machine with 100,000 rows of data for testing.
Now, let's see how to replace "click-to-filter" with "prompt-to-filter"...
Step 1. Review your API
I'm assuming that you have an API route for your real-time dashboard that can accept various parameters to request filtered data to visualize in the dashboard. Something like this:
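```
GET https://your-api.com/v0/dashboard/usage?model=gpt-4o&organization=acme&environment=production
```

(A hypothetical route; the point is that every filter arrives as an optional query parameter.)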
In Tinybird, for example, any SQL pipe you build is automatically deployed as a REST endpoint with optional query parameters.
My Tinybird API definition looks like this:
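Here's a close sketch (call it `llm_usage.pipe`; the template's real file differs in the details):

```
NODE timeseries
SQL >
    %
    SELECT
        toDate(timestamp) AS date,
        {{ column(column, 'model') }} AS category,
        countIf(exception != '') AS total_errors,
        sum(total_tokens) AS total_tokens,
        sum(completion_tokens) AS completion_tokens,
        sum(duration) AS total_duration,
        sum(cost) AS total_cost
    FROM llm_events
    WHERE 1 = 1
        {% if defined(organization) %} AND organization = {{ String(organization) }} {% end %}
        {% if defined(project) %} AND project = {{ String(project) }} {% end %}
        {% if defined(environment) %} AND environment = {{ String(environment) }} {% end %}
        {% if defined(provider) %} AND provider = {{ String(provider) }} {% end %}
        {% if defined(model) %} AND model = {{ String(model) }} {% end %}
    GROUP BY date, category
    ORDER BY date

TYPE endpoint
```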
A quick summary of this API:
- It uses Tinybird's pipe syntax, defining a single SQL node to select from the `llm_events` table.
- It returns time series aggregations, grouped by `date` and `category`, of various LLM call metrics such as errors, total tokens, completion tokens, duration, and cost.
- It accepts a `column` parameter that defines the grouping category (e.g., model, provider, etc.).
- It accepts many filter parameters (e.g., `organization`, `project`, `model`) which are conditionally applied in the `WHERE` clause if they are passed.
- These parameters are defined using Tinybird's templating language.
So I can pass a value for any of these filter parameters, and Tinybird will query the database for data that matches those filters and return the response as a JSON payload that I can use to hydrate my chart.
In the past, I'd create a UI component in my dashboard to allow a user to select those filters. Here, we're using AI.
Step 2. Create an LLM filter API route
To start building your natural language filter, you need a POST route handler to accept the user prompt and return structured filter parameters.
The API route should implement the following logic:
- Accept a JSON payload with `prompt` and (optionally) `apiKey` fields (if you want the user to supply their own AI API key)
- Fetch the available dimensions for filtering
- Define a system prompt to guide the LLM in generating structured filter parameters for the response
- Query an LLM client with the API key, system prompt, and user prompt
- Return the LLM response (which should be a structured filter object as JSON)
- Handle errors, of course
If you want to see a full implementation of such an API route, just look at this. If you want step-by-step guidance, follow along.
Step 3. Define the system prompt
Perhaps the most important part of this is creating a good system prompt for the LLM. The goal is to have an LLM client that will accept user input and consistently output structured query parameters to pass to your dashboard API.
Here's a simple but effective system prompt example:
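```
You are a filter generator for an analytics dashboard of LLM call events.
Given a user's natural language request, respond ONLY with a JSON object
containing any of these keys: model, provider, organization, project,
environment. Omit keys the user did not mention.

Rules:
- Respond with valid JSON only. No prose, no markdown.
- Use values exactly as they appear in the data when possible.
- If the request doesn't map to any known filter, return {}.

Example:
User: "show me gpt-4o usage in production"
Response: {"model": "gpt-4o", "environment": "production"}
```

The few-shot example and the "JSON only" rule do most of the work here; without them, models tend to wrap the JSON in explanation or code fences.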
You could further extend this system prompt by passing available dimensions and example values. To make this work, you can query the underlying data. A Tinybird API works well for this:
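A sketch of such a pipe (call it `dimension_values.pipe`; `groupUniqArray` is the ClickHouse aggregate that collects distinct values):

```
NODE dimension_values
SQL >
    SELECT
        groupUniqArray(model) AS models,
        groupUniqArray(provider) AS providers,
        groupUniqArray(organization) AS organizations,
        groupUniqArray(project) AS projects,
        groupUniqArray(environment) AS environments
    FROM llm_events
    WHERE timestamp >= now() - INTERVAL 1 MONTH

TYPE endpoint
```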
This queries the underlying dataset (latest month of data) and returns an array of possible values for each of the five filter dimensions defined in the API.
This API can be used to show the LLM what is available.
You could create a little utility to fetch the dimensions and unique values:
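A minimal sketch, assuming the `dimension_values` pipe above and a `TINYBIRD_TOKEN` environment variable:

```ts
// lib/dimensions.ts
// Fetch the distinct values for each filter dimension from Tinybird.
export async function fetchDimensions(): Promise<Record<string, string[]>> {
  const res = await fetch(
    `https://api.tinybird.co/v0/pipes/dimension_values.json?token=${process.env.TINYBIRD_TOKEN}`
  );
  if (!res.ok) throw new Error(`Failed to fetch dimensions: ${res.status}`);
  const { data } = await res.json();
  // The pipe returns a single row: one array of unique values per dimension
  return data[0];
}
```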
And then call that to define the system prompt dynamically:
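```ts
// lib/prompt.ts
import { fetchDimensions } from './dimensions';

// Build the system prompt at request time so the LLM always sees
// the dimension values currently present in the data.
export async function buildSystemPrompt(): Promise<string> {
  const dims = await fetchDimensions();
  const catalog = Object.entries(dims)
    .map(([dimension, values]) => `${dimension}: ${values.join(', ')}`)
    .join('\n');

  return `You are a filter generator for an analytics dashboard.
Respond ONLY with a JSON object. Valid keys and known values:

${catalog}

Omit keys the user did not mention. Respond with valid JSON only.`;
}
```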
Step 4. Create the LLM client
Once you've defined a good system prompt, it's as simple as creating an LLM client in the API route and passing the system prompt + prompt.
For example:
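One way to do it, using the Vercel AI SDK with OpenAI and a Zod schema to force structured output (a sketch; swap in your provider of choice):

```ts
// app/search/route.ts
import { NextResponse } from 'next/server';
import { createOpenAI } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';
import { buildSystemPrompt } from '@/lib/prompt'; // sketched above

// Every filter is optional; the LLM only sets what the user asked for
const filterSchema = z.object({
  model: z.string().optional(),
  provider: z.string().optional(),
  organization: z.string().optional(),
  project: z.string().optional(),
  environment: z.string().optional(),
});

export async function POST(req: Request) {
  try {
    const { prompt, apiKey } = await req.json();
    const openai = createOpenAI({ apiKey: apiKey ?? process.env.OPENAI_API_KEY });

    // Pass the system prompt + user prompt, get back a typed filter object
    const { object: filters } = await generateObject({
      model: openai('gpt-4o-mini'),
      schema: filterSchema,
      system: await buildSystemPrompt(),
      prompt,
    });

    return NextResponse.json(filters);
  } catch {
    return NextResponse.json({ error: 'Failed to generate filters' }, { status: 500 });
  }
}
```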
Step 5. Capture and pass the user prompt
I'm not going to share how to build a UI input component to capture the user prompt. It's 2025, and any LLM can 1-shot that component for you.
But the idea here is that your API route should accept the prompt when the user submits it.
For example, here's a basic way to call the LLM filter API route (`/search`) within a function triggered by an Enter key event handler:
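```tsx
'use client';

// A sketch of the search input as a Next.js client component.
// `applyFilters` (next step) writes the returned filters into the URL.
export function SearchInput({
  applyFilters,
}: {
  applyFilters: (filters: Record<string, string>) => void;
}) {
  // POST the raw prompt to the /search route and return the parsed filters
  const handleSearch = async (prompt: string): Promise<Record<string, string>> => {
    const res = await fetch('/search', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    });
    if (!res.ok) throw new Error('Filter generation failed');
    return res.json();
  };

  return (
    <input
      type="text"
      placeholder="Filter in plain English..."
      onKeyDown={(e) => {
        if (e.key !== 'Enter') return;
        const prompt = e.currentTarget.value; // capture before the async call
        handleSearch(prompt).then(applyFilters);
      }}
    />
  );
}
```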
Step 6. Update the filters based on the API response
After you've passed your user input to the LLM and gotten a response from the API route, you just need to fetch your dashboard API with the new set of filter parameters.
For example, taking the response from the above `handleSearch` function:
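```tsx
'use client';

import { useRouter, useSearchParams } from 'next/navigation';

// Returns the `applyFilters` function used by the search input above:
// it merges the LLM-generated filters into the URL search params, and
// any chart component reading useSearchParams() refetches automatically.
export function useApplyFilters() {
  const router = useRouter();
  const searchParams = useSearchParams();

  return (filters: Record<string, string>) => {
    const params = new URLSearchParams(searchParams.toString());
    for (const [key, value] of Object.entries(filters)) {
      if (value) params.set(key, value);
      else params.delete(key);
    }
    router.push(`?${params.toString()}`);
  };
}
```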
In this case, we add the new filter params to the URL of the dashboard and use the `useSearchParams` hook in the chart components, updating each chart with the applied search params.
Step 7. Test it
So far, we have:
- Created an API route that accepts a user input, passes it to an LLM with a system prompt, and returns a structured filter JSON
- Added a user input component that passes the prompt to the API route
- Updated the filter parameters in the URL search params based on the API response
So, looking back at the data model, let's imagine we used the following text input:
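```
show me gpt-4o costs for the acme organization in production
```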
The search API should return something like this:
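```json
{
  "model": "gpt-4o",
  "organization": "acme",
  "environment": "production"
}
```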
Which would update the URL of the dashboard to:
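```
https://your-dashboard.com/?model=gpt-4o&organization=acme&environment=production
```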
Which would trigger a new fetch of the Tinybird API for our time series chart:
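Assuming the `llm_usage` pipe sketched earlier:

```
https://api.tinybird.co/v0/pipes/llm_usage.json?model=gpt-4o&organization=acme&environment=production&token=<READ_TOKEN>
```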
Giving us an API response that looks something like this:
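Abridged to a couple of illustrative rows here; note the `statistics` block, which we'll use in a moment:

```json
{
  "meta": [
    { "name": "date", "type": "Date" },
    { "name": "category", "type": "String" },
    { "name": "total_cost", "type": "Float64" }
  ],
  "data": [
    { "date": "2025-05-01", "category": "gpt-4o", "total_cost": 1.42 },
    { "date": "2025-05-02", "category": "gpt-4o", "total_cost": 1.87 }
  ],
  "rows": 2,
  "statistics": {
    "elapsed": 0.007,
    "rows_read": 4992,
    "bytes_read": 1089536
  }
}
```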
Which we can use to hydrate the chart. Boom.
Performance
A real-time dashboard should filter quickly. With a typical click-to-filter approach, there's no LLM in the loop, so the only latency is the query itself. In fact, if you look at the statistics from the Tinybird API response above, you can see the filtered query took just 7 ms, scanning about 5,000 rows.
Of course, as events grow into the millions or billions, we might expect some performance degradation there, but there are plenty of strategies in Tinybird to maintain sub-second query response times even as data becomes massive. This is the benefit of using Tinybird.
As far as the LLM response, you can query the underlying Tinybird table to see how long the LLM takes to respond, on average:
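```sql
-- Average LLM latency per model over the last day
-- (assumes the llm_events schema sketched earlier, with duration in seconds)
SELECT
    model,
    round(avg(duration), 2) AS avg_duration_s,
    count() AS calls
FROM llm_events
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY model
ORDER BY avg_duration_s DESC
```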
By the way, the LLM Performance Tracker template on which I based this tutorial actually includes a filter selection to analyze your own LLM calls within the dashboard, which we can use to see this in action:
In my case, the LLM typically took under a second to respond. Looking at the network waterfall, I could see the actual response time of the `/search` API route.
In this particular case, the response was under 4 seconds. To be honest, that's not ideal for a real-time dashboard, but it's something that can be difficult to control when using a remote LLM.
To further improve the performance, you could consider something like WebLLM to run the LLM in the browser to perform this simple task. Cutting down on network times could improve performance significantly.
Conclusion
The way we search and visualize data is changing a lot thanks to AI. There are a lot of AI features you can add to your application, and a simple one I've shown here is natural language filtering of real-time analytics dashboards.
If you'd like to see a complete example implementation of natural language filtering, check out the LLM Performance Tracker by Tinybird. It's an open source template to monitor LLM usage, and it includes (as I have shown here) a feature to enable natural language filtering on LLM call data.
You can use it as a reference for your own natural language filtering project, or fork it to deploy your own LLM tracker, or just use the hosted public version if you want to track LLM usage in your application.
Alternatively, check out Dub.co, an open source shortlink platform. They have a nice "Ask AI" feature that you can use for reference. Here's the repo.
Dub's "Ask AI" feature