
Building a Live F1 Dashboard Using OpenF1 and Streamlit

A modern F1 car carries hundreds of sensors and streams gigabytes of telemetry over a race weekend. Imagine turning that torrent of live data into an interactive dashboard you can explore in seconds, no PhD in data engineering required. In this guide we’ll show you how to do exactly that with OpenF1 and Streamlit, giving you a hands‑on project that blends data science, machine learning and rapid web‑app development.

1️⃣ Getting Started – Setting Up the Environment

First things first: let's get the right tools in place. You'll need Python 3.10 or newer. Run the following in your terminal:
pip install streamlit openf1-client pandas
The openf1-client package is a thin wrapper around the public REST API, so you don't have to write raw HTTP requests. Next, grab an API key by heading to the OpenF1 portal and signing up for a free token. Test it with a quick `curl` to see if the endpoint is talking back:
curl -H "Authorization: Bearer YOUR_TOKEN" https://openf1.org/api/v1/live
If you see a JSON blob with laps and telemetry, you’re good. Now create a project skeleton:
mkdir f1_dashboard
cd f1_dashboard
mkdir app data utils
touch app/__init__.py app/main.py
git init
Honestly, keeping the folder layout tidy pays off when you later add models, utils and sample data.

2️⃣ Pulling Live Telemetry – Working with the OpenF1 API

Understanding the data model is half the battle. OpenF1 splits everything into sessions, laps, car status, and driver metadata. A single lap is a time series of power, speed, throttle, brake, tyre pressure, and GPS. If you’re building a live feed, poll the `/live` endpoint at a steady cadence: every 1–2 seconds is plenty for a dashboard, keeps you well clear of the rate limit, and doesn't chew up bandwidth. Parsing the JSON into a tidy `pandas` DataFrame is pretty straightforward:
import pandas as pd
import openf1.client as ofc

client = ofc.OpenF1Client(token="YOUR_TOKEN")
live_data = client.get_live()

# Flatten the nested telemetry records into a tabular frame
df = pd.json_normalize(live_data["telemetry"])
df["timestamp"] = pd.to_datetime(df["timestamp"])
You’ll notice the `timestamp` column is already ISO‑8601, so converting it to a `datetime` index is a breeze. From here, you can slice by driver, compute lap‑time deltas, or aggregate speeds.
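To make the lap‑time delta idea concrete, here is a minimal, self‑contained sketch on synthetic data (the driver abbreviations and timestamps are made up for illustration; the real frame would come from `pd.json_normalize` as above):

```python
import pandas as pd

# Synthetic telemetry: two drivers crossing the timing line at different moments
df = pd.DataFrame({
    "driver": ["HAM", "HAM", "HAM", "VER", "VER"],
    "timestamp": pd.to_datetime([
        "2024-05-05 14:00:00", "2024-05-05 14:01:30", "2024-05-05 14:03:02",
        "2024-05-05 14:00:10", "2024-05-05 14:01:41",
    ]),
    "speed": [301.2, 298.7, 305.4, 299.9, 302.1],
})

# Per-driver gap between consecutive samples, in seconds; the first sample
# of each driver has no predecessor, so its delta is NaN
df["lap_delta"] = df.groupby("driver")["timestamp"].diff().dt.total_seconds()

print(df[["driver", "lap_delta"]])
# HAM: NaN, 90.0, 92.0 — VER: NaN, 91.0
```

The `groupby("driver")` is what keeps one driver's deltas from bleeding into another's; a plain `diff()` on the whole column would compute nonsense at every driver boundary.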

3️⃣ Visualizing in Real‑Time with Streamlit (Code Walkthrough)

So here’s the trick: use `st.empty()` to create a placeholder that you can refresh in place. That saves a full page reload and keeps the user’s eye on the moving data. Below is a minimal “Live Lap‑Time Dashboard” that updates every two seconds:
import streamlit as st
import pandas as pd
import requests
import time
import joblib

st.title("Live F1 Lap‑Time Dashboard")
st.sidebar.header("Settings")

# Load the pre‑trained pit‑stop model (see section 4)
model = joblib.load("models/pitstop_rf.pkl")

driver_selector = st.sidebar.multiselect(
    "Drivers",
    options=["Hamilton", "Verstappen", "Leclerc"],
    default=["Hamilton"],
)

# Placeholders let us redraw in place; creating widgets inside the loop
# would stack a new copy on every iteration
placeholder = st.empty()
st.sidebar.subheader("Pit‑Stop Probability")
prob_table = st.sidebar.empty()

while True:
    resp = requests.get(
        "https://openf1.org/api/v1/live",
        headers={"Authorization": f"Bearer {st.secrets.OPENF1_TOKEN}"}
    )
    resp.raise_for_status()
    data = resp.json()

    df = pd.json_normalize(data["telemetry"])
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df = df[df["driver"].isin(driver_selector)]

    # Compute lap‑time delta per driver
    df["lap_delta"] = df.groupby("driver")["timestamp"].diff().dt.total_seconds()

    # Feature engineering for ML
    features = df[["speed", "throttle", "brake", "tyre_temp"]]
    df["pit_prob"] = model.predict_proba(features)[:, 1]

    # pivot_table tolerates duplicate (timestamp, driver) pairs, unlike pivot
    chart_df = df.pivot_table(
        index="timestamp", columns="driver", values="lap_delta", aggfunc="last"
    )
    placeholder.line_chart(chart_df)
    prob_table.table(df[["driver", "pit_prob"]].drop_duplicates("driver"))

    time.sleep(2)
Notice how the chart updates smoothly and the sidebar shows the latest pit‑stop likelihood. Now you can add sliders, maps or any other widget – the architecture is ready for growth.

4️⃣ Adding Machine‑Learning Insights – Predicting Pit‑Stop Strategy

You've already seen the model sneak a peek into the future. In my experience, a simple `RandomForestClassifier` works surprisingly well for this use case, especially when trained on lagged features. Feature engineering is key: take the last 10 seconds of speed, acceleration, tyre temperature, and compute rolling means and standard deviations. Train offline on a clean dataset of past seasons, then pickle the estimator as shown above. When you call `model.predict_proba`, you get a confidence level that a pit‑stop will happen in the next lap. You can color‑code the line chart based on that probability, making the dashboard a real decision‑support tool for the team.
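As a sketch of that feature‑engineering step, here is the rolling‑window idea on synthetic data (the column names mirror the dashboard code, but the 3‑sample window and the values are illustrative assumptions, not tuned choices):

```python
import pandas as pd

# Synthetic telemetry, sampled once per second for a single driver
df = pd.DataFrame({
    "driver":    ["HAM"] * 6,
    "speed":     [300.0, 302.0, 301.0, 250.0, 180.0, 120.0],
    "tyre_temp": [ 98.0,  99.0, 100.0, 103.0, 107.0, 112.0],
})

# Rolling mean and std over the last 3 samples, computed per driver so
# windows never bleed across driver boundaries
for col in ["speed", "tyre_temp"]:
    df[f"{col}_mean3"] = df.groupby("driver")[col].transform(lambda s: s.rolling(3).mean())
    df[f"{col}_std3"]  = df.groupby("driver")[col].transform(lambda s: s.rolling(3).std())

print(df[["speed", "speed_mean3", "speed_std3"]].round(2))
```

These `*_mean3` / `*_std3` columns are the kind of lagged features the classifier would be trained on: the falling rolling speed and rising tyre temperature in the last rows are exactly the pattern that tends to precede a pit stop.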

5️⃣ Why It Matters – Real‑World Impact of Live Data Dashboards

So why does any of this matter? One word: latency. Teams that can see telemetry in real time can adjust strategy on the fly, saving seconds that may turn into podium positions. Fans, too, get a richer experience, seeing the numbers that explain a driver’s move. And the skills you learn here – pulling, parsing, visualizing, and modeling live data – transfer to finance, IoT, or any field where speed of insight matters.

6️⃣ Actionable Takeaways & Next Steps

  • ✅ Set up the environment and API key.
  • ✅ Pull and parse live telemetry into DataFrames.
  • ✅ Build a Streamlit app with live charts and widgets.
  • ✅ Train a quick scikit‑learn model and drop it into the dashboard.
  • ✅ Deploy: push to Streamlit Community Cloud or Dockerize for 24/7 access.
Now, if you want to go further, play with deep‑learning time‑series models, add real‑time alerting via Slack, or even bundle the app into a mobile‑friendly dashboard. The possibilities are as open as the race track.

Frequently Asked Questions

Q1: How can I stream live F1 data with OpenF1 for free?

A: OpenF1 offers a public REST endpoint that returns JSON telemetry for the current session. By polling the `/live` endpoint every 1–2 seconds (respecting the rate limit), you can build a near‑real‑time feed without any paid subscription.
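A polling loop along those lines might look like the sketch below. The `fetch` callable is a stand‑in for the real HTTP call (`requests.get(...).json()`), injected so the snippet runs without a token or network access:

```python
import time

def poll_live(fetch, n_samples, interval=1.0):
    """Collect n_samples snapshots, sleeping `interval` seconds between polls."""
    snapshots = []
    for _ in range(n_samples):
        snapshots.append(fetch())
        time.sleep(interval)  # be polite: respect the API's rate limit
    return snapshots

# Stand-in for the real /live fetch, returning a fixed telemetry payload
fake_fetch = lambda: {"telemetry": [{"driver": "HAM", "speed": 301.2}]}

samples = poll_live(fake_fetch, n_samples=3, interval=0.01)
print(len(samples))  # 3
```

Injecting the fetch function also makes the loop trivially testable, which pays off once you wire in the real endpoint.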

Q2: What is the easiest way to display live charts in Streamlit?

A: Use `st.empty()` to create a placeholder, then inside a loop call `placeholder.line_chart(df)` after each API fetch. Streamlit automatically refreshes the UI without a full page reload.

Q3: Can I use scikit‑learn models on streaming data?

A: Yes. Train the model offline on historical laps, then load the fitted estimator in the Streamlit app and call `model.predict_proba(new_features)` on each new telemetry snapshot.

Q4: How do I deploy a live Streamlit dashboard so friends can view it?

A: Push the repo to GitHub and connect it to Streamlit Community Cloud, or containerize the app with Docker and run it on any cloud VM (AWS, GCP, Azure). Remember to store the OpenF1 token as a secret environment variable.
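For the Docker route, a minimal Dockerfile might look like this (the file names, port, and entry point are assumptions; adjust them to your repo layout):

```dockerfile
FROM python:3.10-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Streamlit's default port
EXPOSE 8501
CMD ["streamlit", "run", "app/main.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

Pass the token at runtime (for example `docker run -e OPENF1_TOKEN=... ...` or a mounted `.streamlit/secrets.toml`) rather than baking it into the image.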

Q5: What other sports or domains can I apply this live‑dashboard pattern to?

A: Any domain with a public streaming API—e.g., NBA play‑by‑play, stock‑market tick data, smart‑city IoT sensors—can reuse the same pull‑parse‑visualize‑ML pipeline demonstrated here.

