OpenAI Runs ChatGPT on a Single Postgres Instance
OpenAI powers 800 million ChatGPT users with one Postgres write instance. A deep look at TimescaleDB and what this says about database complexity.
Written by AI. Mike Sullivan
February 6, 2026

Photo: DevOps Toolbox / YouTube
Here's a fact that should make every architect who spent the last five years replacing Postgres with seven different NoSQL databases feel something: OpenAI is running ChatGPT—all 800 million users of it—on Postgres. With a single primary write instance.
Not Cassandra. Not MongoDB. Not some exotic distributed system that requires a dedicated team just to keep the lights on. Postgres. The database we've been using since the '90s.
According to DevOps Toolbox's recent breakdown, OpenAI is hitting double-digit millisecond P99 latency and maintaining five nines of availability on what might be the most high-profile AI platform on the planet. They're doing this while the rest of the industry has convinced itself that scale requires complexity.
I find this fascinating not because Postgres is some underdog—it's one of the most battle-tested pieces of software in existence. I find it fascinating because it reveals how thoroughly we've been sold on the idea that modern scale problems require modern solutions. Sometimes the old solutions just work.
The Time Series Problem
But here's where things get interesting. The video introduces TimescaleDB, an open-source extension that transforms Postgres into a specialized time-series database. And before you roll your eyes at yet another database layer, consider what time-series data actually is.
Most databases store state. Your bank balance. Your username. When things change, the old data gets overwritten. Time-series databases don't care about state—they care about change. Every stock tick, every sensor reading, every heartbeat gets captured and indexed by time.
As the presenter puts it: "Think of a standard database like a photo. It shows you exactly what is happening right now. But a time series database is more like a movie. It's a stream of events."
The problem is that as your movie gets longer, searching through it becomes a nightmare. If you have a billion rows of temperature data and need the average from last Tuesday, a standard table without time-aware partitioning ends up churning through vast amounts of irrelevant data. That's where specialized tools typically enter the picture.
TimescaleDB's pitch is simpler: don't leave Postgres. Just add this extension.
The InfluxDB Question
The obvious comparison is InfluxDB, another popular open-source time-series database. The presenter addresses this head-on, and I appreciate the honesty. InfluxDB is faster at low data cardinality. TimescaleDB performs better when complexity rises. InfluxDB is written in Rust, if that matters to you.
But here's the practical consideration: if you're already running Postgres—and if you're running any significant application, you probably are—TimescaleDB means you don't need to learn, maintain, and implement another technology. You keep your familiar SQL syntax, your existing relational schemas, and all your other Postgres extensions.
This is the kind of pragmatic engineering decision that doesn't make for exciting conference talks but saves actual time and money.
Hypertables and Compression
The technical meat of TimescaleDB centers on something called hypertables. These act as virtual tables that automatically slice your data into manageable chunks. The analogy the presenter uses: instead of throwing a billion daily newspapers into one giant room, you create a new small room for each week of the year. When you need last Tuesday's data, you just walk into the Tuesday room.
The result? Queries that are reportedly 350 times faster than standard Postgres. That's not a typo.
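The chunking idea is easy to see in miniature. Here's a hedged, pure-Python sketch of the "room per week" analogy—not TimescaleDB's actual implementation, just the concept: rows are routed into chunks keyed by ISO week, and a query visits only the chunks whose week overlaps the requested window instead of scanning every row.

```python
from datetime import datetime
from collections import defaultdict

# Toy "hypertable": chunks keyed by (ISO year, ISO week).
chunks = defaultdict(list)  # (year, week) -> list of (timestamp, value)

def week_key(ts: datetime) -> tuple:
    """Chunk key for a timestamp: its ISO (year, week)."""
    year, week, _ = ts.isocalendar()
    return (year, week)

def insert(ts: datetime, value: float) -> None:
    """Route each row into the chunk for its week."""
    chunks[week_key(ts)].append((ts, value))

def average(start: datetime, end: datetime) -> float:
    """Average over [start, end), pruning chunks outside the window."""
    total, count = 0.0, 0
    lo, hi = week_key(start), week_key(end)
    for key, rows in chunks.items():
        if not (lo <= key <= hi):
            continue  # chunk pruning: never touch irrelevant weeks
        for ts, value in rows:
            if start <= ts < end:
                total += value
                count += 1
    return total / count
```

The real extension does this with actual Postgres child tables, constraint exclusion, and time-based indexes, but the payoff is the same: query cost scales with the window you ask about, not with the total history stored.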
Compression is where things get clever. TimescaleDB uses delta encoding for time-series data. It stores the first timestamp in full, then only the gap between subsequent entries. If you're recording data at a fixed interval—say, once a minute—you don't need to store the full date and time for every single row. Just the delta.
"You don't actually need to store the full date and time for every single row. That's a lot of redundant text and numbers," the presenter explains. "Instead, Timescale uses delta encoding."
This is engineering 101—identify redundancy and eliminate it. But it's engineering 101 done systematically at scale.
Continuous Aggregates
The other major feature is continuous aggregates, which address a problem with traditional materialized views. Standard Postgres materialized views store data physically and refresh periodically, but they rerun the entire query every single time. If 99% of your historical data never changes, that's a lot of wasted computation.
Continuous aggregates compute only the changes, only when they happen. It's incremental materialization—update the 1% that changed, leave the 99% alone.
For real-time analytics on large datasets, this matters. A lot.
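Incremental materialization is another idea that fits in a few lines. Here's a hedged sketch—again the concept, not TimescaleDB's implementation: keep per-bucket running sums and counts, and when a new row arrives, touch only the bucket it lands in rather than recomputing the aggregate over all history.

```python
from collections import defaultdict

# Materialized aggregate: time bucket -> [running sum, running count].
agg = defaultdict(lambda: [0.0, 0])

def ingest(bucket: str, value: float) -> None:
    """Update only the one bucket this row belongs to (the 'changed 1%')."""
    state = agg[bucket]
    state[0] += value
    state[1] += 1

def mean(bucket: str) -> float:
    """Read the precomputed aggregate; no rescan of raw rows needed."""
    total, count = agg[bucket]
    return total / count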
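Incremental materialization is another idea that fits in a few lines. Here's a hedged, pure-Python sketch of the concept—not TimescaleDB's implementation: keep per-bucket running sums and counts, and when a new row arrives, touch only the bucket it lands in rather than recomputing the aggregate over all history.

```python
from collections import defaultdict

# Materialized aggregate: time bucket -> [running sum, running count].
agg = defaultdict(lambda: [0.0, 0])

def ingest(bucket: str, value: float) -> None:
    """Update only the one bucket this row belongs to (the 'changed 1%')."""
    state = agg[bucket]
    state[0] += value
    state[1] += 1

def mean(bucket: str) -> float:
    """Read the precomputed aggregate; no rescan of raw rows needed."""
    total, count = agg[bucket]
    return total / count
```

A traditional materialized view refresh is the opposite: throw away the whole result and rerun the query over every row, changed or not.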
The Broader Pattern
What interests me about this whole setup is the pattern it represents. We've spent the last decade adding complexity to our stacks, often for legitimate reasons but sometimes just because we could. Microservices. Service meshes. Multiple specialized databases. The tooling has gotten so complex that "platform engineer" is now a distinct job title.
Then here comes OpenAI, running one of the most demanding applications on the planet, using Postgres.
This doesn't mean NoSQL is a mistake or that specialized tools don't have their place. It means the default assumption—that scale requires exotic solutions—might be worth questioning more often than we do.
TimescaleDB isn't revolutionary technology. It's Postgres with some smart extensions for a specific use case. But "smart extensions for specific use cases" describes a lot of successful engineering. You don't always need to rearchitect everything.
The video also mentions pgvector, another extension that adds vector support to Postgres for AI applications. Again, same pattern: extend the reliable foundation rather than replace it.
Is every problem a Postgres problem? Of course not. But maybe more of them are than we've been led to believe. Especially if the alternative is maintaining five different database systems, each with its own operational overhead, failure modes, and expertise requirements.
OpenAI chose boring technology that scales. That should tell you something about what actually matters when the stakes are high.
— Mike Sullivan, Technology Correspondent
Watch the Original Video
I was using Postgres wrong this whole time
DevOps Toolbox
13m 30s
About This Source
DevOps Toolbox
DevOps Toolbox is a rapidly growing YouTube channel that has amassed over 101,000 subscribers in just six months. This platform is tailored for tech enthusiasts and professionals who are keen on advancing their skills in DevOps, command line interfaces, Tmux, Neovim, and related areas. By offering 'a byte of tech knowledge every Friday,' DevOps Toolbox has become a vital resource for those aiming to keep abreast of the dynamic tech environment.