A Day with Databricks: Highlights from the Data + AI World Tour in Atlanta
A couple of weeks ago, the Databricks Data + AI World Tour made a stop in Atlanta, where I live. Armed with a large cup of coffee and the invitation the Databricks team had kindly sent me, I showed up at the Courtland Hotel downtown nice and early, ready to spend the day learning and connecting with other local data folks.
These kinds of vendor events have become increasingly common lately, and I’d understand if you approached them with a healthy dose of skepticism. It’s easy to dismiss them as marketing exercises, yet that would miss their real value. Modern data platforms like Databricks are vast ecosystems that span ingestion, transformation, governance, analytics, and AI. No one person can fully grasp all that’s possible within them. These events are an opportunity to step back and see the full scope of a platform: how it’s evolving, how others are using it, and where new opportunities lie.
And to Databricks’ credit, the event was impressively organized. Sessions were thoughtfully curated for data engineers, analysts, ML practitioners, and leaders alike. The crowd was energetic, the demos were polished, and the vibe was genuinely collaborative. I left feeling that Databricks is not just expanding its product suite, but its vision of what a modern data platform can be.
What Stood Out
Over the course of the day, I attended seven sessions, explored the Expo booths, and had many hallway conversations about the Databricks ecosystem. Rather than recap every talk, I’ll focus on a few highlights that resonated with me. These aren’t necessarily the flashiest features, but the ones that best illustrate where Databricks is heading and why it matters.
Databricks One: Bringing Business Users into the Platform
Databricks One represents a major philosophical shift: it’s an interface designed for business users. Traditionally, Databricks has been the realm of data engineers and scientists: powerful, but technical. Databricks One changes that by offering a more accessible, unified entry point where non-technical users can explore, ask questions, and interact with data without having to touch a notebook or understand Spark.
This is significant because it addresses one of the biggest barriers in enterprise data work: the gap between data producers and consumers. Every organization struggles to make analytical insights accessible beyond the data team. By bringing business-friendly interfaces directly into the data platform, Databricks is taking a real step toward closing that gap. It’s not just about self-service analytics, but about eliminating the friction between where data lives and where decisions are made.
AI/BI Dashboards: The Platform as Its Own BI Tool
The new AI/BI Dashboards feature builds on that same theme. It gives Databricks users the ability to create dashboards directly within the platform, powered by the same data that drives models and transformations.
Some people I talked to mentioned that they use these dashboards as a quick way to visualize data before investing in a full BI implementation elsewhere. In other words, they serve as a fast, iterative step between developing a dashboard and putting it into production. And that’s exactly what’s exciting here: Databricks Dashboards will probably not replace established BI tools like Power BI or Sigma, but they make it possible for analysts and engineers to explore and share insights quickly without context-switching to another environment.
Agent Bricks: Making Data Agents a Reality
The new Agent Bricks framework was one of the most intriguing announcements. It allows teams to build AI agents that can act on their data safely, securely, and natively within Databricks.
What’s novel here is not just that you can build an agent, but that you can do so on top of governed, versioned, production-grade data. That’s been one of the missing links in the “AI agent” story so far: most tools can connect to APIs or scraped data, but few operate on structured, trustworthy enterprise data in a way that respects permissions and lineage. Agent Bricks could make it possible to build intelligent assistants that operate responsibly inside your organization’s data boundary, which is exactly where most companies want them. I was impressed by the demo showing how quick and easy it is to build agents and leverage them to do valuable data work.
Lakebase: Bridging Analytical and Operational Worlds
Lakebase expands the lakehouse concept into operational territory. Historically, Databricks has been an analytical powerhouse, not an OLTP system. But Lakebase (based on Databricks’ acquisition of Neon earlier this year) introduces capabilities for serving low-latency, operational workloads on the same data that powers analytics.
This might sound like a technical detail, but it addresses one of the most persistent divides in data architecture: the separation between analytical and operational data. I’ve long been interested in how analytical data can be made available for operational use – not just read-only dashboards, but feeding live applications and workflows. If Lakebase fulfills its promise, it could dramatically simplify architectures by removing the need for redundant “sync” pipelines that push warehouse data back into production systems.
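To make that pattern concrete, here is a minimal sketch of the kind of “sync” pipeline Lakebase aims to make unnecessary: a scheduled job that copies rows from an analytical table back into an operational store so a live application can read them. The table names (`customer_scores`, `live_scores`) are hypothetical, and sqlite3 stands in for both systems; this is not a Databricks API, just an illustration of the redundant hop.

```python
import sqlite3


def sync_to_operational(warehouse: sqlite3.Connection,
                        app_db: sqlite3.Connection) -> int:
    """Copy fresh scores from the analytical store into the operational
    store -- the redundant reverse-ETL hop described above."""
    rows = warehouse.execute(
        "SELECT customer_id, churn_score FROM customer_scores"
    ).fetchall()
    # Upsert so repeated runs of the sync job stay idempotent.
    app_db.executemany(
        "INSERT OR REPLACE INTO live_scores (customer_id, churn_score) "
        "VALUES (?, ?)",
        rows,
    )
    app_db.commit()
    return len(rows)


# Demo: in-memory databases standing in for the warehouse and the app DB.
warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE customer_scores (customer_id INTEGER, churn_score REAL)"
)
warehouse.executemany(
    "INSERT INTO customer_scores VALUES (?, ?)", [(1, 0.82), (2, 0.15)]
)

app_db = sqlite3.connect(":memory:")
app_db.execute(
    "CREATE TABLE live_scores (customer_id INTEGER PRIMARY KEY, "
    "churn_score REAL)"
)

synced = sync_to_operational(warehouse, app_db)
```

Every such pipeline is extra code to schedule, monitor, and keep consistent with the warehouse schema; serving the operational reads directly from the same governed data would remove that entire layer.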
Unity Catalog: The Unifying Fabric
Finally, Unity Catalog continues to evolve as the centerpiece of the Databricks ecosystem. What I appreciate most about it is its openness: it’s not limited to Databricks-native assets. You can catalog data wherever it lives. That’s an important acknowledgment of reality. No organization has all its data in Databricks. Real-world data governance needs to be inclusive, cross-platform, and pragmatic. By allowing external assets to coexist in the same governance framework, Unity Catalog reflects a mature understanding of how data ecosystems actually function. For someone like me, who leads a Data Governance Practice, this is refreshing to see: a platform-based governance solution that meets organizations where they are, rather than forcing them into a proprietary mold.
Closing Thoughts
What impressed me most about the day wasn’t just any single feature: it was the coherence of the overall direction. Databricks is steadily evolving from a data engineering workbench into a comprehensive data and AI platform that serves everyone: engineers, analysts, data scientists, and business users alike. Each new feature contributes to a vision of a truly unified environment for working with data.
And that’s the real takeaway for me. The future of data work isn’t just faster queries or smarter models: it’s tighter integration, better governance, and a more seamless bridge between the people who build data systems and the people who use them.