About Me
I work on the questions that emerge when AI adoption outpaces organizational strategy: questions about governance, incentives, and what has to change beyond the technology itself.
My career has been a series of pivots that only make sense in retrospect. I started as a humanities major, earned a PhD in Epidemiology, and spent years in academic research publishing work that shaped federal nutrition policy. Then I moved into tech — first at Indeed, where I founded its Responsible AI function, then at Spotify, where I lead Trust & Safety research serving 500M+ users.
The thread connecting all of it: I'm drawn to problems where systems meet people. Where the technical and the human collide. Where the answer isn't just "build better technology" but "understand how organizations actually change."
As a Fellow at Harvard's Berkman Klein Center, I co-created AI Blindspot, a practical framework helping organizations identify structural blind spots in AI systems. What started as a deck of cards became workshops used by design justice advocates and civil society groups worldwide.
These days, I'm focused on what happens when you look past the AI tools themselves. Some organizations have adopted AI widely but haven't changed how they work. Others are still figuring out where to start. In both cases, the sticking point is rarely what people think: it's not knowledge or tooling, but incentives, governance, and how the organization is actually designed. I see this most clearly in institutions where professional identity and shared governance shape how change happens — universities, healthcare, professional services — but the pattern is broader than any one sector.
I write and speak about why AI initiatives stall and what it takes to move past the familiar failure modes. If you're navigating these questions, I'd welcome the conversation.