About Me

I work on the questions that emerge when AI moves faster than the organizations trying to use it: questions about risk, readiness, and what has to change beyond the technology itself.

My career has been a series of pivots that only make sense in retrospect. I started as a humanities major, earned a PhD in Epidemiology, and spent years in academic research, publishing work that shaped federal nutrition policy. Then I moved into tech — first at Indeed, where I founded its Responsible AI function, then at Spotify, where I lead Trust & Safety research serving 500M+ users.

The thread connecting all of it: I'm drawn to problems where systems meet people, where the technical and the human collide, and where the answer isn't just "build better technology" but "understand how organizations actually change."

As a Fellow at Harvard's Berkman Klein Center, I co-created AI Blindspot, a practical framework that helps organizations identify structural blind spots in AI systems. What started as a deck of cards became workshops used by design justice advocates and civil society groups worldwide.

These days, I'm focused on a question most AI strategies ignore: what happens to the workforce? Organizations race to adopt AI capabilities while underinvesting in the human systems — readiness, decision authority, governance — that determine whether those capabilities create value or chaos.

I write and speak about how institutions can get this right. If you're navigating these challenges, I'd welcome the conversation.

Let’s connect