Responsible AI

Responsible AI is about creating technology that aligns innovation with ethics to ensure fairness, accountability, and transparency. My work spans technology and civil society, with a focus on developing frameworks and strategies to mitigate bias, promote equity, and build trust in AI systems. While much of my industry work remains confidential, this page highlights key outreach efforts and collaborations that reflect my commitment to advancing Responsible AI.


Fellowship

As a two-time Fellow in AI Ethics & Governance at the Berkman Klein Center, I collaborated with global leaders to address pressing challenges in Responsible AI development. A highlight of this work was co-founding AI Blindspot, a practical tool designed to help organizations identify and mitigate biases in AI systems.

What began as a simple deck of cards evolved into a set of learning materials and interactive workshops used by design justice advocates and civil society organizations worldwide. These resources empower organizations to identify blind spots in algorithmic systems and foster conversations about fairness, accountability, and explainability in AI.


Mentoring

I was proud to contribute to the growing field of Responsible AI as an inaugural mentor with All Tech Is Human. This nonprofit brings together diverse stakeholders to build a better, more inclusive tech future. As a mentor, I worked with professionals and students to navigate challenges such as implementing ethical AI frameworks, addressing algorithmic bias, and planning career transitions into Responsible AI.

In addition, I was featured in their Responsible Tech Guide, which showcases career journeys and offers advice for people looking to get involved in Responsible AI, Trust & Safety, and other responsible tech fields. This mentorship work reflects my commitment to ensuring the future of AI is shaped by diverse and thoughtful leaders.