Most organizations don't fail at AI because they lack the technology. They fail because they never built the governance to go with it.

I help governments, nonprofits, and foundations adopt AI in ways that hold up — to public scrutiny, to mission accountability, and to the communities they exist to serve.

Work with organizations including:

United Nations  ·  City of Portland  ·  Oregon Department of Education  ·  League of Oregon Cities  ·  National Policy Consensus Center  ·  Islamic Relief  ·  Indiana University Lilly Family School of Philanthropy

I'm Rafeel Wasif — a researcher, writer, and advisor who studies what happens to mission-driven organizations when AI starts making decisions that used to require human judgment. My work develops practical frameworks for AI governance that organizations can actually use: tools for auditing whether AI systems serve stated values, for identifying where automation erodes institutional capacity, and for building accountability infrastructure before something goes wrong, not after.

Assistant Professor of Public Administration · Mark O. Hatfield School of Government, Portland State University

Washington Post  ·  The Chronicle of Philanthropy  ·  The Conversation  ·  Fulbright Fellow  ·  $2.5M+ in Research Funding  ·  3 Books, Edward Elgar Publishing  ·  ARNOVA Emerging Scholar


AI adoption fails in mission-driven organizations in predictable ways. These three show up everywhere.

Algorithmic Mission Drift

Organizations don't lose their values to bad intent. They lose them to automated decisions that accumulate faster than anyone reviews them. What gets optimized gradually diverges from what the mission actually requires — and by the time it's visible, it's expensive to reverse.

Deliberative Atrophy

When AI handles more and more decisions, the human capacity to reason through those decisions atrophies. Organizations discover this at the worst possible moment: when something goes wrong and no one knows how to evaluate it without the system.

Algorithmic Discretion

Every AI system makes policy choices — about who gets served, which cases get flagged, what counts as success. These choices look technical. They are governance decisions expressed in code, and they need to be governed accordingly.


The Public Values Audit Matrix

The Public Values Audit Matrix (PVAM) is a structured diagnostic tool for evaluating whether the AI systems an organization is using — or considering — actually serve the public values it claims to hold. Applied in government and nonprofit governance contexts. Available as part of workshops and advisory engagements.

Books

How Mission-Driven Organizations Work with AI: A Cointelligence Framework

The theoretical and practical foundation of the Cointelligence framework — for scholars and practitioners in public administration, nonprofit management, and civic technology who need more than vendor talking points when making decisions about AI.

Nonprofit Collaborations in Diverse Communities (2024) · Understanding Muslim Philanthropy (2024)

A decade of empirical research on how organizations that operate under political pressure — stigmatized, racialized, or ideologically contested — build trust, sustain civic capacity, and maintain institutional legitimacy when the environment works against them. The same dynamics, it turns out, show up in AI governance.

Research & Policy Reports

Over $2.5 million in competitive funding from the Templeton Foundation, Islamic Relief, Indiana University's Lilly Family School of Philanthropy, the Muslim Legal Fund of America, and the Oregon Legislature has produced peer-reviewed articles, major national reports, and applied frameworks in use across government and nonprofit governance.

Published reports include the Muslim American Giving Report, the Muslim American Zakat Report, and the Pluralism in Muslim American Philanthropy Report — among the most comprehensive empirical analyses of faith-based civic participation in the United States.

Thought Leadership & Public Commentary

The research doesn't stay in journals.

Writing for the Washington Post, The Chronicle of Philanthropy, and The Conversation has reached practitioners, policymakers, and civic leaders navigating the same questions the research addresses — but who need answers they can act on next week, not in three years when a book comes out.

Selected pieces:

"Pakistan is seeking flood assistance — but not from foreign NGOs"

Washington Post · 2022 · with Anita Prakash

On how governments selectively accept international civil society and what it means for organizations trying to serve across borders.

"How Muslim Americans meet their charitable obligations"

The Chronicle of Philanthropy & The Conversation · 2022

On the empirical reality of faith-based giving, and why philanthropy infrastructure built around a narrow donor profile overlooks significant forms of civic participation.

"US Muslims gave more to charity than other Americans in 2020"

The Conversation · 2021

Data-driven public scholarship on philanthropic behavior during crisis — with implications for how foundations and governments understand civic capacity in underserved communities.

Peer-Reviewed Research

Published in Nonprofit and Voluntary Sector Quarterly, Public Administration, Journal of Public Affairs Education, Nonprofit Management and Leadership, VOLUNTAS, Voluntary Sector Review, Nonprofit Policy Forum, and Michigan Technology Law Review.

If something here resonates — the research, the frameworks, the public writing — and you're planning a convening, navigating an AI governance decision, or looking for a speaker who can make the work land with a non-academic audience, I'm happy to talk.