Insights on Responsible AI Adoption from the Urban Institute

How can trust-based research organizations balance innovation and risk mitigation?

In the June 2025 Applied AI (XD131) course, Graham MacDonald, Chief Information Officer (CIO) of the Urban Institute, shared practical insights from developing the Urban Institute's first AI policy while navigating the rapidly evolving generative AI (gen AI) landscape.

Exchange Design uses NotebookLM as a research and thinking partner. In addition to the slides and content included in this post courtesy of Graham MacDonald, you can also watch an AI-generated video overview below or listen to an AI-generated podcast of the session.

The Urban Institute is a policy research organization known for its high-quality, trusted evidence in support of changemakers. Urban recently developed and made publicly available a Generative AI Policy that seeks to provide guidelines to mitigate risks and protect Urban's greatest asset: its reputation for objective, independent work that reflects the highest ethical and quality standards.

Slide from XD131 introducing Graham MacDonald and the Urban Institute

Developing The Urban Institute's Cautious Yet Permissive AI Policy

As of the version updated on March 31, 2025, the policy emphasizes caution, particularly concerning external products and any "rights-impacting" uses, such as performance reviews, which directly affect staff careers and salaries. It takes a more permissive stance, however, on third-party Gen AI tools, allowing their use as long as no confidential information is involved.

A key recommendation is to always check AI output for accuracy and to consult Urban's internal Gen AI knowledge base when questions arise. Team discussions are also highly encouraged when using Gen AI for the first time, fostering open dialogue about both risks and opportunities.

Slide from XD131 providing an overview of gen AI at the Urban Institute

Create A Deliberate Policy Development Process

MacDonald shared how developing this organization-wide policy was a multi-year process: a 2024 pilot of eight AI tools with 50 pilot testers, followed by a general rollout across the organization in 2025. As CIO, his work included activities such as:

  • Board presentations.

  • Pilot programs with eight different Gen AI tools, leading to the eventual rollout of six approved tools.

  • Feedback from both senior and junior advisory boards.

  • Surveys to understand staff usage and needs, with pilot participants required to share their learnings.

  • Strict guardrails during the pilot phase.

However, MacDonald also highlighted that the organization made a conscious decision at the executive level to be more permissive and experimental, focusing only on the most critical risks to avoid over-complicating the rules and stifling experimentation.

The rules were less permissive, however, when applied to people and careers. Urban implemented specific rules for Gen AI in HR: employees can use Gen AI to help write their self-reviews, but supervisors cannot use it to directly write the reviews of those they manage. AI can provide feedback on a draft, but the core content must be human-generated.

Slide from XD131 providing an 18-month overview of the development process

Foster a Culture of Experimentation and Shared Learning

Beyond policies, the Urban Institute actively encourages experimentation with non-confidential data. They foster an environment where useful new tools can be proposed for organizational approval. A significant focus is placed on shared learning, recognizing that many recent graduates already possess extensive experience with AI tools.

To facilitate this, the Urban Institute leverages AI transcription tools like Zoom AI Companion and Microsoft Teams Premium. Staff are required to record tips and best practices, which are then uploaded to a central Box hub. This has helped create an enterprise-level AI search capability across all recordings and transcripts, making it easy for users to find answers and learn from colleagues' experiences. Early results indicate that most users now prefer the AI search over watching full videos.

Slide from XD131 summarizing key policy elements of gen AI adoption at Urban Institute

Embrace Continuous Learning and Strategic Pilots

MacDonald stressed that everyone is still learning about Gen AI due to the rapid pace of model updates. He urged individuals to spend time experimenting with different AI tools and not to give up after the first try, as models are continuously improving.

He strongly advocates for team discussions on the risks and opportunities of Gen AI, suggesting a series of guiding questions to narrow down broad concerns to specific use cases. A key recommendation is to create strategic pilots of AI tools in safe, controlled environments. This not only serves as a professional development opportunity but also allows organizations to test new efficiencies, such as dramatically reducing the time for literature reviews from 80 hours to 4-8 hours. Such pilots can demonstrate tangible results to those less enthusiastic about AI, connecting innovation to the organization's strategic vision and competitiveness.

Key areas for these controlled experiments include:

  • Become a Gen AI expert in your discipline

  • Learn how to use Gen AI in programming

  • Learn to use literature review and deep research tools well

  • Learn to use Gen AI to improve your communications (through feedback)

  • Think openly about how things could be completely different if Gen AI makes a task radically easier

Slide from XD131 with Graham MacDonald recommendations on using gen AI at work

Addressing Common Concerns: Privacy, Adoption, and Job Roles

During the discussion, Graham MacDonald addressed several participant questions:

  • Data Privacy of AI Meeting Recordings: He clarified that Urban's internal recordings are kept internal with strict security protocols. For external meetings, it's crucial to check organizational and personal settings, as not all AI transcription tools are equally secure.

  • Safe Controlled AI Experiments for Non-Technical Users: He suggested starting with publicly available data, then fake data, and finally limited, non-sensitive organizational data only after a security review of the tool.

  • Gamifying AI Adoption: MacDonald expressed enthusiasm for the idea of "gamifying" AI adoption by having people evaluate tasks done with and without AI, creating healthy competition and encouraging experimentation.

  • Rapid Pace of AI Development: The Urban Institute manages this challenge by incentivizing their learning library, benefiting from being slightly behind the cutting edge (allowing them to learn from early adopters), leveraging vendor presentations, and encouraging internal piloting of promising new technologies.

  • Supporting Smaller Nonprofits: He advised leveraging networks of technical professionals (like the "CIO for Good" group) and embracing the advantage of being slightly behind to utilize already vetted tools.

  • Impact of AI on Job Roles: Graham MacDonald stated that the Urban Institute's approach, supported by their CEO, focuses on improving jobs and exploring new possibilities, rather than solely on cost-cutting. They prioritize open communication and transparency with staff to ease concerns about AI investments.

In conclusion, the Urban Institute's journey with generative AI highlights the importance of a deliberate policy development process, continuous staff learning, strategic experimentation, and a commitment to open dialogue to harness AI's potential for mission advancement and digital transformation.

If you would like to learn more about Graham MacDonald’s insights on generative AI, consider signing up for his biweekly LinkedIn Newsletter: AI Snippets.




