My first panel on nonprofit AI strategy

I participated in my first panel earlier in April and I had a blast! We were at Henry Street Settlement discussing how workforce development organizations can navigate AI—both as a tool for their own work and as a massive shift they need to prepare workers for.

One thing that struck me during our conversation: workforce development organizations are surfing a double wave 🌊 🌊. They're wondering how AI can improve their own workflows (automating client outreach, flagging good training candidates, improving retention), while ALSO building the curriculum and the advocacy that will influence how workers meet a more AI-influenced labor market. That's a lot!

My big takeaway for nonprofit staff: AI is a tool, not magic. It requires breaking every problem down into stages, identifying the actual bottlenecks, and thinking critically about what's an AI job vs. a people job. Not everything needs automation—most decisions call for human judgment, intuition, and expertise.

Also important to name: the environmental costs (data centers consuming massive amounts of water and energy) and the growing evidence that AI can impede learning and cognitive development, especially for young people. We need to approach these tools with our eyes open and our voices raised to protect those most vulnerable.

It was great to be in conversation with other leaders on this topic in NYC: Greg Morris from NYCETC raised the policy and priority gaps at the city and state levels; Neha Lalwani from WPTI discussed expanding access to trainings and learning opportunities like their AI Learning Lab; and Frances Steele from Pursuit led the group in a great use-case exercise to identify high-priority pain points that could benefit from AI automation.

I look forward to putting more of my thoughts down on paper over the next two months and hope you will find them useful!

My key takeaways

AI is a tool in a toolkit, not magic. We still need a lot of critical thinking and problem solving to figure out how it fits into workflows and program experiences for clients.

Also, we have to be mindful of underserved populations and protect them from inherent risks like privacy breaches, bias, ethical violations, and environmental and societal externalities.

Next

92% of Nonprofits Use AI. Only 7% See Major Impact. Here's Why That Makes Sense to Me.