At Harms Advisory Group, we protect the privacy of our clients. All case study Calibrations are anonymized to protect our partners' brands. What you’ll read here reflects real interventions, systemic truths, and outcome shifts, without compromising the relationships that made them possible.
“AI is not a feature, it is a force function.”
-Tisha Hartman, Founder and CEO, Harms Advisory Group
A Canadian equity-centered venture with fewer than 25 employees and an annual budget just under $2M was experiencing rising national demand from women, Indigenous, and newcomer entrepreneurs. That demand revealed critical strain in its manually run intake systems and time-limited advising capacity. For years, this not-for-profit had handled client intake one project at a time: manually, relationally, and in person. But as demand grew, so did the strain on staff and systems.
They were ready to modernize but not at the cost of empathy.
With a grant on the table, they needed a partner to articulate the case for change: to build a vision of how AI could meet operational needs without compromising the humanity of their work. Because their client base is a marginalized demographic, extra care had to be taken around sensitivity, protection, and bias prevention.
The design of the AI program had to fit the demographic context of their clients, and that required special consideration. The resulting platform, housed within their existing intake portal, was designed to reduce bottlenecks while preserving high-value advising where it mattered most.
By automating only the right steps, the system would protect client dignity while freeing staff time to advise on vetted applications.
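To make "automating only the right steps" concrete, here is a minimal, purely illustrative sketch of that principle: routine completeness checks are automated, while every substantive decision is always routed to a human advisor. The field names and structure below are our assumptions for illustration, not the client's actual system or schema.

```python
from dataclasses import dataclass
from typing import List

# Assumed intake fields -- illustrative only, not the client's real schema.
REQUIRED_FIELDS = {"applicant_name", "venture_stage", "region", "summary"}

@dataclass
class TriageResult:
    application_id: str
    complete: bool
    missing_fields: List[str]
    needs_human_review: bool

def triage(application: dict) -> TriageResult:
    """Automate only the routine completeness check.

    The substantive judgment on every application stays with a
    human advisor: nothing is auto-approved or auto-rejected.
    """
    missing = sorted(REQUIRED_FIELDS - application.keys())
    return TriageResult(
        application_id=str(application.get("id", "unknown")),
        complete=not missing,
        missing_fields=missing,
        needs_human_review=True,  # always True by design
    )
```

The design choice worth noticing is that `needs_human_review` is hard-coded: the system can surface vetted, complete applications faster, but it structurally cannot replace the advising relationship.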
This was a Scale-tier Engagement, delivered through our AI & Intelligent Operations Roadmap offering.
Harms authored the grant-ready strategy and language that bridged nonprofit sensibilities with funder imperatives.
In a move toward modernization, here is where we believe introducing the AI platform at the intake level would benefit this organization. Can you see areas where similar infrastructural improvements could help your own business scale?
If we stop designing for efficiency alone and begin designing AI systems for dignity and discernment, we enter a whole new terrain. The challenge isn’t just scale—it’s scaling with soul.
What this team needed wasn’t a tool that judged worthiness but one that noticed patterns of alignment. This wasn’t about automating more, but filtering more accurately and in minutes instead of days or weeks. The ideal solution was a system that could handle the volume of complex analysis without flattening the humanity in each proposal.
While the strategy prioritized humanity, Harms also engineered the technical foundation to reflect the same level of care. This wasn’t an out-of-the-box solution; it was a system designed for trust.
The system’s ethics weren’t abstract concepts; they were built into the code at every step of the design.
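As one illustration of what "ethics enforced in code" can mean in practice, a bias-prevention rule can be made structural rather than merely documented: protected attributes are stripped before any pattern-matching step can see them, and a guardrail fails loudly if they leak through. This sketch is hypothetical; the attribute names are our assumptions, not the deployed system's schema.

```python
# Hypothetical list of attributes the scoring logic must never see.
PROTECTED_ATTRIBUTES = {"gender", "ethnicity", "immigration_status", "age"}

def safe_features(application: dict) -> dict:
    """Return a copy of the application with protected attributes removed,
    so no downstream analysis step can condition on them."""
    return {k: v for k, v in application.items()
            if k not in PROTECTED_ATTRIBUTES}

def assert_no_protected(features: dict) -> None:
    """Guardrail: raise if protected data reaches the model.

    The ethical rule is enforced in code, not just stated in policy.
    """
    leaked = PROTECTED_ATTRIBUTES & features.keys()
    if leaked:
        raise ValueError(
            f"Protected attributes reached the model: {sorted(leaked)}"
        )
```

Calling `assert_no_protected` at the boundary of every scoring step means a violation stops the pipeline instead of silently biasing it.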
At the time of writing, the grant and proposal for full system deployment are under review. But the strategy is already changing the conversation with funders, staff, and stakeholders alike.
Some fear AI. Some run towards it. We believe in neither panic nor hype—but precision.
Used well, AI becomes an amplifier of discernment, not a shortcut. We know that AI is going to be a major feature of the future of business.
Especially in equity-driven work, it must be handled with care, oversight, and respect. Done right, AI becomes an operational multiplier and an ethical ally.
Time-to-shift: 6-12 months
Author: Jesse Harms, President
Human-led. AI-assisted. Always accountable.
“It’s about building an execution system that thinks with you like a second nervous system, embedded in your operations—fast, responsive, and deeply contextual.”
-Tisha Hartman, Founder and CEO, Harms Advisory Group
#Tier(s): Scale
#Engagement Name(s): AI & Intelligent Operations Roadmap
#Sector(s): Nonprofit, Equity-Focused, Economic Empowerment
#Competencies: AI Strategy, Process Design, Ethical Systems, Grant Alignment, Stakeholder Readiness
#Inflection Point(s): Scaling Pain, Systemic Bottlenecks, Trust Vulnerability